
Webm: MySQL database performance web monitor

September 28, 2014 · Architecture, MySQL, performance

You can download webm from GitHub: https://github.com/ylouis83/webm

webm: a web-based key performance monitor for MySQL

webm displays key metric graphs on a web page. It was built with JavaScript and mysqlmon (a MySQL data-collection tool written by AnySQL).

Requirements:

Linux 5+, PHP 5, and an Apache server

You can also run the tool on Windows (install XAMPP).

webo, an Oracle web monitoring tool, is coming soon 🙂

Logging in to the webm system (screenshot: webm_login)

MySQL redo/binlog bytes written per 10 seconds (screenshot: webm1)

MySQL inserts/updates (few deletes) per 10 seconds (screenshot: webm2)

MySQL selects (QPS) per 10 seconds (screenshot: webm3)
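
The graphs are driven by counters sampled every 10 seconds, with the delta between two samples giving the rate. As an illustrative sketch (not necessarily webm's actual collector), the raw numbers can come from standard MySQL commands:

-- Sampled every 10 seconds; rate = (current - previous) / 10.
SHOW GLOBAL STATUS LIKE 'Com_select';            -- feeds the select (QPS) graph
SHOW GLOBAL STATUS LIKE 'Com_insert';            -- insert rate
SHOW GLOBAL STATUS LIKE 'Com_update';            -- update rate
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written'; -- redo bytes written
SHOW MASTER STATUS;                              -- binlog position; its delta approximates binlog bytes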

Some thoughts on MySQL HA architecture

August 12, 2014 · Architecture, MySQL

Some tips for MySQL architecture

Read: JKDB-mysql-arc

Advanced Pagination for MySQL

July 28, 2014 · MySQL, performance

After reading Ye Jinrong's article on MySQL pagination, together with a PDF Yahoo published a while back, here are some thoughts of my own.

Ye's article uses an INNER JOIN to cut down the pages scanned, i.e. to reduce the so-called lookups back to the table. For example:

SELECT * FROM `t1` INNER JOIN (SELECT id FROM `t1` ORDER BY id DESC LIMIT 935500, 10) t2 USING (id);

The subquery works directly on id instead of scanning whole rows: it pulls out the qualifying ids first, and the join then fetches only those rows by id. This avoids scanning pages that are not needed.

This is still not optimal, though; a range condition on id narrows the scan further. For example, to page 100 records at a time:

$page_size = 100;
select * from t where id > 99 order by id asc limit $page_size;
select * from t where id > 199 order by id asc limit $page_size;

Avoid LIMIT M,N wherever possible: when the offset M is large, MySQL still reads and throws away the first M rows, which is very inefficient, so do not use an offset to reach a specific row.
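
You can see the difference with EXPLAIN (illustrative; the exact plans depend on your data, indexes, and server version):

-- Offset pagination: MySQL reads and discards the first 935500 rows.
EXPLAIN SELECT * FROM `t1` ORDER BY id DESC LIMIT 935500, 10;

-- Index-driven subquery: the derived table walks only the id index.
EXPLAIN SELECT * FROM `t1`
INNER JOIN (SELECT id FROM `t1` ORDER BY id DESC LIMIT 935500, 10) t2 USING (id);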

There is a catch when paginating on a column that is not uniquely indexed: suppose column1 contains 11 rows with the value 100. Then the minimum value you capture after LIMIT N may still be that same repeated value. How do you handle that? An example:

Say we page 10 records at a time:

select * from t  order by column1 desc  limit 10

Note that the minimum value fetched here is still 100 (11 consecutive 100s), which corrupts the next page. How do we handle it?

Yahoo's solution is elegant: order by an extra column as a tie-breaker, e.g. the PK or a unique index key:

select * from t order by column1 desc, id desc limit 10;   -- first page
select * from t where column1 <= minvalue_col1 and (id < minvalue_id or column1 < minvalue_col1) order by column1 desc, id desc limit 10;   -- second page

This guarantees uniqueness, so no page ever repeats rows. The idea: add a unique extra column, capture its boundary value on the current page, and combine it with the range column to drive the pagination.

The SQL can be optimized further into:

SELECT m2.* FROM t m1, t m2  WHERE m1.id = m2.id  AND m1.column1 <= minvalue_col1
AND (m1.id < minvalue_id OR m1.column1 < minvalue_col1)  ORDER BY m1.column1 DESC, m1.id DESC  LIMIT 10; 

Core idea: filter on the extra column and scan by id, avoiding a mass of lookups back to the table. You end up scanning only as many rows as you fetch (within the page).
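
As a concrete walk-through with hypothetical boundary values (assume page 1 ends with column1 = 100, id = 87654):

-- Page 1: establish the boundary.
select * from t order by column1 desc, id desc limit 10;
-- Suppose the last row returned has column1 = 100 and id = 87654.

-- Page 2: only rows that sort strictly after that boundary,
-- i.e. column1 < 100, or column1 = 100 with a smaller id.
SELECT m2.* FROM t m1, t m2
WHERE m1.id = m2.id
  AND m1.column1 <= 100
  AND (m1.id < 87654 OR m1.column1 < 100)
ORDER BY m1.column1 DESC, m1.id DESC
LIMIT 10;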

A MySQL replication case

September 10, 2013 · MySQL, replication

A colleague recently handled a MySQL replication error; posting it here for reference.

Replication broke with the following error:

Last_Errno: 1267
Last_Error: Error 'Illegal mix of collations (utf8_general_ci,COERCIBLE) and (latin1_swedish_ci,IMPLICIT) for operation 'concat'' on query. Default database: 'inshop_app'. Query: 'INSERT INTO inshop_app.app_sms_order_info (ORDER_CODE, ORDER_TIME, ORDER_AMOUNT, ORDER_CONTENT, BUY_NUM, ORDER_STATUS, MERCHANT_ID, CREATE_TIME, UPDATE_TIME, APP_TYPE, pay_time, remark) 
        VALUES( NAME_CONST('my_order_code',_utf8'SBY130830010708_F0' COLLATE 'utf8_general_ci'),NOW(),'0','1次', NAME_CONST('my_sms_num',1125000),'1', NAME_CONST('my_merchant_id',10708),NOW(),NOW(),'2',NOW(),CONCAT ('钻展赠送:', NAME_CONST('my_sms_num',1125000)))'

Root-cause analysis:

On the master, the SQL executed as:

INSERT INTO inshop_app.app_sms_order_info (ORDER_CODE, ORDER_TIME, ORDER_AMOUNT, ORDER_CONTENT, BUY_NUM, ORDER_STATUS, MERCHANT_ID, CREATE_TIME, UPDATE_TIME, APP_TYPE, pay_time, remark) 
        VALUES( 'SBY130830010708_F0',NOW(),'0','1次', 1125000,'1', 10708,NOW(),NOW(),'2',NOW(),CONCAT ('钻展赠送:', 1125000))

The SQL itself is fine and succeeded. But when MySQL writes the binlog for a statement executed inside a stored routine, it tags each constant with the NAME_CONST() function, and that is exactly where replication breaks:

CONCAT ('钻展赠送:', NAME_CONST('my_sms_num',1125000))

Here, '钻展赠送:' is a UTF-8 string, while the numeric constant produced by NAME_CONST('my_sms_num',1125000) is implicitly typed with the latin1 character set. The outer CONCAT() cannot join two different character sets, hence the error.

The following tests confirm the analysis.

Without NAME_CONST() tagging the constant, i.e. as executed on the master, it succeeds:

09:29:06 inshop_app> select concat('钻展赠送',123);
+----------------------------+
| concat('钻展赠送',123)     |
+----------------------------+
| 钻展赠送123                |
+----------------------------+
1 row in set (0.00 sec)

With NAME_CONST() tagging the constant, i.e. as executed on the slave, it fails:

09:25:17 inshop_app> select concat('钻展赠送',name_const('colname',123));
ERROR 1267 (HY000): Illegal mix of collations (utf8_general_ci,COERCIBLE) and (latin1_swedish_ci,IMPLICIT) for operation 'concat'

This is exactly the replication error.

When does MySQL automatically add NAME_CONST()?

Test 1: a direct INSERT

11:27:32 test> insert into lengzhenguo_mha(c3,c4) values(1,'a'),('2','b');
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0

Binlog content:

#130909 11:28:35 server id 2009  end_log_pos 469        Query   thread_id=10    exec_time=0     error_code=0
SET TIMESTAMP=1378697315/*!*/;
insert into lengzhenguo_mha(c3,c4) values(1,'a'),('2','b')
/*!*/;

Test 2: a simple stored procedure

13:16:42 test> create procedure p_test()
    -> begin
    -> insert into lengzhenguo_mha(c3,c4) values(10,'abc'),('20','xyz');
    -> commit;
    -> end
    -> $
Query OK, 0 rows affected (0.00 sec)

13:17:38 test> call p_test();
Query OK, 0 rows affected (0.00 sec)

Binlog content:

#130909 13:18:21 server id 2009  end_log_pos 328        Query   thread_id=12    exec_time=0     error_code=0
use `test`/*!*/;
SET TIMESTAMP=1378703901/*!*/;
insert into lengzhenguo_mha(c3,c4) values(10,'abc'),('20','xyz')
/*!*/;

Test 3: a stored procedure with parameters, similar to bind variables

13:22:43 test> CREATE procedure p_test_2 (i bigint, j varchar(30))
    -> begin
    ->  insert into lengzhenguo_mha(c3,c4) values(i,j);
    -> commit;
    -> end
    -> $
Query OK, 0 rows affected (0.00 sec)

13:23:16 test> call p_test_2(100,'dba');
Query OK, 0 rows affected (0.00 sec)

13:25:10 test> call p_test_2('500','dba');
Query OK, 0 rows affected (0.00 sec)

Binlog content:

#130909 13:23:32 server id 2009 end_log_pos 612 Query thread_id=12 exec_time=0 error_code=0

SET TIMESTAMP=1378704212/*!*/;
insert into lengzhenguo_mha(c3,c4) values( NAME_CONST('i',100), NAME_CONST('j',_latin1'dba' COLLATE 'latin1_swedish_ci'))
/*!*/;
#130909 13:25:15 server id 2009  end_log_pos 1226       Query   thread_id=12    exec_time=0     error_code=0
SET TIMESTAMP=1378704315/*!*/;
insert into lengzhenguo_mha(c3,c4) values( NAME_CONST('i',500), NAME_CONST('j',_latin1'dba' COLLATE 'latin1_swedish_ci'))
/*!*/;

Note: '500' had already been converted to a numeric value when it was written to the binlog.

Known workarounds:

Workaround 1 (recommended): pass the value as a string rather than a number:

09:25:27 inshop_app> select concat('钻展赠送',name_const('colname','123'));
+----------------------------------------------------+
| concat('钻展赠送',name_const('colname','123'))     |
+----------------------------------------------------+
| 钻展赠送123                                        |
+----------------------------------------------------+
1 row in set (0.00 sec)

Workaround 2: convert the character set first:

09:56:32 inshop_app> select concat('钻展赠送',convert(name_const('colname',123) using utf8));
+----------------------------------------------------------------------+
| concat('钻展赠送',convert(name_const('colname',123) using utf8))     |
+----------------------------------------------------------------------+
| 钻展赠送123                                                          |
+----------------------------------------------------------------------+
1 row in set (0.00 sec)
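
Applying workaround 1 inside a stored procedure, as a minimal sketch (t_demo and p_demo are hypothetical names, not from the original case): declaring the parameter as a utf8 string means the binlog's NAME_CONST() records a character-set-tagged string instead of a number that gets coerced to latin1.

-- Hypothetical demo of workaround 1: pass the number as a utf8 string.
CREATE TABLE t_demo (remark VARCHAR(100)) DEFAULT CHARSET=utf8;

DELIMITER $
CREATE PROCEDURE p_demo(IN my_sms_num VARCHAR(20) CHARACTER SET utf8)
BEGIN
  -- CONCAT now joins two utf8 values; the binlog records
  -- NAME_CONST('my_sms_num', _utf8'1125000' COLLATE 'utf8_general_ci'),
  -- which the slave replays without a collation mix.
  INSERT INTO t_demo (remark) VALUES (CONCAT('钻展赠送:', my_sms_num));
  COMMIT;
END$
DELIMITER ;

CALL p_demo('1125000');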

Some unconventional InnoDB recovery in MySQL

July 9, 2013 · Internals, MySQL

The data-recovery incident at Xiachufang (下厨房) has been a hot topic lately, so I took the chance to test the Percona Data Recovery Tool for InnoDB. The tests confirm that recovering from some operations with this tool is rather difficult.

Let's run a test on a small table:

mysql> select * from  customer;
+-------------+----------+------------+-----------+-------------------+------------+--------+---------------------+---------------------+
| customer_id | store_id | first_name | last_name | email             | address_id | active | create_date         | last_update         |
+-------------+----------+------------+-----------+-------------------+------------+--------+---------------------+---------------------+
|           1 |        1 | liu        | yang      | www.yihaodian.com |        141 |      1 | 2013-07-09 15:34:23 | 2013-07-09 15:34:23 |
+-------------+----------+------------+-----------+-------------------+------------+--------+---------------------+---------------------+
1 row in set (0.00 sec)

mysql> TRUNCATE TABLE customer;
Query OK, 0 rows affected (0.01 sec)

mysql> exit
[root@db-42 tmp]# wget https://launchpad.net/percona-data-recovery-tool-for-innodb/trunk/release-0.5/+download/percona-data-recovery-tool-for-innodb-0.5.tar.gz

..


2013-07-09 15:41:03 (11.3 KB/s) - `percona-data-recovery-tool-for-innodb-0.5.tar.gz' saved [1960195/1960195]


[root@db-42 tmp]# tar -zxvf percona-data-recovery-tool-for-innodb-0.5.tar.gz

[root@db-42 tmp]# cd percona-data-recovery-tool-for-innodb-0.5/mysql-source
[root@db-42 mysql-source]# ./configure

Build the page_parser and constraints_parser tools (just run make in the top-level directory). Note that the create_defs.pl script depends on the Perl DBI and DBD::mysql modules; during setup you may hit an error like this:

[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# ./create_defs.pl --host=localhost --user=root --password=yihaodian --db=test --table=customer  > include/table_defs.h
perl: symbol lookup error: /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/auto/DBD/mysql/mysql.so: undefined symbol: mysql_init

It was resolved as follows:

[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# mkdir /tmp/mysql-static
[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# cp /usr/lib64/mysql/*.a /tmp/mysql-static

Delete the DBD build directory, re-extract it, and rerun:

perl Makefile.PL --libs="-L/tmp/mysqldbd-install -lmysqlclient"

Then rebuild, and the problem is resolved:

[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# perl test.pl 
hello world

Moving on: grab the table definition:

[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# ./create_defs.pl --host=localhost --user=root --password=yihaodian --db=test --table=customer  > include/table_defs.h
[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# cat include/table_defs.h
#ifndef table_defs_h
#define table_defs_h

// Table definitions
table_def_t table_definitions[] = {
	{
		name: "customer",
		{
			{ /* smallint(5) unsigned */
				name: "customer_id",
				type: FT_UINT,
				fixed_length: 2,

				has_limits: FALSE,
				limits: {
					can_be_null: FALSE,
					uint_min_val: 0,
					uint_max_val: 65535
				},

				can_be_null: FALSE
			},
			{ /*  */
				name: "DB_TRX_ID",
				type: FT_INTERNAL,
				fixed_length: 6,

				can_be_null: FALSE
			},
			{ /*  */
				name: "DB_ROLL_PTR",
				type: FT_INTERNAL,
				fixed_length: 7,

				can_be_null: FALSE
			},
			{ /* tinyint(3) unsigned */
				name: "store_id",
				type: FT_UINT,
				fixed_length: 1,

				has_limits: FALSE,
				limits: {
					can_be_null: FALSE,
					uint_min_val: 0,
					uint_max_val: 255
				},

				can_be_null: FALSE
			},
			{ /* varchar(45) */
				name: "first_name",
				type: FT_CHAR,
				min_length: 0,
				max_length: 135,

				has_limits: FALSE,
				limits: {
					can_be_null: FALSE,
					char_min_len: 0,
					char_max_len: 135,
					char_ascii_only: TRUE
				},

				can_be_null: FALSE
			},
			{ /* varchar(45) */
				name: "last_name",
				type: FT_CHAR,
				min_length: 0,
				max_length: 135,

				has_limits: FALSE,
				limits: {
					can_be_null: FALSE,
					char_min_len: 0,
					char_max_len: 135,
					char_ascii_only: TRUE
				},

				can_be_null: FALSE
			},
			{ /* varchar(50) */
				name: "email",
				type: FT_CHAR,
				min_length: 0,
				max_length: 150,

				has_limits: FALSE,
				limits: {
					can_be_null: TRUE,
					char_min_len: 0,
					char_max_len: 150,
					char_ascii_only: TRUE
				},

				can_be_null: TRUE
			},
			{ /* smallint(5) unsigned */
				name: "address_id",
				type: FT_UINT,
				fixed_length: 2,

				has_limits: FALSE,
				limits: {
					can_be_null: FALSE,
					uint_min_val: 0,
					uint_max_val: 65535
				},

				can_be_null: FALSE
			},
			{ /* tinyint(1) */
				name: "active",
				type: FT_INT,
				fixed_length: 1,

				has_limits: FALSE,
				limits: {
					can_be_null: FALSE,
					int_min_val: -128,
					int_max_val: 127
				},

				can_be_null: FALSE
			},
			{ /* datetime */
				name: "create_date",
				type: FT_DATETIME,
				fixed_length: 8,

				can_be_null: FALSE
			},
			{ /* timestamp */
				name: "last_update",
				type: FT_TIMESTAMP,
				fixed_length: 4,

				can_be_null: FALSE
			},
			{ type: FT_NONE }
		}
	},
};

Extract the pages we need:

[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# cp /var/lib/mysql/liuyang/customer.ibd  /tmp/
[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# ./page_parser -5 -f /tmp/customer.ibd 
Opening file: /tmp/customer.ibd:
2051		ID of device containing file
1798748		inode number
33184		protection
1		number of hard links
0		user ID of owner
0		group ID of owner
0		device ID (if special file)
147456		total size, in bytes
4096		blocksize for filesystem I/O
296		number of blocks allocated
1373360322	time of last access
1373360322	time of last modification
1373360322	time of last status change
147456	Size to process in bytes
104857600	Disk cache size in bytes

In fact, no data could be mined from this table at all: truncated data cannot be recovered this way. Worse, for a DROP with no backup (a cold backup or a slave), even obtaining the table definition becomes a problem. So the steps above recover the table definition but cannot unload any rows.

Using the InnoDB monitor, it turned out that no data could be extracted from the 0-21 page subdirectory (the PRIMARY index, id 21):

TABLE: name liuyang/customer, id 20, flags 1, columns 12, indexes 4, appr.rows 2423
  COLUMNS: customer_id: DATA_INT DATA_UNSIGNED DATA_BINARY_TYPE DATA_NOT_NULL len 2; store_id: DATA_INT DATA_UNSIGNED DATA_BINARY_TYPE DATA_NOT_NULL len 1; first_name: DATA_VARMYSQL DATA_NOT_NULL len 135; last_name: DATA_VARMYSQL DATA_NOT_NULL len 135; email: DATA_VARMYSQL len 150; address_id: DATA_INT DATA_UNSIGNED DATA_BINARY_TYPE DATA_NOT_NULL len 2; active: DATA_INT DATA_BINARY_TYPE DATA_NOT_NULL len 1; create_date: DATA_INT DATA_BINARY_TYPE DATA_NOT_NULL len 8; last_update: DATA_INT DATA_UNSIGNED DATA_BINARY_TYPE DATA_NOT_NULL len 4; DB_ROW_ID: DATA_SYS prtype 256 len 6; DB_TRX_ID: DATA_SYS prtype 257 len 6; DB_ROLL_PTR: DATA_SYS prtype 258 len 7; 
  INDEX: name PRIMARY, id 21, fields 1/11, uniq 1, type 3  ----- extract from here
   root page 3, appr.key vals 2423, leaf pages 11, size pages 12
   FIELDS:  customer_id DB_TRX_ID DB_ROLL_PTR store_id first_name last_name email address_id active create_date last_update
  INDEX: name idx_fk_store_id, id 22, fields 1/2, uniq 2, type 0
   root page 4, appr.key vals 1, leaf pages 2, size pages 3
   FIELDS:  store_id customer_id
  INDEX: name idx_fk_address_id, id 23, fields 1/2, uniq 2, type 0
   root page 5, appr.key vals 1, leaf pages 2, size pages 3
   FIELDS:  address_id customer_id
  INDEX: name idx_last_name, id 24, fields 1/2, uniq 2, type 0
   root page 6, appr.key vals 1, leaf pages 4, size pages 5
   FIELDS:  last_name customer_id
[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# ./constraints_parser  -5 -f  pages-1373360911/FIL_PAGE_INDEX/0-21/0-00000003.page 
LOAD DATA INFILE '/tmp/percona-data-recovery-tool-for-innodb-0.5/dumps/default/customer' REPLACE INTO TABLE `customer` FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"' LINES STARTING BY 'customer\t' (customer_id, store_id, first_name, last_name, email, address_id, active, create_date, last_update);

No data at all.

By contrast, if we recreate the table and insert rows, without truncating, the data can be extracted.

mysql> use liuyang
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> 
mysql> insert into  customer (store_id,first_name,last_name,email,address_id,create_date) values (1,'liu','yang','www.yihaodian.com','141',sysdate());
Query OK, 1 row affected (0.00 sec)

mysql> commit;
Query OK, 0 rows affected (0.00 sec)

......


mysql> insert into  customer (store_id,first_name,last_name,email,address_id,create_date) values (1,'liu','yang','www.yihaodian.com','141',sysdate());
Query OK, 1 row affected (0.00 sec)

mysql> commit;
Query OK, 0 rows affected (0.00 sec)
[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# ./page_parser -5 -f /var/lib/mysql/liuyang/customer.ibd 
Opening file: /var/lib/mysql/liuyang/customer.ibd:
2054		ID of device containing file
1867914		inode number
33200		protection
1		number of hard links
203		user ID of owner
156		group ID of owner
0		device ID (if special file)
425984		total size, in bytes
4096		blocksize for filesystem I/O
840		number of blocks allocated
1373361248	time of last access
1373362171	time of last modification
1373362171	time of last status change
425984	Size to process in bytes
104857600	Disk cache size in bytes

[root@db-42 0-21]# ls -lrt
total 192
-rw-r--r-- 1 root root 16384 Jul  9 17:30 9-00000012.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 8-00000011.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 7-00000010.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 6-00000009.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 5-00000008.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 4-00000007.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 21-00000024.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 20-00000023.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 16-00000019.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 13-00000016.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 10-00000013.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 0-00000003.page
[root@db-42 0-21]# cat *.page  > /tmp/temp.page
[root@db-42 0-21]# cd ..
[root@db-42 FIL_PAGE_INDEX]# cd ..
[root@db-42 pages-1373362201]# cd ..
[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# ./constraints_parser  -5 -f /tmp/temp.page 
customer	1299	1	"liu"	"yang"	"www.yihaodian.com"	141	1	"2013-07-09 17:29:19"	"2013-07-09 09:29:19"
customer	1300	1	"liu"	"yang"	"www.yihaodian.com"	141	1	"2013-07-09 17:29:19"	"2013-07-09 09:29:19"
customer	1301	1	"liu"	"yang"	"www.yihaodian.com"	141	1	"2013-07-09 17:29:19"	"2013-07-09 09:29:19"
customer	1302	1	"liu"	"yang"	"www.yihaodian.com"	141	1	"2013-07-09 17:29:19"	"2013-07-09 09:29:19"
customer	1303	1	"liu"	"yang"	"www.yihaodian.com"	141	1	"2013-07-09 17:29:19"	"2013-07-09 09:29:19"
customer	1304	1	"liu"	"yang"	"www.yihaodian.com"	141	1	"2013-07-09 17:29:19"	"2013-07-09 09:29:19"
customer	1305	1	"liu"	"yang"	"www.yihaodian.com"	141	1	"2013-07-09 17:29:19"	"2013-07-09 09:29:19"
customer	1306	1	"liu"	"yang"	"www.yihaodian.com"	141	1	"2013-07-09 17:29:19"	"2013-07-09 09:29:19"

.............

Load the dump file straight back into MySQL with LOAD DATA:

mysql> show tables;
+-------------------+
| Tables_in_liuyang |
+-------------------+
| customer          |
+-------------------+
1 row in set (0.00 sec)

mysql> truncate table  customer;
Query OK, 0 rows affected (0.00 sec)

mysql> LOAD DATA INFILE '/tmp/customer.csv' REPLACE INTO TABLE `customer` FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"' LINES STARTING BY 'customer\t' (customer_id, store_id, first_name, last_name, email, address_id, active, create_date, last_update);
Query OK, 2306 rows affected (0.07 sec)
Records: 2306  Deleted: 0  Skipped: 0  Warnings: 0

mysql> select count(*) from customer;
+----------+
| count(*) |
+----------+
|     2306 |
+----------+
1 row in set (0.01 sec)

mysql> select * from customer limit 1;
+-------------+----------+------------+-----------+-------------------+------------+--------+---------------------+---------------------+
| customer_id | store_id | first_name | last_name | email             | address_id | active | create_date         | last_update         |
+-------------+----------+------------+-----------+-------------------+------------+--------+---------------------+---------------------+
|           1 |        1 | liu        | yang      | www.yihaodian.com |        141 |      1 | 2013-07-09 17:28:00 | 2013-07-09 09:28:00 |
+-------------+----------+------------+-----------+-------------------+------------+--------+---------------------+---------------------+
1 row in set (0.00 sec)

As you can see, under normal conditions the data unloads cleanly, which is a real help for recovery in less extreme situations.

Ref: 如何从MySQL/InnoDB数据文件中的恢复数据 (How to recover data from MySQL/InnoDB data files)
Ref: MySQL数据库InnoDB数据恢复工具使用总结 (Notes on using the InnoDB data-recovery tool for MySQL)

Recent MySQL performance test details

June 14, 2013 · MySQL, performance

Our team recently ran MySQL performance benchmarks covering a range of scenarios.

MySQL sync_binlog causing poor IO performance

January 29, 2013 · MySQL, performance

A MySQL database showed poor IO performance after migrating from a DELL R815 (6-disk RAID 5, OEL 5.5) to an HP DL380 (14-disk RAID 1+0, RHEL 5.8).

IO behavior at the time:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           4.55    0.00    1.30    0.25    0.00   93.90

Device:         rrqm/s   wrqm/s   r/s   w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
cciss/c0d0        0.00     5.00  0.00  5.00     0.00    40.00    16.00     0.00    0.30   0.30   0.15
cciss/c0d0p1 
               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
cciss/c0d0p2 
               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
cciss/c0d0p3 
               0.00     5.00  0.00  5.00     0.00    40.00    16.00     0.00    0.30   0.30   0.15
cciss/c0d0p4 
               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
cciss/c0d0p5 
               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
cciss/c0d0p6 
               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
cciss/c0d1        0.00  1199.00  4.50 2289.00    66.00 13952.00    12.22     0.88    0.38   0.37  85.35
cciss/c0d1p1 
               0.00  1199.00  4.50 2289.00    66.00 13952.00    12.22     0.88    0.38   0.37  85.35

Average:          DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
Average:     dev104-0     34.67      0.00  20727.64    597.80      0.02      0.51      0.12      0.40
Average:     dev104-1      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:     dev104-2      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:     dev104-3      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:     dev104-4      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:     dev104-5     34.67      0.00  20727.64    597.80      0.02      0.51      0.12      0.40
Average:     dev104-6      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:    dev104-16   2676.38   7654.27  29005.03     13.70      2.47      0.92      0.37     98.04
Average:    dev104-17   2676.38   7654.27  29005.03     13.70      2.47      0.92      0.37     98.04

%util was close to 90%. Temporarily setting sync_binlog=0 solved the problem (the source server, running with sync_binlog=1, had never shown this issue):

Device:         rrqm/s   wrqm/s   r/s   w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
cciss/c0d0        0.00  9094.00  0.00 136.00     0.00 36920.00   542.94     0.05    0.35   0.10   1.40
cciss/c0d0p1 
               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
cciss/c0d0p2 
               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
cciss/c0d0p3 
               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
cciss/c0d0p4 
               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
cciss/c0d0p5 
               0.00  9094.00  0.00 136.00     0.00 36920.00   542.94     0.05    0.35   0.10   1.40
cciss/c0d0p6 
               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
cciss/c0d1        0.00   575.00 25.00 18.00   400.00  2372.00   128.93     0.17    4.03   3.81  16.40
cciss/c0d1p1 
               0.00   575.00 25.00 18.00   400.00  2372.00   128.93     0.17    4.03   3.81  16.40
               
               
05:43:19 PM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
05:43:21 PM  dev104-0      1.01      0.00     72.36     72.00      0.00      0.00      0.00      0.00
05:43:21 PM  dev104-1      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:21 PM  dev104-2      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:21 PM  dev104-3      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:21 PM  dev104-4      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:21 PM  dev104-5      1.01      0.00     72.36     72.00      0.00      0.00      0.00      0.00
05:43:21 PM  dev104-6      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
05:43:21 PM dev104-16     52.76   1165.83   5089.45    118.55      0.25      4.77      4.68     24.67
05:43:21 PM dev104-17     52.76   1165.83   5089.45    118.55      0.25      4.77      4.68     24.67

%util and tps both dropped sharply. The application workload is rapid insert-and-commit.
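
For reference, sync_binlog is a dynamic variable, so the temporary change can be applied without a restart. A sketch (whether 0 is acceptable is a durability trade-off: the OS then decides when to flush the binlog, and a crash can lose the most recent transactions):

-- Relax binlog flushing at runtime:
SET GLOBAL sync_binlog = 0;
SHOW GLOBAL VARIABLES LIKE 'sync_binlog';
-- To persist across restarts, also set sync_binlog = 0 under [mysqld] in my.cnf.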

IO comparison test between the DELL R815 and the HP DL380:

DELL PowerEdge R815

[root@db-2-2 iozone]# iozone -i 0 -i 1 -r 4096 -s 2G -Recb /data/mysql/iozone/log.xls -t 2 -C |tee /data/mysql/iozone/iozone.log
       Iozone: Performance Test of File I/O
               Version $Revision: 3.315 $
              Compiled for 64 bit mode.
              Build: linux 

       Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                    Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                    Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                    Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                    Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
                    Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root.

       Run began: Tue Jan 29 11:11:51 2013

       Record Size 4096 KB
       File size set to 2097152 KB
       Excel chart generation enabled
       Include fsync in write timing
       Include close in write timing
       Command line used: iozone -i 0 -i 1 -r 4096 -s 2G -Recb /data/mysql/iozone/log.xls -t 2 -C
       Output is in Kbytes/sec
       Time Resolution = 0.000001 seconds.
       Processor cache size set to 1024 Kbytes.
       Processor cache line size set to 32 bytes.
       File stride size set to 17 * record size.
       Throughput test with 2 processes
       Each process writes a 2097152 Kbyte file in 4096 Kbyte records

       Children see throughput for  2 initial writers =  197313.41 KB/sec
       Parent sees throughput for  2 initial writers    =  174735.45 KB/sec
       Min throughput per process                 =   87504.14 KB/sec 
       Max throughput per process                =  109809.27 KB/sec
       Avg throughput per process                 =   98656.70 KB/sec
       Min xfer                                  = 1671168.00 KB
       Child[0] xfer count = 1671168.00 KB, Throughput =   87504.14 KB/sec
       Child[1] xfer count = 2097152.00 KB, Throughput =  109809.27 KB/sec

       Children see throughput for  2 rewriters =  203248.12 KB/sec
       Parent sees throughput for  2 rewriters    =  203053.04 KB/sec
       Min throughput per process                 =  101543.17 KB/sec 
       Max throughput per process                =  101704.95 KB/sec
       Avg throughput per process                 =  101624.06 KB/sec
       Min xfer                                  = 2097152.00 KB
       Child[0] xfer count = 2097152.00 KB, Throughput =  101543.17 KB/sec
       Child[1] xfer count = 2097152.00 KB, Throughput =  101704.95 KB/sec

       Children see throughput for  2 readers            = 2397158.38 KB/sec
       Parent sees throughput for  2 readers             = 2391863.63 KB/sec
       Min throughput per process                 = 1176860.12 KB/sec 
       Max throughput per process                = 1220298.25 KB/sec
       Avg throughput per process                 = 1198579.19 KB/sec
       Min xfer                                  = 2023424.00 KB
       Child[0] xfer count = 2023424.00 KB, Throughput = 1176860.12 KB/sec
       Child[1] xfer count = 2097152.00 KB, Throughput = 1220298.25 KB/sec

       Children see throughput for 2 re-readers   = 2600163.88 KB/sec
       Parent sees throughput for 2 re-readers    = 2592894.05 KB/sec
       Min throughput per process                 = 1271696.50 KB/sec 
       Max throughput per process                = 1328467.38 KB/sec
       Avg throughput per process                 = 1300081.94 KB/sec
       Min xfer                                  = 2011136.00 KB
       Child[0] xfer count = 2011136.00 KB, Throughput = 1271696.50 KB/sec
       Child[1] xfer count = 2097152.00 KB, Throughput = 1328467.38 KB/sec



"Throughput report Y-axis is type of test X-axis is number of processes"
"Record size = 4096 Kbytes "
"Output is in Kbytes/sec"

"  Initial write "  197313.41 

"        Rewrite "  203248.12 

"           Read " 2397158.38 

"        Re-read " 2600163.88

-----------------------------------------------------------------

HP ProLiant DL380p Gen8

[root@DCB-SRV-0225 data1]# iozone -i 0 -i 1 -r 4096 -s 2G -Recb /data1/iozone/log.xls -t 2 -C |tee /data1/iozone/iozone.log
       Iozone: Performance Test of File I/O
               Version $Revision: 3.315 $
              Compiled for 64 bit mode.
              Build: linux 

       Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                    Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                    Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                    Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                    Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
                    Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root.

       Run began: Tue Jan 29 10:50:05 2013

       Record Size 4096 KB
       File size set to 2097152 KB
       Excel chart generation enabled
       Include fsync in write timing
       Include close in write timing
       Command line used: iozone -i 0 -i 1 -r 4096 -s 2G -Recb /data1/iozone/log.xls -t 2 -C
       Output is in Kbytes/sec
       Time Resolution = 0.000001 seconds.
       Processor cache size set to 1024 Kbytes.
       Processor cache line size set to 32 bytes.
       File stride size set to 17 * record size.
       Throughput test with 2 processes
       Each process writes a 2097152 Kbyte file in 4096 Kbyte records

       Children see throughput for  2 initial writers =  372381.80 KB/sec
       Parent sees throughput for  2 initial writers    =  371527.79 KB/sec
       Min throughput per process                 =  185643.42 KB/sec 
       Max throughput per process                =  186738.38 KB/sec
       Avg throughput per process                 =  186190.90 KB/sec
       Min xfer                                  = 2084864.00 KB
       Child[0] xfer count = 2084864.00 KB, Throughput =  185643.42 KB/sec
       Child[1] xfer count = 2097152.00 KB, Throughput =  186738.38 KB/sec

       Children see throughput for  2 rewriters =  560498.03 KB/sec
       Parent sees throughput for  2 rewriters    =  560283.33 KB/sec
       Min throughput per process                 =  280219.81 KB/sec 
       Max throughput per process                =  280278.22 KB/sec
       Avg throughput per process                 =  280249.02 KB/sec
       Min xfer                                  = 2097152.00 KB
       Child[0] xfer count = 2097152.00 KB, Throughput =  280219.81 KB/sec
       Child[1] xfer count = 2097152.00 KB, Throughput =  280278.22 KB/sec

       Children see throughput for  2 readers            = 7649346.50 KB/sec
       Parent sees throughput for  2 readers             = 7612895.95 KB/sec
       Min throughput per process                 = 3796191.50 KB/sec 
       Max throughput per process                = 3853155.00 KB/sec
       Avg throughput per process                 = 3824673.25 KB/sec
       Min xfer                                  = 2076672.00 KB
       Child[0] xfer count = 2076672.00 KB, Throughput = 3796191.50 KB/sec
       Child[1] xfer count = 2097152.00 KB, Throughput = 3853155.00 KB/sec

       Children see throughput for 2 re-readers   = 8173770.00 KB/sec
       Parent sees throughput for 2 re-readers    = 8132688.68 KB/sec
       Min throughput per process                 = 4063641.50 KB/sec 
       Max throughput per process                = 4110128.50 KB/sec
       Avg throughput per process                 = 4086885.00 KB/sec
       Min xfer                                  = 2084864.00 KB
       Child[0] xfer count = 2097152.00 KB, Throughput = 4110128.50 KB/sec
       Child[1] xfer count = 2084864.00 KB, Throughput = 4063641.50 KB/sec



"Throughput report Y-axis is type of test X-axis is number of processes"
"Record size = 4096 Kbytes "
"Output is in Kbytes/sec"

"  Initial write "  372381.80 

"        Rewrite "  560498.03 

"           Read " 7649346.50 

"        Re-read " 8173770.00 


iozone test complete.

The HP beats the DELL hands down on every metric.

See the reference "ext3 and sync-binlog do not play well together". Evidently MySQL's performance on ext3 still differs across OS releases.

Optimizing MySQL ibdata* files

January 18, 2012 · Maintenance, MySQL

The MySQL slave server had never been properly planned and ran entirely on default parameters, letting the InnoDB data file grow without bound. It was time to re-plan the data-file space allocation and tune a few other parameters.

Version:

mysql> select version();
+------------+
| version()  |
+------------+
| 5.5.14-log |
+------------+
1 row in set (0.02 sec)

Configure the OS file-descriptor limits:
[root@db-72 mysql]# cat /etc/security/limits.conf

# End of file

* soft nofile 131072
* hard nofile 131072

Add the following parameters to my.cnf:
[root@db-72 mysql]# cat /etc/my.cnf

skip-external-locking
key_buffer_size = 2G
max_allowed_packet = 1M
table_open_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 32M

server-id= 2
read-only=1   # 0 = read/write, 1 = read only
open_files_limit = 30000
max_connections=800
myisam_sort_buffer_size = 64M
bulk_insert_buffer_size = 32M
innodb_buffer_pool_size = 3G
innodb_additional_mem_pool_size = 120M
innodb_flush_log_at_trx_commit = 2
innodb_max_dirty_pages_pct = 80
innodb_lock_wait_timeout = 50
innodb_thread_concurrency = 0
innodb_data_file_path = ibdata1:19618M;ibdata2:8192M;ibdata3:10M:autoextend
innodb_log_file_size = 256M
innodb_log_files_in_group = 3
innodb_log_buffer_size = 16M
innodb_open_files=1600

innodb_data_file_path had never been set, so ibdata1 had ballooned to roughly 19 GB. It needs a cap:

[root@db-72 mysql]# ls -l | grep ib
-rwxrwxr-- 1 mysql mysql 20612907008 Jan 18 12:54 ibdata1
-rwxrwxr-- 1 mysql mysql     5242880 Jan 18 12:54 ib_logfile0
-rwxrwxr-- 1 mysql mysql     5242880 Jan 18 12:46 ib_logfile1

First move the ib_logfile files aside, then size the first data file to match the existing ibdata1, i.e. 19618M:

innodb_data_file_path = ibdata1:19618M;ibdata2:8192M;ibdata3:10M:autoextend
innodb_log_file_size = 256M
innodb_log_files_in_group = 3

This allocates three redo log files of 256M each.

Restart MySQL:

[root@db-72 mysql]# ls -l | grep ib
-rwxr-xr-- 1 mysql mysql 20570963968 Jan 18 12:55 ibdata1
-rw-rw---- 1 mysql mysql  8589934592 Jan 18 11:38 ibdata2
-rw-rw---- 1 mysql mysql    10485760 Jan 18 11:38 ibdata3
-rw-rw---- 1 mysql mysql   268435456 Jan 18 12:55 ib_logfile0
-rwxr-xr-- 1 mysql mysql     5242880 Jan 18 11:09 ib_logfile0.bak
-rw-rw---- 1 mysql mysql   268435456 Jan 18 11:12 ib_logfile1
-rwxr-xr-- 1 mysql mysql     5242880 Jan 18 11:09 ib_logfile1.bak
-rw-rw---- 1 mysql mysql   268435456 Jan 18 11:12 ib_logfile2
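
To confirm the new layout took effect, standard SHOW VARIABLES queries are enough:

SHOW VARIABLES LIKE 'innodb_data_file_path';
SHOW VARIABLES LIKE 'innodb_log_file_size';
SHOW VARIABLES LIKE 'innodb_log_files_in_group';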

mysql> show slave status \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 10.0.0.71
                  Master_User: rep
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: db-71-bin.001190
          Read_Master_Log_Pos: 313977587
               Relay_Log_File: db-72-relay-bin.003550
                Relay_Log_Pos: 113788261
        Relay_Master_Log_File: db-71-bin.001190
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
                   Last_Errno: 0
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 313977587
              Relay_Log_Space: 113788417
              Until_Condition: None
        Seconds_Behind_Master: 0
             Master_Server_Id: 1

After the restart, the slave quickly caught up with the master.