MySQL crashes after a huge row lock

Time: 2016-11-15 20:18:48

Tags: mysql crash locking innodb

I am using MySQL 5.7.14 x64 on Windows Server 2008 R2.

Sometimes (at a random time of day) MySQL crashes with this stack trace:

11:44:40 UTC - mysqld got exception 0x80000003 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.

key_buffer_size=8388608
read_buffer_size=65536
max_used_connections=369
max_threads=2800
thread_count=263
connection_count=263
It is possible that mysqld could use up to 
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 3195125 K  bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x2ee2b72b0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
13fe1bad2    mysqld.exe!my_sigabrt_handler()[my_thr_init.c:449]
1401c7979    mysqld.exe!raise()[winsig.c:587]
1401c6870    mysqld.exe!abort()[abort.c:82]
13ff1dd38    mysqld.exe!ut_dbg_assertion_failed()[ut0dbg.cc:67]
13ff1df51    mysqld.exe!ib::fatal::~fatal()[ut0ut.cc:916]
13ff0e008    mysqld.exe!buf_LRU_check_size_of_non_data_objects()[buf0lru.cc:1219]
13ff0f4ab    mysqld.exe!buf_LRU_get_free_block()[buf0lru.cc:1303]
1400305cb    mysqld.exe!buf_block_alloc()[buf0buf.cc:557]
13ff3767e    mysqld.exe!mem_heap_create_block_func()[mem0mem.cc:319]
13ff37499    mysqld.exe!mem_heap_add_block()[mem0mem.cc:408]
13ffd87f4    mysqld.exe!RecLock::lock_alloc()[lock0lock.cc:1441]
13ffd795c    mysqld.exe!RecLock::create()[lock0lock.cc:1534]
13ffd73a6    mysqld.exe!RecLock::add_to_waitq()[lock0lock.cc:1735]
13ffdcaaa    mysqld.exe!lock_rec_lock_slow()[lock0lock.cc:2007]
13ffdc6ce    mysqld.exe!lock_rec_lock()[lock0lock.cc:2081]
13ffd8cc7    mysqld.exe!lock_clust_rec_read_check_and_lock()[lock0lock.cc:6307]
140076fe3    mysqld.exe!row_ins_set_shared_rec_lock()[row0ins.cc:1502]
140072927    mysqld.exe!row_ins_check_foreign_constraint()[row0ins.cc:1739]
140072de8    mysqld.exe!row_ins_check_foreign_constraints()[row0ins.cc:1932]
140075d69    mysqld.exe!row_ins_sec_index_entry()[row0ins.cc:3356]
1400758a6    mysqld.exe!row_ins_index_entry_step()[row0ins.cc:3583]
140071b30    mysqld.exe!row_ins()[row0ins.cc:3721]
14007755a    mysqld.exe!row_ins_step()[row0ins.cc:3907]
13ffaad50    mysqld.exe!row_insert_for_mysql_using_ins_graph()[row0mysql.cc:1735]
13fe7a7d3    mysqld.exe!ha_innobase::write_row()[ha_innodb.cc:7489]
13f6e5531    mysqld.exe!handler::ha_write_row()[handler.cc:7891]
13f8e54de    mysqld.exe!write_record()[sql_insert.cc:1860]
13f8e916a    mysqld.exe!read_sep_field()[sql_load.cc:1222]
13f8e7af4    mysqld.exe!mysql_load()[sql_load.cc:563]
13f716e86    mysqld.exe!mysql_execute_command()[sql_parse.cc:3649]
13f7194b3    mysqld.exe!mysql_parse()[sql_parse.cc:5565]
13f71267d    mysqld.exe!dispatch_command()[sql_parse.cc:1430]
13f71368a    mysqld.exe!do_command()[sql_parse.cc:997]
13f6d82bc    mysqld.exe!handle_connection()[connection_handler_per_thread.cc:300]
140105122    mysqld.exe!pfs_spawn_thread()[pfs.cc:2191]
13fe1b93b    mysqld.exe!win_thread_start()[my_thread.c:38]
1401c73ef    mysqld.exe!_callthreadstartex()[threadex.c:376]
1401c763a    mysqld.exe!_threadstartex()[threadex.c:354]
772859bd    kernel32.dll!BaseThreadInitThunk()
773ba2e1    ntdll.dll!RtlUserThreadStart() 

Only 2 transactions were active at that time:

---TRANSACTION 1111758443, ACTIVE 565 sec
mysql tables in use 7, locked 7
7527 lock struct(s), heap size 876752, 721803 row lock(s), undo log entries 379321
MySQL thread id 166068, OS thread handle 1508, query id 112695582 localhost converter Waiting for table level lock
delete from pl
using import_k2b_product_links ipl
inner join k2b_products pSource on ipl.src_product = pSource.article and pSource.account_id = 22
inner join k2b_products pDest on ipl.dst_product = pDest.article and pDest.account_id = 22
inner join k2b_product_links pl on pl.src_product_id = pSource.id and pl.dst_product = pDest.id
where ipl.action = 1
---TRANSACTION 1111759716, ACTIVE 496 sec inserting, thread declared inside InnoDB 1
mysql tables in use 4, locked 4
7 lock struct(s), heap size 1304535248, 102060778 row lock(s), undo log entries 1
MySQL thread id 19436, OS thread handle 11664, query id 112301161 localhost exchange_central
LOAD DATA INFILE 'd:/kdm/temp/webCentral/ufrd1uwx.v2r'
    INTO TABLE k2b_orders
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    (id_status, dt, account_id, sms_sended, params, update_ts, exported, id_editor, dt_offset, device_id, gen, changer_device_id, total, creator_device_id, id, dt_server, device_category_id, original_params, order_num, sended, editor_comment, admin_comment)

I don't understand why transaction 1111758443 is waiting for a table-level lock.

And why does transaction 1111759716 hold 102060778 row locks when it loads only one row from the external file and shows just 1 undo log entry?

What should I investigate to understand this huge locking and the crash?

Thanks!

1 Answer:

Answer 0 (score: 0)

Two things make me think the crash is not the "real" problem.

Both queries in the log show "huge" times, e.g. ACTIVE 565 sec.

These are all quite high:

max_used_connections=369
max_threads=2800
thread_count=263
connection_count=263

InnoDB trips over itself when hundreds of threads are active at the same time. Throughput stalls and latency goes through the roof.

One approach is to avoid having so many connections. This is sometimes best done on the client side. What is the client? Apache, for example, has MaxClients. A dozen Apaches, each with MaxClients=50, can try to open 600 connections, and a single Apache probably cannot handle 50 threads effectively at once anyway. Lower that number.
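Not from the original answer, but a quick server-side way to see that pressure is to watch the stock status counters (the thresholds are a judgment call):

SHOW GLOBAL STATUS LIKE 'Threads_connected';   -- connections currently open
SHOW GLOBAL STATUS LIKE 'Threads_running';     -- threads actually executing; hundreds at once is the danger sign
SHOW GLOBAL VARIABLES LIKE 'max_connections';  -- the ceiling the clients can collectively reach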

Are there VIEWs fooling us?

The other thing is to chase down the table level lock. Let's see the SHOW CREATE TABLE for the tables involved and check for appropriate indexes, for example (a DDL sketch follows the list):

import_k2b_product_links: INDEX(action, ...)
k2b_products: INDEX(account_id, src_product)   -- in either order
k2b_products: INDEX(account_id, dest_product)  -- in either order
k2b_product_links:  INDEX(src_product_id, dest_product_id) -- or PK, see below
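As a rough DDL sketch only (index names are made up, and the column names in the query and in the list above disagree in places: the query matches k2b_products on article and spells the link column dst_product, so verify everything against SHOW CREATE TABLE first):

ALTER TABLE import_k2b_product_links ADD INDEX ix_ipl_action (action, src_product, dst_product);
ALTER TABLE k2b_products             ADD INDEX ix_prod_account_article (account_id, article);      -- serves both the pSource and pDest joins as written
ALTER TABLE k2b_product_links        ADD INDEX ix_links_src_dst (src_product_id, dst_product_id);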

Is k2b_product_links a many-to-many mapping table? If so, get rid of the id auto_increment as described Here.
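A sketch of the shape that advice leads to, assuming k2b_product_links is a pure product-to-product mapping (column names and types are guesses based on the query; this is a target shape, not a migration script):

CREATE TABLE k2b_product_links (
    src_product_id INT UNSIGNED NOT NULL,
    dst_product_id INT UNSIGNED NOT NULL,                     -- spelled dst_product in the query above; verify
    PRIMARY KEY (src_product_id, dst_product_id),             -- replaces the surrogate auto_increment id
    INDEX ix_links_reverse (dst_product_id, src_product_id)   -- for lookups in the other direction
) ENGINE=InnoDB;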

If they help, the index suggestions will speed up the DELETE and thereby reduce the contention it can cause.
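One way to check whether the new indexes are actually picked up is to EXPLAIN the same join rewritten as a SELECT (a suggested sanity check, not part of the original answer); every joined table should show a usable key rather than a full scan:

EXPLAIN
SELECT COUNT(*)
FROM import_k2b_product_links ipl
INNER JOIN k2b_products pSource ON ipl.src_product = pSource.article AND pSource.account_id = 22
INNER JOIN k2b_products pDest   ON ipl.dst_product = pDest.article   AND pDest.account_id = 22
INNER JOIN k2b_product_links pl ON pl.src_product_id = pSource.id    AND pl.dst_product = pDest.id
WHERE ipl.action = 1;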