Uploading a 30 MB file into a longblob column via JDBC causes MySQL to crash with the error below. I haven't been able to find anything on Google. Any help would be appreciated.
131115 15:34:13 InnoDB: Assertion failure in thread 1669131072 in file sync0rw.c line 572
Here is how I'm doing the upload via JDBC:
public static void setObjectInstBlobOpenCon(String objInstId, InputStream is, long size,
        String fieldNm, Connection conn) throws SQLException {
    String typeTable = getObjectInstanceTypeTbl(objInstId);
    // try-with-resources closes the statement even if execute() throws
    try (PreparedStatement cStmt = conn.prepareStatement(
            "update " + typeTable + " set " + fieldNm + " = ? where object_ky = ?")) {
        if (size == 0) {
            // length unknown: let the driver stream until EOF
            cStmt.setBinaryStream(1, is);
        } else {
            cStmt.setBinaryStream(1, is, size);
        }
        cStmt.setString(2, objInstId);
        cStmt.execute();
        //conn.close();
    } catch (SQLException e) {
        System.err.println("Mysql Statement Error: call setObjectInstBlob(?) for object " + objInstId);
        e.printStackTrace();
        throw e;
    }
}
The full error is:
131115 15:34:13 InnoDB: Assertion failure in thread 1669131072 in file sync0rw.c line 572
InnoDB: Failing assertion: !lock->recursive
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
15:34:13 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
key_buffer_size=16777216
read_buffer_size=131072
max_used_connections=29
max_threads=151
thread_count=25
connection_count=25
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346075 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0xffffffffbac68c40
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 637ce2cc thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x33)[0xb726d9b3]
/usr/sbin/mysqld(handle_fatal_signal+0x484)[0xb7119f14]
[0xb6dcd400]
/usr/sbin/mysqld(+0x58613f)[0xb737613f]
/usr/sbin/mysqld(+0x5c22a0)[0xb73b22a0]
/usr/sbin/mysqld(+0x5c58a4)[0xb73b58a4]
/usr/sbin/mysqld(+0x5771f6)[0xb73671f6]
/usr/sbin/mysqld(+0x531fc8)[0xb7321fc8]
/usr/sbin/mysqld(+0x53590a)[0xb732590a]
/usr/sbin/mysqld(+0x536529)[0xb7326529]
/usr/sbin/mysqld(+0x520bfd)[0xb7310bfd]
/usr/sbin/mysqld(+0x50d546)[0xb72fd546]
/usr/sbin/mysqld(_ZN7handler13ha_update_rowEPKhPh+0x7a)[0xb712290a]
/usr/sbin/mysqld(_Z12mysql_updateP3THDP10TABLE_LISTR4ListI4ItemES6_PS4_jP8st_ordery15enum_duplicatesbPySB_+0x154d)[0xb706952d]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0xf42)[0xb6fdfbf2]
/usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0xfc)[0xb6fe7e1c]
/usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x2362)[0xb6fea252]
/usr/sbin/mysqld(_Z10do_commandP3THD+0xd3)[0xb6feac93]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x1eb)[0xb709eb9b]
/usr/sbin/mysqld(handle_one_connection+0x50)[0xb709ec00]
/lib/i386-linux-gnu/tls/i686/nosegneg/libpthread.so.0(+0x6d4c)[0xb6d49d4c]
/lib/i386-linux-gnu/tls/i686/nosegneg/libc.so.6(clone+0x5e)[0xb6b58bae]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (5e24b018): is an invalid pointer
Connection ID (thread ID): 501
Status: NOT_KILLED
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
Edit: 2013-11-16
While this isn't a real solution, my workaround was to convert the table that hosts these large binaries from InnoDB to MyISAM.
The table is a transcoding queue that holds large video files and their transcoded equivalents.
It is the only table in my schema that stores files this large, so converting it to MyISAM — after first dropping the one foreign key constraint I had on it — wasn't a big deal.
So far I'm happy with this workaround. The rest of my schema still gets all the benefits InnoDB provides.
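For anyone wanting to apply the same workaround, the conversion can be sketched roughly as below. The table and constraint names here are placeholders, not my real schema — find yours with SHOW CREATE TABLE:

```sql
-- Drop the foreign key first; MyISAM does not support foreign key constraints,
-- so the conversion will otherwise leave a dangling definition.
ALTER TABLE transcode_queue DROP FOREIGN KEY fk_transcode_queue_object;

-- Convert just this one table to MyISAM; the rest of the schema stays InnoDB.
ALTER TABLE transcode_queue ENGINE = MyISAM;
```

Note that this rebuilds the table, which can take a while on a table full of large video blobs.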