MySQL has kept crashing over the past 4 days while running scripts, day after day.
Here is the error log:
key_buffer_size=134217728
read_buffer_size=1048576
max_used_connections=39
max_threads=100
threads_connected=34
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 336508 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
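As a quick sanity check, the formula from the log can be evaluated with the variables it lists (the server's own figure of 336508 K comes out slightly higher, presumably due to small per-thread overheads not shown in the formula):

```python
# Reproduce mysqld's worst-case memory estimate from the variables
# printed in the error log above.
key_buffer_size = 134217728       # 128M
read_buffer_size = 1048576        # 1M
sort_buffer_size = 1048576        # 1M, confirmed by SHOW VARIABLES below
max_threads = 100

worst_case = key_buffer_size + (read_buffer_size + sort_buffer_size) * max_threads
print(worst_case // 1024, "K")    # 335872 K, i.e. ~328 MB
```

Either way, the estimate is around 330 MB, comfortably below the box's 2 GB of RAM.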
thd: 0x92025f38
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x95dce36c thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x2d) [0x6b65ad]
/usr/sbin/mysqld(handle_segfault+0x494) [0x3823d4]
[0x110400]
/usr/sbin/mysqld(MYSQLparse(void*)+0x6aa) [0x3b42da]
/usr/sbin/mysqld(mysql_parse(THD*, char const*, unsigned int, char const**)+0x23e) [0x39ce6e]
/usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0xf35) [0x39df25]
/usr/sbin/mysqld(do_command(THD*)+0xf3) [0x39f0e3]
/usr/sbin/mysqld(handle_one_connection+0x2a0) [0x38dbd0]
/lib/tls/i686/cmov/libpthread.so.0(+0x596e) [0x93d96e]
/lib/tls/i686/cmov/libc.so.6(clone+0x5e) [0xd78a4e]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x86982ef4 is an invalid pointer
thd->thread_id=2906
thd->killed=NOT_KILLED
The box runs with 2 GB of RAM and, by my calculations, it should not be hitting a memory ceiling. I deliberately lowered the memory requirements to a minimum, but the error still occurs.
mysql> show variables like "sort_buffer%";
+------------------+---------+
| Variable_name | Value |
+------------------+---------+
| sort_buffer_size | 1048576 |
+------------------+---------+
Today it crashed on this SQL query:
ALTER TABLE FieldDefaultValue MODIFY value_field varchar(2000) CHARACTER SET utf8 collate utf8_bin;
Has anyone had a similar experience?
Edit:
The table in question does not actually contain much data; the database has far larger tables:
mysql> desc fielddefaultvalue;
+----------------------+---------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------------------+---------------+------+-----+---------+----------------+
| fielddefaultvalue_Id | bigint(20) | NO | PRI | NULL | auto_increment |
| version | bigint(20) | NO | | NULL | |
| value_field | varchar(2000) | YES | MUL | NULL | |
| optimistic_version | bigint(20) | NO | | NULL | |
| property_fk | bigint(20) | YES | MUL | NULL | |
| esg_fk | bigint(20) | YES | MUL | NULL | |
+----------------------+---------------+------+-----+---------+----------------+
6 rows in set (0.02 sec)
mysql> select count(*) from fielddefaultvalue;
+----------+
| count(*) |
+----------+
| 690 |
+----------+
1 row in set (0.00 sec)
Small multi-row inserts (400-500 rows) also fail, but not every time; the same script can run fine once and then crash.
Edit 2: After crash recovery, the error log also reports:
InnoDB: which exceeds the log group capacity 9433498.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
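A back-of-the-envelope estimate (under the assumption that the ALTER TABLE rebuilds every row, and that a varchar(2000) utf8 column can take up to 3 bytes per character) suggests why even a 690-row table can stress a redo log whose usable capacity the message puts at only 9433498 bytes; the actual redo volume will differ, but the order of magnitude is telling:

```python
# Rough, assumption-laden estimate of row data rewritten by the ALTER.
rows = 690                    # from SELECT COUNT(*) above
max_col_bytes = 2000 * 3      # utf8 worst case for varchar(2000)
rewritten = rows * max_col_bytes
print(rewritten, "bytes")     # 4140000 bytes, ~4 MB of column data alone,
                              # against a logged group capacity of ~9.4 MB
```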
my.cnf:
lower_case_table_names = 1
key_buffer = 16M
key_buffer_size = 128M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover = BACKUP
max_connections = 100
table_cache = 512
thread_concurrency = 4
sort_buffer_size = 1M
read_buffer_size = 1M
table_open_cache = 512
read_rnd_buffer_size = 8M
innodb_file_per_table = 1
open_files_limit = 65536
default_character_set=utf8
query_cache_limit = 1M
query_cache_size = 64M
expire_logs_days = 10
max_binlog_size = 250M
innodb_buffer_pool_size = 256M
innodb_additional_mem_pool_size = 20M
Edit: 5 hours later
It just crashed again on the same "regular" script, a 25,000-row update on a date column.
Same error message:
InnoDB: Log scan progressed past the checkpoint lsn 186 4056481576
110620 17:30:52 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Read
Interestingly, the script did not fail when I ran it earlier today, but it has done so now.
Answer 0 (score: 1)
The most likely explanation is that you are running out of address space; please post your entire my.cnf.
Running a 32-bit OS in production is not a good idea.
However, what you should do is:
If you can reproduce the bug using a standard Oracle build on a supported operating system, and you are not running out of memory/address space, and there is no hardware fault, then you can file the bug with Oracle.
Your best bet is to reproduce the test case with the minimum amount of data / table size.
Answer 1 (score: 1)
It sounds like your innodb_log_file_size is not big enough. Try 256 MB in my.cnf: innodb_log_file_size = 256M
You need to shut down cleanly, remove the old log files, then restart; MySQL will recreate new log files.
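For scale, assuming the default of two redo log files (innodb_log_files_in_group = 2), the suggested setting multiplies the combined log capacity by about fifty compared with the old ~10 MB default, giving ample headroom over the 10x-largest-row rule from the InnoDB message:

```python
# Compare redo log group capacity before and after the suggested change
# (assumes the default innodb_log_files_in_group = 2).
MB = 1024 * 1024
old_capacity = 2 * 5 * MB        # 5.x default: two 5 MB files, matching
                                 # the ~9.4 MB capacity in the error log
new_capacity = 2 * 256 * MB      # with innodb_log_file_size = 256M
largest_row = 2000 * 3           # varchar(2000) utf8, worst case

assert new_capacity >= 10 * largest_row
print(new_capacity // MB, "MB")  # 512 MB of combined redo log
```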
Answer 2 (score: 0)
Strange... I don't know how well optimized ALTER TABLE actually is on MySQL; perhaps it consumes a lot of resources. If the table contains a lot of data, try moving all the data into a temporary table and emptying the main one. Then run the ALTER TABLE and push the data back. If it has to work on every row, you can split the work up like this and process a few records at a time.