MongoDB shutdown

Time: 2011-05-11 05:43:02

Tags: mongodb

I have a question about MongoDB shutting down. It actually throws a segmentation fault and then shuts down. The error log is below. Can anyone suggest what is causing the error?

Wed May 11 12:50:53 db version v1.6.5, pdfile version 4.5
Wed May 11 12:50:53 git version: 0eb017e9b2828155a67c5612183337b89e12e291
Wed May 11 12:50:53 sys info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_37
Wed May 11 12:50:53 [initandlisten] waiting for connections on port 27017
Wed May 11 12:50:53 [websvr] web admin interface listening on port 28017
Wed May 11 12:51:03 [initandlisten] connection accepted from 127.0.0.1:36745 #1
Wed May 11 12:51:03 [conn1] end connection 127.0.0.1:36745
Wed May 11 12:51:05 [initandlisten] connection accepted from 127.0.0.1:36747 #2
Wed May 11 12:51:05 [conn2] end connection 127.0.0.1:36747
Wed May 11 12:51:05 [initandlisten] connection accepted from 127.0.0.1:36748 #3
Wed May 11 12:51:05 [conn3] error: have index [twitter.home_timeline.$aves_user_id_1] but no NamespaceDetails
Wed May 11 12:51:05 [conn3] end connection 127.0.0.1:36748
Wed May 11 12:51:09 [initandlisten] connection accepted from 127.0.0.1:36752 #4
Wed May 11 12:51:09 [conn4] end connection 127.0.0.1:36752
Wed May 11 12:51:10 [initandlisten] connection accepted from 127.0.0.1:36753 #5
Wed May 11 12:51:10 [conn5] dropDatabase twitter
Wed May 11 12:51:10 [conn5] query twitter.$cmd ntoreturn:1 command: { dropDatabase: 1 } reslen:74 113ms
Wed May 11 12:51:10 [conn5] end connection 127.0.0.1:36753
Wed May 11 12:51:10 [initandlisten] connection accepted from 127.0.0.1:36754 #6
Wed May 11 12:51:11 [conn6] end connection 127.0.0.1:36754
Wed May 11 12:51:17 [initandlisten] connection accepted from 127.0.0.1:36755 #7
Wed May 11 12:51:17 allocating new datafile /home/lakesh/mongodb/data/twitter.ns, filling with zeroes...
Wed May 11 12:51:17 done allocating datafile /home/lakesh/mongodb/data/twitter.ns, size: 16MB,  took 0 secs
Wed May 11 12:51:17 allocating new datafile /home/lakesh/mongodb/data/twitter.0, filling with zeroes...
Wed May 11 12:51:17 done allocating datafile /home/lakesh/mongodb/data/twitter.0, size: 64MB,  took 0 secs
Wed May 11 12:51:17 allocating new datafile /home/lakesh/mongodb/data/twitter.1, filling with zeroes...
Wed May 11 12:51:17 done allocating datafile /home/lakesh/mongodb/data/twitter.1, size: 128MB,  took 0 secs
Wed May 11 12:51:17 [conn7] building new index on { _id: 1 } for twitter.home_timeline
Wed May 11 12:51:17 [conn7] done for 0 records 0secs
Wed May 11 12:51:20 allocating new datafile /home/lakesh/mongodb/data/twitter.2, filling with zeroes...
Wed May 11 12:51:20 done allocating datafile /home/lakesh/mongodb/data/twitter.2, size: 256MB,  took 0 secs
Wed May 11 12:51:21 [conn7] building new index on { _id: 1 } for twitter.direct_messages
Wed May 11 12:51:21 [conn7] done for 0 records 0secs
Wed May 11 12:51:21 [conn7] info: creating collection twitter.direct_messages on add index
building new index on { _id: 1 } for twitter.hash_tags
Wed May 11 12:51:21 [conn7] done for 0 records 0secs
Wed May 11 12:51:21 [conn7] info: creating collection twitter.hash_tags on add index
building new index on { _id: 1 } for twitter.mentions
Wed May 11 12:51:21 [conn7] done for 0 records 0secs
Wed May 11 12:51:21 [conn7] info: creating collection twitter.mentions on add index
building new index on { _id: 1 } for twitter.urls
Wed May 11 12:51:21 [conn7] done for 0 records 0secs
Wed May 11 12:51:21 [conn7] info: creating collection twitter.urls on add index
building new index on { aves_user_id: 1.0 } for twitter.home_timeline
Wed May 11 12:51:22 Got signal: 11 (Segmentation fault).

Wed May 11 12:51:22 Backtrace:
0x84a7552 0xb7730400 0x8102d3e 0x8201dfc 0x820387e 0x83dbf63 0x83874ec 0x8388efd 0x838e3f8 0x839025a 0x8367ad2 0x836998b 0x84a5793 0x81cd468 0x84bf1bd 0xb75d6cc9 0xb75436ae 
 ./mongod(_ZN5mongo10abruptQuitEi+0x3c2) [0x84a7552]
 [0xb7730400]
 ./mongod(_ZNK5mongo7BSONObj21getFieldDottedOrArrayERPKc+0xae) [0x8102d3e]
 ./mongod(_ZNK5mongo9IndexSpec8_getKeysESt6vectorIPKcSaIS3_EES1_INS_11BSONElementESaIS6_EERKNS_7BSONObjERSt3setIS9_NS_22BSONObjCmpDefaultOrderESaIS9_EE+0x8c) [0x8201dfc]
 ./mongod(_ZNK5mongo9IndexSpec7getKeysERKNS_7BSONObjERSt3setIS1_NS_22BSONObjCmpDefaultOrderESaIS1_EE+0x24e) [0x820387e]
 ./mongod(_ZNK5mongo12IndexDetails17getKeysFromObjectERKNS_7BSONObjERSt3setIS1_NS_22BSONObjCmpDefaultOrderESaIS1_EE+0x33) [0x83dbf63]
 ./mongod(_ZN5mongo14fastBuildIndexEPKcPNS_16NamespaceDetailsERNS_12IndexDetailsEi+0x69c) [0x83874ec]
 ./mongod() [0x8388efd]
 ./mongod(_ZN5mongo11DataFileMgr6insertEPKcPKvibRKNS_11BSONElementEb+0xbc8) [0x838e3f8]
 ./mongod(_ZN5mongo11DataFileMgr16insertWithObjModEPKcRNS_7BSONObjEb+0x6a) [0x839025a]
 ./mongod(_ZN5mongo14receivedInsertERNS_7MessageERNS_5CurOpE+0x3a2) [0x8367ad2]
 ./mongod(_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_8SockAddrE+0x19bb) [0x836998b]
 ./mongod(_ZN5mongo10connThreadEPNS_13MessagingPortE+0x313) [0x84a5793]
 ./mongod(_ZN5boost6detail11thread_dataINS_3_bi6bind_tIvPFvPN5mongo13MessagingPortEENS2_5list1INS2_5valueIS6_EEEEEEE3runEv+0x18) [0x81cd468]
 ./mongod(thread_proxy+0x7d) [0x84bf1bd]
 /lib/libpthread.so.0(+0x5cc9) [0xb75d6cc9]
 /lib/libc.so.6(clone+0x5e) [0xb75436ae]

Wed May 11 12:51:22 dbexit: 

Wed May 11 12:51:22 [conn7] shutdown: going to close listening sockets...
Wed May 11 12:51:22 [conn7] closing listening socket: 5
Wed May 11 12:51:22 [conn7] closing listening socket: 6
Wed May 11 12:51:22 [conn7] closing listening socket: 7
Wed May 11 12:51:22 [conn7] closing listening socket: 8
Wed May 11 12:51:22 [conn7] shutdown: going to flush oplog...
Wed May 11 12:51:22 [conn7] shutdown: going to close sockets...
Wed May 11 12:51:22 [conn7] shutdown: waiting for fs preallocator...
Wed May 11 12:51:22 [conn7] shutdown: closing all files...
Wed May 11 12:51:22     closeAllFiles() finished

Wed May 11 12:51:22 [conn7] shutdown: removing fs lock...
Wed May 11 12:51:22 dbexit: really exiting now
Wed May 11 12:51:22 ERROR: Client::~Client _context should be null but is not; client:conn

1 Answer:

Answer 0: (score: 0)

Here are two options I can think of:

  1. Start the local server as a slave of the remote master server. Once all the data has been replicated locally, shut it down and restart it as a regular (master) server.

  2. Start the local server. Use the db.copyDatabase() or db.cloneDatabase() API from the command-line client to copy the database from the remote server to the local server.

  Please try these - I'm fairly sure you will see some progress.
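The two options above can be sketched as commands. The remote host name and the data path are placeholders, and the master/slave flags and `db.copyDatabase()` shown here apply to the 1.x-era mongod in the log (later versions replaced them with replica sets and `mongodump`/`mongorestore`):

```shell
# Option 1: run the local mongod as a slave of the remote master
# (remote.example.com and the dbpath are placeholder values).
mongod --dbpath /home/lakesh/mongodb/data --slave --source remote.example.com:27017
# ...wait until replication catches up, then stop mongod and
# restart it without the replication flags:
mongod --dbpath /home/lakesh/mongodb/data

# Option 2: start a plain local mongod, then pull the database
# over the wire from the mongo shell (fromdb, todb, fromhost):
mongo --eval 'db.copyDatabase("twitter", "twitter", "remote.example.com:27017")'
```

Both approaches copy collections and rebuild indexes from scratch on the local side, which sidesteps whatever on-disk state produced the "have index ... but no NamespaceDetails" error in the log.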