I'm having a problem getting MongoDB to actually balance data across my shards. I set up sharding, and everything looks right in sh.status().
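For context, the setup steps were roughly the following (reconstructed from memory, so treat the exact invocations as approximate):

mongos> sh.addShard("mongo1:27018")
mongos> sh.addShard("mongo2:27018")
mongos> sh.enableSharding("activeTvs")
mongos> sh.shardCollection("activeTvs.duid", { "duid" : 1 })

And here is the sh.status() output...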
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "version" : 3,
        "minCompatibleVersion" : 3,
        "currentVersion" : 4,
        "clusterId" : ObjectId("524490036b0d452cd76aea51")
}
  shards:
        {  "_id" : "shard0000",  "host" : "mongo1:27018" }
        {  "_id" : "shard0001",  "host" : "mongo2:27018" }
  databases:
        {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
        {  "_id" : "test",  "partitioned" : false,  "primary" : "shard0000" }
        {  "_id" : "activeTvs",  "partitioned" : true,  "primary" : "shard0000" }
                activeTvs.duid
                        shard key: { "duid" : 1 }
                        chunks:
                                shard0000       11
                        { "duid" : { "$minKey" : 1 } } -->> { "duid" : "2DCAMPY4ZUR4I" } on : shard0000 Timestamp(1, 1)
                        { "duid" : "2DCAMPY4ZUR4I" } -->> { "duid" : "BDCN4UQJ3V3OA" } on : shard0000 Timestamp(1, 13)
                        { "duid" : "BDCN4UQJ3V3OA" } -->> { "duid" : "EXCN4UQJ3TLTC" } on : shard0000 Timestamp(1, 15)
                        { "duid" : "EXCN4UQJ3TLTC" } -->> { "duid" : "JHCN4UQJ3BSH2" } on : shard0000 Timestamp(1, 16)
                        { "duid" : "JHCN4UQJ3BSH2" } -->> { "duid" : "KLCN4UQJ35RCO" } on : shard0000 Timestamp(1, 11)
                        { "duid" : "KLCN4UQJ35RCO" } -->> { "duid" : "MTCN4UQJ3SH2O" } on : shard0000 Timestamp(1, 12)
                        { "duid" : "MTCN4UQJ3SH2O" } -->> { "duid" : "P7CN4UQJ3TLFU" } on : shard0000 Timestamp(1, 17)
                        { "duid" : "P7CN4UQJ3TLFU" } -->> { "duid" : "RLCN4UQJ3QELM" } on : shard0000 Timestamp(1, 18)
                        { "duid" : "RLCN4UQJ3QELM" } -->> { "duid" : "U7CN4UQJ3SQ6G" } on : shard0000 Timestamp(1, 19)
                        { "duid" : "U7CN4UQJ3SQ6G" } -->> { "duid" : "ZPCN4UQJE6DWK" } on : shard0000 Timestamp(1, 20)
                        { "duid" : "ZPCN4UQJE6DWK" } -->> { "duid" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 4)
The problem is that all of the data sits on just one shard. I tried moving a chunk manually with sh.moveChunk("activeTvs.duid", { "duid" : "JHCN4UQJ3BSH2" }, "shard0001").
Here is the output...
mongos> sh.moveChunk("activeTvs.duid", { "duid" : "JHCN4UQJ3BSH2" }, "shard0001")
{
        "cause" : {
                "cause" : {
                        "note" : "from execCommand",
                        "ok" : 0,
                        "errmsg" : "not master"
                },
                "ok" : 0,
                "errmsg" : "moveChunk failed to engage TO-shard in the data transfer: not master"
        },
        "ok" : 0,
        "errmsg" : "move failed"
}
I have no idea how to resolve this "not master" error on the chunk migration. I have spent a lot of time on this, and I really do need the data distributed across both shards to improve write performance. Any help is greatly appreciated, thanks.
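In case it helps with diagnosis, this is the kind of check I was planning to run directly against the TO-shard, to see whether mongo2:27018 actually considers itself master (assuming it is reachable from my machine; I have not captured the output here):

mongo mongo2:27018
> db.isMaster()   // "ismaster" should be true for a standalone or a replica set primary
> rs.status()     // only meaningful if the shard was started with --replSet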
Oh, and I also looked at the locks collection in the config database to find more information, but this is all I see:
db.locks.find().pretty()
{
        "_id" : "configUpgrade",
        "process" : "ipaddress:27017:1380225027:1804289383",
        "state" : 0,
        "ts" : ObjectId("524490036b0d452cd76aea4f"),
        "when" : ISODate("2013-09-26T19:50:27.567Z"),
        "who" : "ipaddress:27017:1380225027:1804289383:mongosMain:846930886",
        "why" : "upgrading config database to new format v4"
}
{
        "_id" : "balancer",
        "process" : "ipaddress:27017:1380225024:1804289383",
        "state" : 0,
        "ts" : ObjectId("52449d596787fa8abc8a0311"),
        "when" : ISODate("2013-09-26T20:47:21.880Z"),
        "who" : "ip:1380225024:1804289383:Balancer:846930886",
        "why" : "doing balance round"
}
{
        "_id" : "activeTvs.duid",
        "process" : "ipaddress:27018:1380225875:1148973287",
        "state" : 0,
        "ts" : ObjectId("52449d5e46aced87c407d472"),
        "when" : ISODate("2013-09-26T20:47:26.608Z"),
        "who" : "ip:1380225875:1148973287:conn4:1169675271",
        "why" : "migrate-{ duid: MinKey }"
}