MongoDB does not move chunks to a new shard in a sharded cluster

Date: 2017-12-02 07:12:20

Tags: mongodb, sharding, chunks

I am running MongoDB to store data. The cluster has 3 shards, each of which is a 3-server replica set, plus 2 mongos routers and 3 config servers. Each server has 1 TB of storage, and the 3 shards are now at roughly 90% of capacity. When I added a new shard, MongoDB did not move any chunks from the old shards to the new one. I checked the sharding status on mongos, and it shows the output below. The MongoDB balancer is running.
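For reference, the shard-addition step described above is typically done from a mongos shell with `sh.addShard()`; this is only a sketch, using the rs3 host list that appears in the status output:

```javascript
// From a mongos shell: add the new replica-set shard (rs3).
sh.addShard("rs3/10.5.36.152:27017,10.5.36.156:27017,10.5.36.164:27017")

// Verify it appears in the shard list and check balancer state.
db.printShardingStatus()
sh.getBalancerState()   // true when balancing is enabled
```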

mongos> db.printShardingStatus()
--- Sharding Status --- 
  sharding version: {
	"_id" : 1,
	"minCompatibleVersion" : 5,
	"currentVersion" : 6,
	"clusterId" : ObjectId("59c0ef31619ac70cb8ac5f5c")
}
  shards:
	{  "_id" : "rs0",  "host" : "rs0/10.5.36.88:27017,10.5.36.92:27017,10.5.36.93:27017",  "state" : 1,  "maxSize" : 990000 }
	{  "_id" : "rs1",  "host" : "rs1/10.5.36.101:27017,10.5.36.103:27017,10.5.36.97:27017",  "state" : 1,  "maxSize" : 990000 }
	{  "_id" : "rs2",  "host" : "rs2/10.5.36.100:27017,10.5.36.117:27017,10.5.36.126:27017",  "state" : 1,  "maxSize" : 990000 }
	{  "_id" : "rs3",  "host" : "rs3/10.5.36.152:27017,10.5.36.156:27017,10.5.36.164:27017",  "state" : 1,  "maxSize" : 990000 }
  active mongoses:
	"3.4.9" : 1
 autosplit:
	Currently enabled: yes
  balancer:
	Currently enabled:  yes
	Currently running:  yes
		Balancer lock taken at Wed Sep 20 2017 09:21:43 GMT+0700 by ConfigServer:Balancer
	Collections with active migrations: 
		fbgroups.comments started at Wed Nov 22 2017 22:36:15 GMT+0700
	Failed balancer rounds in last 5 attempts:  0
	Migration Results for the last 24 hours: 
		No recent migrations
  databases:
	{  "_id" : "fbpages",  "primary" : "rs0",  "partitioned" : true }
		fbpages.comments
			shard key: { "CommentFbId" : 1 }
			unique: true
			balancing: true
			chunks:
				rs0	6263
				rs1	6652
				rs2	6175
			too many chunks to print, use verbose if you want to force print
		fbpages.links
			shard key: { "PageFbId" : 1 }
			unique: true
			balancing: true
			chunks:
				rs0	23
				rs1	23
				rs2	23
			too many chunks to print, use verbose if you want to force print
		fbpages.posts
			shard key: { "PostFbId" : 1 }
			unique: true
			balancing: true
			chunks:
				rs0	11931
				rs1	11847
				rs2	5043
			too many chunks to print, use verbose if you want to force print
	{  "_id" : "fbgroups",  "primary" : "rs0",  "partitioned" : true }
		fbgroups.comments
			shard key: { "CommentFbId" : 1 }
			unique: true
			balancing: true
			chunks:
				rs0	6451
				rs1	6451
				rs2	4742
			too many chunks to print, use verbose if you want to force print
		fbgroups.links
			shard key: { "GroupId" : 1 }
			unique: true
			balancing: true
			chunks:
				rs0	3
				rs1	3
				rs2	3
			{ "GroupId" : { "$minKey" : 1 } } -->> { "GroupId" : "1391082767860588" } on : rs2 Timestamp(7, 0) 
			{ "GroupId" : "1391082767860588" } -->> { "GroupId" : "1564129037230139" } on : rs0 Timestamp(7, 1) 
			{ "GroupId" : "1564129037230139" } -->> { "GroupId" : "172020656162023" } on : rs0 Timestamp(4, 0) 
			{ "GroupId" : "172020656162023" } -->> { "GroupId" : "244621675585655" } on : rs0 Timestamp(5, 0) 
			{ "GroupId" : "244621675585655" } -->> { "GroupId" : "375231932588613" } on : rs2 Timestamp(6, 0) 
			{ "GroupId" : "375231932588613" } -->> { "GroupId" : "506856652708047" } on : rs2 Timestamp(8, 0) 
			{ "GroupId" : "506856652708047" } -->> { "GroupId" : "67046218160" } on : rs1 Timestamp(8, 1) 
			{ "GroupId" : "67046218160" } -->> { "GroupId" : "878610618830881" } on : rs1 Timestamp(1, 7) 
			{ "GroupId" : "878610618830881" } -->> { "GroupId" : { "$maxKey" : 1 } } on : rs1 Timestamp(1, 8) 
		fbgroups.postdata
			shard key: { "_id" : 1 }
			unique: false
			balancing: true
			chunks:
				rs0	91
				rs1	482
				rs2	91
			too many chunks to print, use verbose if you want to force print
		fbgroups.posts
			shard key: { "PostFbId" : 1 }
			unique: true
			balancing: true
			chunks:
				rs0	26015
				rs1	26092
				rs2	6526
			too many chunks to print, use verbose if you want to force print
	{  "_id" : "test",  "primary" : "rs1",  "partitioned" : true }
	{  "_id" : "intership",  "primary" : "rs1",  "partitioned" : false }
	{  "_id" : "fbhashtags",  "primary" : "rs2",  "partitioned" : true }
		fbhashtags.postdata
			shard key: { "_id" : 1 }
			unique: false
			balancing: true
			chunks:
				rs0	2
				rs1	2
				rs2	2
			{ "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("58f122ff7fe5fb4520b4185a") } on : rs0 Timestamp(2, 0) 
			{ "_id" : ObjectId("58f122ff7fe5fb4520b4185a") } -->> { "_id" : ObjectId("58fac0537fe5fb051d0749de") } on : rs1 Timestamp(3, 0) 
			{ "_id" : ObjectId("58fac0537fe5fb051d0749de") } -->> { "_id" : ObjectId("5906119e7fe5fb2c7d9d41e9") } on : rs0 Timestamp(4, 0) 
			{ "_id" : ObjectId("5906119e7fe5fb2c7d9d41e9") } -->> { "_id" : ObjectId("591012257fe5fb70dc9e49bf") } on : rs1 Timestamp(5, 0) 
			{ "_id" : ObjectId("591012257fe5fb70dc9e49bf") } -->> { "_id" : ObjectId("5918b5d77fe5fb2feb06338a") } on : rs2 Timestamp(5, 1) 
			{ "_id" : ObjectId("5918b5d77fe5fb2feb06338a") } -->> { "_id" : { "$maxKey" : 1 } } on : rs2 Timestamp(1, 5) 
		fbhashtags.posts
			shard key: { "_id" : 1 }
			unique: false
			balancing: true
			chunks:
				rs2	1
			{ "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : rs2 Timestamp(1, 0) 
	{  "_id" : "fbprofiles",  "primary" : "rs2",  "partitioned" : true }
		fbprofiles.fbcomments
			shard key: { "commentFbId" : 1 }
			unique: true
			balancing: true
			chunks:
				rs0	18
				rs1	18
				rs2	19
			too many chunks to print, use verbose if you want to force print
		fbprofiles.fbposts
			shard key: { "postFbId" : 1 }
			unique: true
			balancing: true
			chunks:
				rs0	7
				rs1	7
				rs2	3144
			too many chunks to print, use verbose if you want to force print
		fbprofiles.fbprofiles
			shard key: { "baseUrl" : 1 }
			unique: true
			balancing: true
			chunks:
				rs0	2
				rs1	2
				rs2	141
			too many chunks to print, use verbose if you want to force print
	{  "_id" : "testnewfb",  "primary" : "rs2",  "partitioned" : false }
	{  "_id" : "news_images",  "primary" : "rs2",  "partitioned" : false }
	{  "_id" : "social_index",  "primary" : "rs2",  "partitioned" : false }
	{  "_id" : "twitter",  "primary" : "rs2",  "partitioned" : true }
	{  "_id" : "techmeme",  "primary" : "rs2",  "partitioned" : false }

Why doesn't MongoDB move data to the new shard (rs3)? Thanks!
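One way to dig deeper than `printShardingStatus()` when migrations stall is to inspect the balancer's own logs in the config database; the queries below are a diagnostic sketch for MongoDB 3.4, run from a mongos shell:

```javascript
// Is a balancing round currently in progress?
sh.isBalancerRunning()

// Recent balancer rounds, including any errors they hit.
use config
db.actionlog.find({ what: "balancer.round" }).sort({ time: -1 }).limit(5).pretty()

// Recent chunk-migration events; failed or aborted moves show up here too.
db.changelog.find({ what: /moveChunk/ }).sort({ time: -1 }).limit(10).pretty()
```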

1 Answer:

Answer 0 (score: 1)

I found my problem. One of the three config servers did not have all hosts configured correctly. I fixed my /etc/hosts and ran stepDown() on the config server replica set's primary, and I saw it working normally again. This cost a lot of time. My mistake was not realizing that when a server goes down because of a power failure, it may not come back with its previous configuration intact.
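A sketch of the recovery steps described above, assuming replica-set config servers (CSRS, the default deployment in 3.4): fix /etc/hosts on every config server member first, then force a primary election:

```javascript
// On the config server replica set's PRIMARY, after /etc/hosts is fixed
// on all members: step down and let another member be elected.
rs.stepDown()

// Back on a mongos, confirm that chunk migrations toward rs3 resume.
sh.isBalancerRunning()
use config
db.changelog.find({ what: "moveChunk.commit" }).sort({ time: -1 }).limit(5)
```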