Suppose I have a collection containing documents like this (just a simplified example, but it should illustrate the scheme):
> db.data.find()
{ "_id" : ObjectId("4e9c1f27aa3dd60ee98282cf"), "type" : "A", "value" : 11 }
{ "_id" : ObjectId("4e9c1f33aa3dd60ee98282d0"), "type" : "A", "value" : 58 }
{ "_id" : ObjectId("4e9c1f40aa3dd60ee98282d1"), "type" : "B", "value" : 37 }
{ "_id" : ObjectId("4e9c1f50aa3dd60ee98282d2"), "type" : "B", "value" : 1 }
{ "_id" : ObjectId("4e9c1f56aa3dd60ee98282d3"), "type" : "A", "value" : 85 }
{ "_id" : ObjectId("4e9c1f5daa3dd60ee98282d4"), "type" : "B", "value" : 12 }
Now I need to collect some statistics over that collection. For example:
db.data.mapReduce(
    function () {
        emit(this.type, this.value);
    },
    function (key, values) {
        var total = 0;
        for (var i in values) { total += values[i]; }
        return total;
    },
    { out: 'stat' }
)
collects the totals into the 'stat' collection:
> db.stat.find()
{ "_id" : "A", "value" : 154 }
{ "_id" : "B", "value" : 50 }
Up to this point everything works perfectly, but I'm stuck on the next step.
So the question is:
Is it possible to select only the documents added after the last mapReduce run and do an incremental mapReduce over them, or is there another strategy for keeping the statistics up to date on a constantly growing collection?
Answer 0 (score: 4)
You can use _id.getTime() (from: http://api.mongodb.org/java/2.6/org/bson/types/ObjectId.html) to get the time portion of the id. This should be sortable across all shards.
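For example, a small pymongo sketch of that idea (the collection name and the checkpoint handling are illustrative, not part of the original answer):

from pymongo import MongoClient

coll = MongoClient().test.data

# Newest document by _id; generation_time exposes the timestamp embedded in the ObjectId
last_doc = coll.find_one(sort=[("_id", -1)])
checkpoint = last_doc["_id"].generation_time
print("process documents created after", checkpoint)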
Answer 1 (score: 4)
You can cache the time and use it as the barrier for the next incremental map-reduce.
We're testing this at work and it seems to be working. Correct me if I'm wrong, but you can't safely do a map-reduce while inserts are happening across shards: the versions become inconsistent and the map-reduce operation will fail. (If you find a solution to this, please do let me know!)
We use bulk inserts instead, once every 5 minutes. Once all the bulk inserts are done, we run the map-reduce like this (in Python):
m = Code(<map function>)
r = Code(<reduce function>)
# pseudo code
end = last_time + 5 minutes
# Use time and optionally any other keys you need here
q = bson.SON([("date", {"$gte": last_time, "$lt": end})])
collection.map_reduce(m, r, out={"reduce": <output_collection>}, query=q)
Note that we used reduce and not merge, because we don't want to overwrite what was there before; we want to combine the old results and the new results with the same reduce function.
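For reference, here is a runnable version of the pseudo code above, assuming pymongo 3.x (where Collection.map_reduce is still available); the collection names, the 'date' field, and the checkpoint collection are illustrative assumptions, not part of the original answer:

from datetime import timedelta
from bson.code import Code
from pymongo import MongoClient

db = MongoClient().test

mapper = Code("function () { emit(this.type, this.value); }")
reducer = Code("""
function (key, values) {
    var total = 0;
    for (var i = 0; i < values.length; i++) { total += values[i]; }
    return total;
}
""")

# Barrier: only documents inserted since the last run, up to 'end'
last_time = db.checkpoints.find_one({"_id": "stat"})["last_time"]  # assumes a checkpoint doc exists
end = last_time + timedelta(minutes=5)

db.data.map_reduce(
    mapper,
    reducer,
    out={"reduce": "stat"},  # fold new totals into the existing ones instead of overwriting them
    query={"date": {"$gte": last_time, "$lt": end}},
)

# Remember the barrier for the next incremental run
db.checkpoints.update_one({"_id": "stat"}, {"$set": {"last_time": end}}, upsert=True)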
Answer 2 (score: 2)
I wrote a complete pymongo-based solution that uses incremental map-reduce and caches the time, and is meant to be run in a cron job. It locks itself, so two runs can't happen at the same time:
https://gist.github.com/2233072
""" This method performs an incremental map-reduce on any new data in 'source_table_name'
into 'target_table_name'. It can be run in a cron job, for instance, and on each execution will
process only the new, unprocessed records.
The set of data to be processed incrementally is determined non-invasively (meaning the source table is not
written to) by using the queued_date field 'source_queued_date_field_name'. When a record is ready to be processed,
simply set its queued_date (which should be indexed for efficiency). When incremental_map_reduce() is run, any documents
with queued_dates between the counter in 'counter_key' and 'max_datetime' will be map/reduced.
If reset is True, it will drop 'target_table_name' before starting.
If max_datetime is given, it will only process records up to that date.
If limit_items is given, it will only process (roughly) that many items. If multiple
items share the same date stamp (as specified in 'source_queued_date_field_name') then
it has to fetch all of those or it'll lose track, so it includes them all.
If unspecified/None, counter_key defaults to counter_table_name:LastMaxDatetime.
"""
Answer 3 (score: 0)
We solved this issue using 'normalized' ObjectIds: take the timestamp from the last processed _id and set the other parts of the id to their minimum values, for example in C#:
new ObjectId(objectId.Timestamp, 0, short.MinValue, 0)
Note: some boundary items will be processed more than once. To work around this, we set a flag on the items that have already been processed.
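The same 'normalized' boundary can be built in Python with pymongo; a sketch of the idea (the collection name, the checkpoint value, and the 'processed' flag field are illustrative): ObjectId.from_datetime creates an ObjectId whose non-timestamp bytes are zeroed, so it plays the role of the normalized id above.

from datetime import datetime
from bson.objectid import ObjectId
from pymongo import MongoClient

coll = MongoClient().test.data

last_processed = datetime(2011, 10, 17)            # timestamp taken from the last processed _id
boundary = ObjectId.from_datetime(last_processed)  # normalized ObjectId at that second

# Re-process everything at or after the (second-resolution) boundary, skipping
# documents already marked with the 'processed' flag mentioned above
docs = coll.find({"_id": {"$gte": boundary}, "processed": {"$ne": True}})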