I have some documents in MongoDB that look like this:
{type: type1, version: 2, data: ...}
{type: type1, version: 3, data: ...}
{type: type2, version: 1, data: ...}
{type: type2, version: 2, data: ...}
...
I want to update the data of the document matching both type AND version, or create a new document when the version does not match, but I want to forbid creating a new document with a new type. When I do this:
db.getCollection('products').update({"type": "unknown_type", "version" : "99"}, {$set: {"version": 99, "data": new data}}, {"upsert": true})
it creates a new document:
{type: unknown_type, version: 99, data: ...}
which is exactly what I want to forbid. Is there a way to do this in a single call? Is there a way to restrict the values of certain fields?
Answer (score: 1)
The best handling I can see for this use case is to use "Bulk Operations" in order to send both the "update" and the "insert" command in the same request. We also need a unique index on the two fields to enforce that no new combinations of them can actually be created.
Starting with these documents:
{ "type" : "type1", "version" : 2 }
{ "type" : "type1", "version" : 3 }
{ "type" : "type2", "version" : 1 }
{ "type" : "type2", "version" : 2 }
And creating a unique index on the two fields:
db.products.createIndex({ "type": 1, "version": 1 },{ "unique": true })
Then we try and do something that will actually insert, using the bulk operations for both the update and the insert:
db.products.bulkWrite(
    [
        { "updateOne": {
            "filter": { "type": "type3", "version": 1 },
            "update": { "$set": { "data": {} } }
        }},
        { "insertOne": {
            "document": { "type": "type3", "version": 1, "data": {} }
        }}
    ],
    { "ordered": false }
)
We should get a response like this:
{
    "acknowledged" : true,
    "deletedCount" : 0,
    "insertedCount" : 1,
    "matchedCount" : 0,
    "upsertedCount" : 0,
    "insertedIds" : {
        "1" : ObjectId("594257b6fc2a40e470719470")
    },
    "upsertedIds" : { }
}
Note here that the matchedCount was 0, reflecting that the "update" part of the batch matched nothing:

"matchedCount" : 0,
If I did the same thing again, with different data:
db.products.bulkWrite(
    [
        { "updateOne": {
            "filter": { "type": "type3", "version": 1 },
            "update": { "$set": { "data": { "a": 1 } } }
        }},
        { "insertOne": {
            "document": { "type": "type3", "version": 1, "data": { "a": 1 } }
        }}
    ],
    { "ordered": false }
)
Then we see:
BulkWriteError({
    "writeErrors" : [
        {
            "index" : 1,
            "code" : 11000,
            "errmsg" : "E11000 duplicate key error collection: test.products index: type_1_version_1 dup key: { : \"type3\", : 1.0 }",
            "op" : {
                "_id" : ObjectId("5942583bfc2a40e470719471"),
                "type" : "type3",
                "version" : 1,
                "data" : {
                    "a" : 1
                }
            }
        }
    ],
    "writeConcernErrors" : [ ],
    "nInserted" : 0,
    "nUpserted" : 0,
    "nMatched" : 1,
    "nModified" : 1,
    "nRemoved" : 0,
    "upserted" : [ ]
})
This will consistently throw an error in all drivers, but we can also see in the detail of the response:
"nMatched" : 1,
"nModified" : 1,
This means that even though the "insert" failed, the "update" actually did its job. The important thing to note here is that while "errors" can occur in the "batch", we can handle them when they are of the predicted type, namely the 11000 code for the duplicate key errors we expected.
So the end data of course looks like:
{ "type" : "type1", "version" : 2 }
{ "type" : "type1", "version" : 3 }
{ "type" : "type2", "version" : 1 }
{ "type" : "type2", "version" : 2 }
{ "type" : "type3", "version" : 1, "data" : { "a" : 1 } }
Which is what you wanted to achieve here.
So the operations will produce an exception, but by marking the batch as "unordered" with the { "ordered": false } option to .bulkWrite(), it will at least commit any instructions that did not result in an error.
In this case, the typical result is that either the "insert" works and there is no update, or the "insert" fails while the "update" applies. When the failure is returned in the response, you can check that the "index" of the error is 1, indicating the expected "insert" failure, and that the error code is 11000, the expected "duplicate key" error.
The errors in the "expected" case can therefore be ignored and you would only need handle the "unexpected" errors for a different code an/or different position in the issued bulk instruction.