Can Mongoose save large data across multiple documents?

Date: 2019-09-06 08:36:22

Tags: node.js mongodb mongoose

Is it possible to store large Mongoose data across multiple documents, grouped under a single collection?

For example:

-> Collection -> Customers, which holds documents keyed by ObjectId.

So what I am looking for is: is it possible to store the data across multiple documents, but with all of them linked together? My problem is that the server throws an error when a customer sends a very large request and the data is saved, so I need a way to store large data.

drives: [{
    name: String,
    driveID: String,
    phases: [] // contains nested sub-arrays and can grow to ~20k lines of JSON, because it holds a questionnaire
}]
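One way to keep documents "linked together" as asked above is to split the oversized phases array into fixed-size chunks and store each chunk as its own document sharing the parent's driveID. A minimal sketch of the chunking step in plain JavaScript (the chunkPhases helper and the chunk size are illustrative choices, not from the original post):

```javascript
// Split a large array into fixed-size chunks so each chunk can be
// stored as a separate document that shares the same driveID.
function chunkPhases(phases, chunkSize) {
  const chunks = [];
  for (let i = 0; i < phases.length; i += chunkSize) {
    chunks.push(phases.slice(i, i + chunkSize));
  }
  return chunks;
}

// Each chunk would then be saved as its own document, e.g.
// { driveID: "...", part: index, phases: chunk }.
const phases = Array.from({ length: 10 }, (_, i) => ({ index: String(i) }));
const chunks = chunkPhases(phases, 4);
console.log(chunks.map(c => c.length)); // [ 4, 4, 2 ]
```

Reassembling a drive is then a matter of querying all chunks by driveID and concatenating their phases arrays in part order.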

For example:

"phases": [
    {
        "phase": "driveOne",
        "text": "Drive One",
        "textKey": "",
        "index": "1.0",
        "subjects": [
            {
                "id": "1.1.1",
                "text": "test",
                "textKey": ""
            }
        ]
    },
    // for demo purposes the same object is repeated...
    {
        "phase": "driveOne",
        "text": "Drive One",
        "textKey": "",
        "index": "1.0",
        "subjects": [
            {
                "id": "1.1.1",
                "text": "test",
                "textKey": ""
            }
        ]
    }
]

2 Answers:

Answer 0 (score: 0)

Here you can update the document inside a for loop.

In my example, change the relevant keys to match your database:

userController.signUp = async function (req, res) {
    // Sample payload; in practice this comes from the client.
    const phase = {
        "phase": "driveOne",
        "text": "Drive One",
        "textKey": "",
        "index": "1.0",
        "subjects": [{
            "id": "1.1.1",
            "text": "test",
            "textKey": ""
        }]
    };
    // For demo purposes the same object is repeated.
    req.body.installation = [phase, phase, phase, phase];

    // Use your own model name instead of userModel.
    userModel.findOne({ _id: "5d1d9691019db61515450574" })
        .exec(async (err, found) => {
            if (err) {
                res.send(err);
            } else {
                for (let i = 0; i < req.body.installation.length; i++) {
                    found.installation[i] = req.body.installation[i];
                    await userModel.findOneAndUpdate({ _id: found._id }, found, { upsert: true, new: true });
                }
                res.send(found);
            }
        });
};

I tried this with 200,000 records and it did not throw an error.

Answer 1 (score: 0)

MongoDB can handle a large volume of requests. If your server has a reasonably high configuration, e.g. RAM >= 8 GB, it works fine. The issue with a single document is that it can store at most 16 MB of data (the BSON document size limit); beyond that you have to split the data up. A simple solution is to store the data across multiple documents. If you need to fetch a large amount of data at once, you have to paginate it. For computations such as total size, you can use an aggregation query. I have tested this against 900,000 documents in a single collection: my aggregation query responded within 300 ms, while a plain find took 2.20 to 3 minutes.
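The 16 MB limit mentioned above can be checked before saving by estimating a payload's serialized size. A rough sketch in plain JavaScript (JSON.stringify only approximates the BSON size, and the fitsInOneDocument helper is illustrative, not a MongoDB API):

```javascript
// MongoDB's BSON document size limit is 16 MB.
const MAX_DOC_BYTES = 16 * 1024 * 1024;

// Rough pre-flight check: the JSON byte length approximates (but does
// not equal) the BSON size of the document MongoDB would store.
function fitsInOneDocument(doc) {
  return Buffer.byteLength(JSON.stringify(doc), "utf8") < MAX_DOC_BYTES;
}

const small = { phase: "driveOne", subjects: [{ id: "1.1.1" }] };
console.log(fitsInOneDocument(small)); // true
```

If the check fails, that is the signal to split the payload across multiple documents instead of rejecting the request.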

Pagination, e.g.:

 req.body.pageNo = 1
 req.body.pageSize = 100

 .skip(req.body.pageSize * ((req.body.pageNo === 0 ? 1 : req.body.pageNo) - 1))
 .limit(req.body.pageSize)
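The skip/limit arithmetic above can be pulled into a small helper; the pageParams name is an illustrative choice, but the formula matches the snippet (page numbers start at 1, with 0 treated as 1):

```javascript
// Translate a 1-based page number and page size into skip/limit values,
// mirroring the .skip()/.limit() expressions shown above.
function pageParams(pageNo, pageSize) {
  const page = pageNo === 0 ? 1 : pageNo;
  return { skip: pageSize * (page - 1), limit: pageSize };
}

console.log(pageParams(1, 100)); // { skip: 0, limit: 100 }
console.log(pageParams(3, 100)); // { skip: 200, limit: 100 }
// Usage with a Mongoose query:
//   const p = pageParams(req.body.pageNo, req.body.pageSize);
//   Model.find().skip(p.skip).limit(p.limit)
```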