Firebase Cloud Functions: Firestore returns "Deadline Exceeded"

Asked: 2017-10-09 21:37:10

Tags: javascript firebase google-cloud-functions google-cloud-firestore

I took a sample function from the Firestore documentation and was able to run it successfully from my local Firebase environment. However, once I deploy it to my Firebase server, the function completes, but no entry is created in the Firestore database. The Firebase function logs show "Deadline Exceeded." I'm a bit confused. Does anyone know why this is happening, and how to fix it?

Here is the sample function:

exports.testingFunction = functions.https.onRequest((request, response) => {
    var data = {
        name: 'Los Angeles',
        state: 'CA',
        country: 'USA'
    };

    // Add a new document in collection "cities" with ID 'LA'
    var db = admin.firestore();
    var setDoc = db.collection('cities').doc('LA').set(data);

    response.status(200).send();
});

5 Answers:

Answer 0 (score: 6)

Firestore has quota limits.

"Deadline Exceeded" can occur when you run into them.

See https://firebase.google.com/docs/firestore/quotas


Maximum write rate to a single document: 1 per second

https://groups.google.com/forum/#!msg/google-cloud-firestore-discuss/tGaZpTWQ7tQ/NdaDGRAzBgAJ
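If you repeatedly update the same document, one way to stay under that limit is to space the writes out. A minimal sketch in plain Node.js (the throttledWrites helper, the callback shape, and the interval are assumptions for illustration, not part of the quoted answer):

```javascript
// Run async write operations one at a time, waiting at least
// `intervalMs` from the start of one write to the start of the next.
// With intervalMs >= 1000 this respects Firestore's 1 write/sec
// sustained limit for a single document.
async function throttledWrites(writeFns, intervalMs) {
  const results = [];
  for (const writeFn of writeFns) {
    const started = Date.now();
    results.push(await writeFn());
    const elapsed = Date.now() - started;
    if (elapsed < intervalMs) {
      await new Promise((resolve) => setTimeout(resolve, intervalMs - elapsed));
    }
  }
  return results;
}
```

Each writeFn would be something like () => docRef.set(data) in real code; here any async function works.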

Answer 1 (score: 2)

I wrote this small script, which uses batched writes (max. 500 per batch) and only writes one batch after another.

To use it, first create a batch worker: let batch: any = new FbBatchWorker(db);. Then add anything to the worker with batch.set(ref.doc(docId), MyObject); and finish it with batch.commit(). The API is the same as a normal Firestore batch (https://firebase.google.com/docs/firestore/manage-data/transactions#batched-writes), but currently it only supports set.

import {firestore} from "firebase-admin";

export default class FbBatchWorker {

    db: firestore.Firestore;
    batchList: FirebaseFirestore.WriteBatch[] = [];
    elemCount: number = 0;

    constructor(db: firestore.Firestore) {
        this.db = db;
        this.batchList.push(this.db.batch());
    }

    // Commit the batches sequentially, one after another.
    async commit(): Promise<void> {
        for (const _batch of this.batchList) {
            await _batch.commit();
            console.log("finished writing batch");
        }
    }

    // Queue a set() on the current batch; start a new batch every
    // 490 writes to stay safely under Firestore's 500-operation limit.
    set(dbref: FirebaseFirestore.DocumentReference, data: any): void {
        this.elemCount = this.elemCount + 1;
        if (this.elemCount % 490 === 0) {
            this.batchList.push(this.db.batch());
        }
        this.batchList[this.batchList.length - 1].set(dbref, data);
    }

}
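The core idea of the worker above, splitting a long list of writes into groups of at most 500, can be sketched independently of Firestore in plain JavaScript (the chunk helper name is hypothetical; the 500 limit is Firestore's documented batched-write maximum):

```javascript
// Split an array into consecutive chunks of at most `size` elements,
// mirroring how FbBatchWorker caps each WriteBatch below Firestore's
// 500-operation limit.
function chunk(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}
```

Each chunk would then back one WriteBatch, committed one after another as in the class above.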

Answer 2 (score: 0)

In my own experience, this problem can also occur when you try to write documents over a poor internet connection.

I used a solution similar to Jurgen's suggestion, inserting batches of fewer than 500 documents at a time, and this error occurred whenever I was on an unstable connection. The same script with the same data ran without errors once I plugged in the cable.

Answer 3 (score: 0)

If the error appears after about 10 seconds, it is probably not your internet connection; your function may simply not be returning a promise. In my experience, I got the error merely because I had wrapped the Firebase set operation (which itself returns a promise) inside another promise. You can do

return db.collection("COL_NAME").doc("DOC_NAME").set(attribs).then(ref => {
    var SuccessResponse = {
        "code": "200"
    };

    var resp = JSON.stringify(SuccessResponse);
    return resp;
}).catch(err => {
    console.log('Quiz Error OCCURED ', err);
    var FailureResponse = {
        "code": "400",
    };

    var resp = JSON.stringify(FailureResponse);
    return resp;
});

instead of

return new Promise((resolve, reject) => {
    db.collection("COL_NAME").doc("DOC_NAME").set(attribs).then(ref => {
        var SuccessResponse = {
            "code": "200"
        };

        var resp = JSON.stringify(SuccessResponse);
        return resp;
    }).catch(err => {
        console.log('Quiz Error OCCURED ', err);
        var FailureResponse = {
            "code": "400",
        };

        var resp = JSON.stringify(FailureResponse);
        return resp;
    });
});
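The difference is easy to reproduce without Firebase: the wrapper promise above never calls resolve or reject, so the caller waits forever. A small sketch with a stand-in for the set call (the function names and the op parameter are illustrative, not from the answer):

```javascript
// Returning the chain directly: the caller's promise settles
// as soon as the inner operation does.
function saveGood(op) {
  return op().then(() => "200").catch(() => "400");
}

// Wrapping in a new Promise without ever calling resolve/reject:
// the returned promise stays pending forever, which in a Cloud
// Function eventually surfaces as a deadline-style timeout.
function saveBad(op) {
  return new Promise((resolve, reject) => {
    op().then(() => "200").catch(() => "400");
  });
}
```

awaiting saveGood returns "200" or "400" immediately; awaiting saveBad hangs.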

Answer 4 (score: 0)

I tested this by writing 10,000 requests to different collections/documents, milliseconds apart, using 15 concurrent AWS Lambda functions. I did not get the DEADLINE_EXCEEDED error.

See the documentation on Firebase:

'Deadline Exceeded': the deadline expired before the operation could complete. For operations that change the state of the system, this error may be returned even if the operation has completed successfully. For example, a successful response from a server could have been delayed long enough for the deadline to expire.

In our case we are writing a small amount of data, and it works most of the time, but losing data is unacceptable. I have not come to a conclusion as to why Firestore fails to write a simple small amount of data.
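Because the quoted documentation says the operation may have succeeded even when the error comes back, any retry should be idempotent, e.g. a set() with a fixed document ID rather than an add(). A hedged sketch of a generic retry helper with exponential backoff (the helper name, attempt count, and delays are assumptions):

```javascript
// Retry an async operation up to `maxAttempts` times with exponential
// backoff. Safe to use with Firestore set() on a fixed document ID,
// because repeating the same set() is idempotent even if an earlier
// attempt actually succeeded before its DEADLINE_EXCEEDED came back.
async function retryWithBackoff(op, maxAttempts = 3, baseDelayMs = 100) {
  let lastError;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      const delay = baseDelayMs * 2 ** attempt; // 100, 200, 400, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

In a queue-driven setup like the one below, the queue's redrive policy plays the same role at a coarser granularity.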

Solution:

I am using an AWS Lambda function with an SQS event trigger.

  # This function receives requests from the queue and handles them
  # by persisting the survey answers for the respective users.
  QuizAnswerQueueReceiver:
    handler: app/lambdas/quizAnswerQueueReceiver.handler
    timeout: 180 # The SQS visibility timeout should always be greater than the Lambda function’s timeout.
    reservedConcurrency: 1 # optional, reserved concurrency limit for this function. By default, AWS uses account concurrency limit    
    events:
      - sqs:
          batchSize: 10 # Wait for 10 messages before processing.
          maximumBatchingWindow: 60 # The maximum amount of time in seconds to gather records before invoking the function
          arn:
            Fn::GetAtt:
              - SurveyAnswerReceiverQueue
              - Arn
    environment:
      NODE_ENV: ${self:custom.myStage}

I am using a dead letter queue connected to the main queue for failed events.

  Resources:
    QuizAnswerReceiverQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: ${self:provider.environment.QUIZ_ANSWER_RECEIVER_QUEUE}
        # VisibilityTimeout MUST be greater than the lambda functions timeout https://lumigo.io/blog/sqs-and-lambda-the-missing-guide-on-failure-modes/

        # The length of time during which a message will be unavailable after a message is delivered from the queue.
        # This blocks other components from receiving the same message and gives the initial component time to process and delete the message from the queue.
        VisibilityTimeout: 900 # The SQS visibility timeout should always be greater than the Lambda function’s timeout.

        # The number of seconds that Amazon SQS retains a message. You can specify an integer value from 60 seconds (1 minute) to 1,209,600 seconds (14 days).
        MessageRetentionPeriod: 345600  # The number of seconds that Amazon SQS retains a message. 
        RedrivePolicy:
          deadLetterTargetArn:
            "Fn::GetAtt":
              - QuizAnswerReceiverQueueDLQ
              - Arn
          maxReceiveCount: 5 # The number of times a message is delivered to the source queue before being moved to the dead-letter queue.
    QuizAnswerReceiverQueueDLQ:
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "${self:provider.environment.QUIZ_ANSWER_RECEIVER_QUEUE}DLQ"
        MessageRetentionPeriod: 1209600 # 14 days in seconds