boto3: Lambda function executes multiple times when writing a file to S3

Asked: 2017-08-02 11:29:24

Tags: python lambda boto3

I am working on a utility that reads messages from SQS and writes them to a file in S3, using boto3 for both. It has two parts:

A: A Python client that invokes the Lambda function, passing a file key that is used to build the S3 object name (for example, the payload {'fkey': '0'} yields the object key target0.json).

B: A Lambda function that reads messages from the SQS queue, writes 100 records to a file in the /tmp folder, and uploads that file to an S3 bucket. After a message has been read from the queue and written to the file, it is deleted from the queue.

Lambda function configuration: 256 MB memory, 5 minute timeout, no VPC.

When I invoke the Lambda function from the Python client, the Lambda function is executed multiple times: instead of 100 records, 200 records are deleted from the SQS queue. I get the following errors in the logs; note that the two REPORT lines show two different RequestIds about 30 seconds apart, i.e., two separate executions:

[DEBUG] 2017-08-01T10:24:59.613Z 99fdbfb1-76a3-11e7-9b81-c76e4a7f8294 Event needs-retry.s3.PutObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7f78edd899a8>>

REPORT RequestId: 99fdbfb1-76a3-11e7-9b81-c76e4a7f8294 Duration: 29881.32 ms Billed Duration: 29900 ms Memory Size: 256 MB Max Memory Used: 46 MB 

[DEBUG] 2017-08-01T10:25:30.871Z ad06a4a7-76a3-11e7-b7d8-a7a246d0544c Event needs-retry.s3.PutObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7f78edd28770>>

REPORT RequestId: ad06a4a7-76a3-11e7-b7d8-a7a246d0544c Duration: 29847.54 ms Billed Duration: 29900 ms Memory Size: 256 MB Max Memory Used: 52 MB

If I invoke the Lambda function using the TEST button in the console, it executes correctly and only once (100 records are deleted from the queue), but the errors above still appear once.

Can anyone suggest how to fix this?

Python client

import json

import boto3

lambdaclient = boto3.client('lambda')
sqstos3lambdafuncname = "..."  # name of the deployed Lambda function (elided here)

nooflambdafunc = 1
for i in range(0, nooflambdafunc):
    num = str(i)
    print("file no:", i)
    js = json.dumps({'fkey': num})
    s3response = lambdaclient.invoke(FunctionName=sqstos3lambdafuncname, Payload=js)
    print("SQS to S3 lambda function invoked for", js)

My Lambda function

import os
import time

import boto3
from botocore.config import Config

def lambda_handler(event, context):

    config = Config(region_name='us-east-2', connect_timeout=300, read_timeout=500)

    # SQS connection
    sqsconnclient = boto3.client('sqs', config=config)
    sourcesqsn = "target"
    # SQS queue URL
    queueurl = sqsconnclient.get_queue_url(QueueName=sourcesqsn)
    sqsstring = queueurl.get('QueueUrl')

    # boto3 S3 connection (the resource's embedded client does the upload)
    s3connresource = boto3.resource('s3', config=config)
    sourcebucket = "testcy"

    # get the payload passed by the client's invoke call; event is already a dict
    filekey = event["fkey"]
    print("key is:", filekey)

    # build the S3 object key from the queue name and the client-supplied key
    filenamectr = filekey  # take input from the controller
    key = sourcesqsn + str(filenamectr) + ".json"
    numofrecords = 100  # number of records to write to the file

    # local file in the Lambda /tmp folder
    filename = '/tmp/' + key

    # read msgs one at a time (up to numofrecords) and append each to the file;
    # the file is opened once, outside the loop, so the handle is always closed
    with open(filename, 'a') as writer:
        for i in range(0, numofrecords):
            messages = sqsconnclient.receive_message(QueueUrl=sqsstring)
            if messages.get('Messages'):
                m = messages.get('Messages')[0]
                msg_body = m['Body'].replace('\n', ' ')
                writer.write(msg_body + '\n')  # write each msg on a new line
                sqsconnclient.delete_message(QueueUrl=sqsstring,
                                             ReceiptHandle=m['ReceiptHandle'])
            time.sleep(0.25)

    print("File size (MiB):", os.path.getsize(filename) >> 20)

    # upload the file to the S3 bucket
    print("uploading file to s3 bucket...", filename)
    s3connresource.meta.client.upload_file(filename, sourcebucket, key)
    print("file uploaded to s3 bucket.")
