Processing a CSV from S3 line by line with Python on Lambda

Posted: 2019-05-04 06:24:20

Tags: amazon-s3 aws-lambda

I am trying to process a 30 MB .csv file from an S3 bucket with AWS Lambda (Python). I wrote Python code locally to process the file and am now trying to run it in Lambda, but I am having trouble reading the file line by line.

Please let me know how I can iterate through the file line by line using boto3 or another S3 method. Any help is appreciated, thanks.

In Lambda:

import boto3

s3 = boto3.client("s3")
file_obj = event["Records"][0]
filename = str(file_obj['s3']['object']['key'])
# print('file name is :', filename)
fileObj = s3.get_object(Bucket=<mybucket>, Key=filename)
file_content = fileObj["Body"].read().decode('utf-8')

My original code:

import csv
import datetime

with open('sample.csv', 'r') as file_name:

    csv_reader = csv.reader(file_name, delimiter=',')
    header = ['Time', 'Latitude', 'Longitude', 'Org_Units', 'Org_Unit_Type',
              'Conv_Units', 'Conv_Unit_Type', 'Variable_Name']
    # need to rename based on the original file name
    out_filename = 'Write' + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")

    with open(out_filename + '.csv', 'w') as csvFile:
        outputwriter = csv.writer(csvFile, delimiter=',')
        outputwriter.writerow(header)
        next(csv_reader, None)  # skip header

        for row in csv_reader:
            Time = row[0]
            Org_Lat = row[1]
            Org_Long = row[2]
            Org_Units = row[3]
            Org_Unit_Type = row[4]
            Variable_Name = row[5]

            Conv_Units = ''
            Conv_Unit_Type = ''
            if Org_Unit_Type == 'm s-1':
                Conv_Units = round(float(Org_Units) * 1.151, 2)
                Conv_Unit_Type = 'miles'
            if Org_Unit_Type == 'm':
                Conv_Units = round(float(Org_Units) / 1609.344, 2)
                Conv_Unit_Type = 'miles'
            if Org_Unit_Type == 'Pa':
                Conv_Units = round(float(Org_Units) / 6894.757, 2)
                Conv_Unit_Type = 'Psi'

            date_time_obj = datetime.datetime.strptime(Time, '%m-%d-%Y, %H:%M')

            f_row = [Time, Org_Lat, Org_Long, Org_Units, Org_Unit_Type,
                     Conv_Units, Conv_Unit_Type, Variable_Name]
            outputwriter.writerow(f_row)

print("done")

2 Answers:

Answer 0 (score: 0)

I think this should work. The only thing you need to check is that your Lambda's execution role has a policy granting read access to the S3 bucket. Initially, for testing, I would give the Lambda full access to S3 with AmazonS3FullAccess:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}
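For production, the wildcard policy above can be narrowed to just what the function needs. A sketch of a least-privilege read-only policy, assuming a hypothetical bucket name `my-bucket`:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}
```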

Python code:

import urllib.parse

import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Get the bucket and key from the S3 event notification
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'])
    obj = s3.get_object(Bucket=bucket, Key=key)
    rows = obj['Body'].read().decode('utf-8').split('\n')
    print("rows:", rows)
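Since a 30 MB file fits in memory, `.read()` works here, but the question asks for line-by-line iteration. One way to avoid loading the whole body at once is to wrap the streaming body in a text wrapper and hand it to `csv.reader`. A minimal sketch; `iter_csv_rows` is a helper name of my own, and it assumes the body is a binary file-like object (which botocore's `StreamingBody` is in recent versions). Demonstrated with an in-memory stand-in for the S3 body:

```python
import csv
import io

def iter_csv_rows(body):
    """Yield parsed CSV rows from a binary file-like object without
    reading the whole file into memory at once."""
    text = io.TextIOWrapper(body, encoding="utf-8", newline="")
    yield from csv.reader(text)

# With boto3 you would pass the streaming body directly (illustrative):
#   obj = s3.get_object(Bucket=bucket, Key=key)
#   for row in iter_csv_rows(obj["Body"]):
#       ...

# In-memory stand-in for obj["Body"]:
sample = io.BytesIO(b"a,b\n1,2\n")
rows = list(iter_csv_rows(sample))
print(rows)  # [['a', 'b'], ['1', '2']]
```

This keeps memory use bounded by the longest line rather than the whole object.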

Answer 1 (score: 0)

Rather than using .read() to consume the object as a stream, you might find it easier to download the object to local storage:

s3_client = boto3.client('s3', region_name='ap-southeast-2')
s3_client.download_file(bucket, key, '/tmp/local_file.csv')
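Once the file is on local disk, the standard `csv` module can read it row by row and remove it when done. A sketch under stated assumptions: the helper name `process_local_csv` and the paths are illustrative, not part of the original answer. Demonstrated with a throwaway temp file in place of the downloaded object:

```python
import csv
import os
import tempfile

def process_local_csv(path):
    """Read the CSV row by row, then delete it so local disk does not
    fill up across reused Lambda containers."""
    try:
        with open(path, newline="") as f:
            reader = csv.reader(f)
            next(reader, None)  # skip the header row
            return [row for row in reader]
    finally:
        os.remove(path)

# In the Lambda handler you would call this on the downloaded file, e.g.:
#   s3_client.download_file(bucket, key, '/tmp/local_file.csv')
#   rows = process_local_csv('/tmp/local_file.csv')

# Throwaway temp file standing in for /tmp/local_file.csv:
fd, path = tempfile.mkstemp(suffix=".csv")
with os.fdopen(fd, "w") as f:
    f.write("h1,h2\n1,2\n")
print(process_local_csv(path))  # [['1', '2']]
```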

You can then use your original program to process the file.

Once you are finished, be sure to delete the temporary file, since the AWS Lambda container might be reused and only 500 MB of disk space is available.