Inserting data into AWS Redshift via AWS Lambda

Date: 2016-01-25 14:07:59

Tags: python amazon-web-services amazon-s3 aws-lambda

I am trying to do the following:

When I upload a csv file to AWS S3, AWS Lambda should detect it, create a table in AWS Redshift, and store the data in it. I can do this process without Lambda, but I want to automate it.

So I wrote a Lambda function to detect the upload of a csv file, and so on.

Now, after resolving some errors, I get output that makes no sense to me:

Loading function
START RequestId: e8baee71-c36b-11e5-b1cb-87083ac95a25 Version: $LATEST
END RequestId: e8baee71-c36b-11e5-b1cb-87083ac95a25
REPORT RequestId: e8baee71-c36b-11e5-b1cb-87083ac95a25  Duration: 67.04 ms  Billed Duration: 100 ms     Memory Size: 512 MB Max Memory Used: 44 MB  

This is my Lambda Python file. It sits in the root of my zip file, and the zip file also contains a folder named 'psycopg2'.

from __future__ import print_function

import json
import urllib
import boto3
import psycopg2
import linecache

print('Loading function')

s3 = boto3.client('s3')


def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key']).decode('utf8')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)

        #SEND MAIL WHEN CREATED

        #from = "email@gmail.com"
        #password = "password.email"
        #mail = smtplib.SMTP("smtp.gmail.com",587)
        #mail.ehlo()
        #mail.starttls()
        #mail.login(from,password)

        #recipient = "recipient.email"
        #mail.sendmail(from,recipient,key)


        #CREATE REDSHIFT TABLE WHEN CSV FILE UPLOADED
        if(key == "*.csv"):
            conn_string = "dbname=" + "xxxx" + " port=" + "5439" + " user=" + "yyyyy" + " password=" + "xxxxx*" + " host=" + "xxxxxxx.amazonaws.com";
            connection = psycopg2.connect(conn_string)
            cursor = connection.cursor();

            cursor.execute("select exists(select * from information_schema.tables where table_name=%s)", (key,))
            if(cursor.fetchone()[0]):
                return
            else:
                sqlcommand = 'create table ' + key + '('

                line = linecache.getline(key,1)
                line = line.replace(' ', '')
                line = line.replace('/', '')
                line = line.replace(':', '')
                line2 = linecache.getline(key,2)
                df1 = line
                df2 = line2
                output = ''
                output2 = ''
                for row1 in df1:
                    output = output + row1

                for row2 in df2:
                    output2 = output2 + row2

                new = output.split(',')
                new2 = output2.split(',')
                i = 0;
                for var in new:
                    new2[i] = new2[i].replace(' ', '')
                    sqlcommand = sqlcommand + var + ' ' + self._strType(new2[i])
                    i = i + 1;
                sqlcommand = sqlcommand[:-1]
                sqlcommand = sqlcommand + ');'

                cursor.execute(sqlcommand)
                connection.commit();

                print("CONTENT TYPE: " + response['ContentType'])
                return response['ContentType']
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e

1 answer:

Answer 0 (score: 5)

This is not an error. This is what success looks like.
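The log shows a clean START/END/REPORT with none of the function's `print` output, which is consistent with the handler returning without entering the Redshift branch. One likely reason (my observation, not stated in the original answer): `if(key == "*.csv")` is a literal string comparison with the five-character string `"*.csv"`, not a glob match, so it is only true for an object actually named `*.csv`. A minimal sketch of the intended check, using the standard-library `fnmatch` module:

```python
import fnmatch


def is_csv_key(key):
    """Return True when the S3 object key looks like a CSV file."""
    # The original `key == "*.csv"` tests literal equality, so the
    # branch that creates the Redshift table is never reached.
    # A shell-style glob match (or simply key.endswith(".csv"))
    # matches any key with a .csv suffix:
    return fnmatch.fnmatch(key, "*.csv")


print(is_csv_key("uploads/data.csv"))  # True
print(is_csv_key("uploads/data.txt"))  # False
```

A further caveat: even once that branch is reached, the call to `self._strType(...)` inside the module-level `lambda_handler` would raise a `NameError`, since there is no `self` in scope there.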