I am trying to read a CSV file located in an AWS S3 bucket into memory as a pandas dataframe using the following code:
import pandas as pd
import boto
data = pd.read_csv('s3:/example_bucket.s3-website-ap-southeast-2.amazonaws.com/data_1.csv')
To provide full access, I have set the following bucket policy on the S3 bucket:
{
  "Version": "2012-10-17",
  "Id": "statement1",
  "Statement": [
    {
      "Sid": "statement1",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::example_bucket"
    }
  ]
}
Unfortunately, I still get the following error in Python:
boto.exception.S3ResponseError: S3ResponseError: 405 Method Not Allowed
I am wondering if someone could help explain how to correctly set the permissions in AWS S3, or how to configure pandas correctly to import the file. Thanks!
Answer 0 (score: 10)
Using pandas 0.20.3:
import os
import boto3
import pandas as pd
import sys
if sys.version_info[0] < 3:
    from StringIO import StringIO  # Python 2.x
else:
    from io import StringIO  # Python 3.x
# get your credentials from environment variables
aws_id = os.environ['AWS_ID']
aws_secret = os.environ['AWS_SECRET']
client = boto3.client('s3', aws_access_key_id=aws_id,
                      aws_secret_access_key=aws_secret)
bucket_name = 'my_bucket'
object_key = 'my_file.csv'
csv_obj = client.get_object(Bucket=bucket_name, Key=object_key)
body = csv_obj['Body']
csv_string = body.read().decode('utf-8')
df = pd.read_csv(StringIO(csv_string))
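As a side note, recent pandas versions can also read s3:// paths directly if the s3fs package is installed, so the following one-liner should work too (a sketch, assuming s3fs is available and your credentials are configured in the environment):
# sketch: requires the s3fs package; credentials are picked up from the
# standard AWS environment variables or config files
df = pd.read_csv('s3://my_bucket/my_file.csv')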
Answer 1 (score: 4)
You don't need pandas; you can use Python's built-in csv library:
import csv
import boto.s3
from boto.s3.key import Key

def read_file(bucket_name, region, remote_file_name, aws_access_key_id, aws_secret_access_key):
    # reads a csv from AWS
    # first you establish a connection with your credentials and region id
    conn = boto.s3.connect_to_region(
        region,
        aws_access_key_id=aws_access_key_id,
        aws_secret_access_key=aws_secret_access_key)
    # next you obtain the key of the csv you want to read
    # you will need the bucket name and the csv file name
    bucket = conn.get_bucket(bucket_name, validate=False)
    key = Key(bucket)
    key.key = remote_file_name
    data = key.get_contents_as_string()
    key.close()
    # the contents come back as a single string, so you need to split it
    # usually the split characters are '\r\n'; if not, just read the file normally
    # and find out what they are
    reader = csv.reader(data.split('\r\n'))
    data = []
    header = next(reader)
    for row in reader:
        data.append(row)
    return data
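A hypothetical call would look like this (all arguments are placeholders to replace with your own values):
# hypothetical usage; substitute your own bucket, region and credentials
rows = read_file('my_bucket', 'ap-southeast-2', 'data_1.csv',
                 'MY_ACCESS_KEY_ID', 'MY_SECRET_ACCESS_KEY')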
Hope it solves your problem, good luck! :)
Answer 2 (score: 3)
I eventually realised that you also need to set the permissions on each individual object within the bucket in order to extract it, which I did using the following code:
from boto.s3.key import Key
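# 'bucket' is assumed to be a boto Bucket object from an existing S3 connection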
k = Key(bucket)
k.key = 'data_1.csv'
k.set_canned_acl('public-read')
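To apply this to every object instead of one key at a time, a loop over the bucket listing works too (a sketch using the same boto API):
# sketch: set the same canned ACL on every key in the bucket
for k in bucket.list():
    k.set_canned_acl('public-read')
The underlying reason is that the bare bucket ARN in the policy above only covers bucket-level actions; granting access to the objects themselves requires a Resource entry like "arn:aws:s3:::example_bucket/*".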
I also had to modify the bucket address in the pd.read_csv command, as follows:
data = pd.read_csv('https://s3-ap-southeast-2.amazonaws.com/example_bucket/data_1.csv')
Answer 3 (score: 0)
Based on this answer, which suggested using smart_open for reading from S3, here is how I used it together with pandas:
import os
import pandas as pd
from smart_open import smart_open
aws_key = os.environ['AWS_ACCESS_KEY']
aws_secret = os.environ['AWS_SECRET_ACCESS_KEY']
bucket_name = 'my_bucket'
object_key = 'my_file.csv'
path = 's3://{}:{}@{}/{}'.format(aws_key, aws_secret, bucket_name, object_key)
df = pd.read_csv(smart_open(path))
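If your credentials are already configured for boto3 (environment variables or ~/.aws/credentials), a variant that avoids embedding secrets in the URL should also work (a sketch, assuming smart_open falls back to the default credential chain):
# sketch: relies on smart_open picking up the default boto3 credentials
df = pd.read_csv(smart_open('s3://my_bucket/my_file.csv'))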
Answer 4 (score: 0)
You can also try using pandas read_sql with pyathena:
from pyathena import connect
import pandas as pd
conn = connect(s3_staging_dir='s3://bucket/folder', region_name='region')
df = pd.read_sql('select * from database.table', conn)  # keep the 'database.table' (database-qualified) form
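Note that this goes through Amazon Athena rather than reading the file directly: the CSV must already be registered as a table in Athena, and s3_staging_dir is the location where Athena writes its query results.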