I'd like to move a script to AWS Lambda that uses Python to perform some memory-intensive operations on very large CSV files stored in S3. I know I could read the whole CSV into memory, but with a file this large I would certainly run into Lambda's memory and storage limits. Is there a way to stream the CSV, or read only part of it at a time, using boto3/botocore, ideally by specifying row numbers to read in?
Here are some things I've already tried:
1) Using the Range parameter of S3.get_object to specify the range of bytes to read. Unfortunately, this means the last row gets cut off in the middle, since there is no way to specify a number of rows. There are some messy workarounds, such as scanning for the last newline character, recording its index, and using that as the starting point for the next byte range, but I'd like to avoid that clunky solution if possible.
2) Using S3 Select to write SQL queries that selectively retrieve data from the S3 bucket. Unfortunately, the row_number SQL function is not supported, and there doesn't appear to be any other way to read in a subset of rows. Rough sketches of both attempts are shown below.
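For reference, minimal sketches of what the two attempts look like (the bucket name, key, and LIMIT value are just placeholders):

import boto3

s3 = boto3.client('s3')

# 1) Range request: returns an arbitrary byte window, so the last row is
#    usually cut off mid-line.
partial = s3.get_object(Bucket='my-bucket', Key='big.csv',
                        Range='bytes=0-999999')['Body'].read()

# 2) S3 Select: LIMIT works, but there is no row_number()-style function,
#    so you can't ask for, say, rows 1,000,000 through 2,000,000.
resp = s3.select_object_content(
    Bucket='my-bucket', Key='big.csv',
    ExpressionType='SQL',
    Expression="SELECT * FROM s3object s LIMIT 100",
    InputSerialization={'CSV': {'FileHeaderInfo': 'USE'}},
    OutputSerialization={'CSV': {}},
)
for event in resp['Payload']:
    if 'Records' in event:
        print(event['Records']['Payload'].decode('utf-8'))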
Answer 0 (score: 3)
Assuming your file is not compressed, this should just involve reading from the stream and splitting on the newline character: read a chunk of data, find the last instance of the newline character in that chunk, then split and process.
import boto3

s3 = boto3.client('s3')
body = s3.get_object(Bucket=bucket, Key=key)['Body']  # bucket and key refer to your S3 object

# number of bytes to read per chunk
chunk_size = 1000000
# the character that we'll split the data with (bytes, not string)
newline = '\n'.encode()
partial_chunk = b''

while True:
    data = body.read(chunk_size)
    chunk = partial_chunk + data
    # if nothing was read there is nothing left to process
    if chunk == b'':
        break
    last_newline = chunk.rfind(newline)
    # if the stream is exhausted and the final line has no trailing
    # newline, keep the whole remainder as the last line
    if data == b'' and last_newline == -1:
        last_newline = len(chunk) - 1
    # write to a smaller file, or work against some piece of data
    result = chunk[:last_newline + 1].decode('utf-8')
    # keep the partial line you've read here
    partial_chunk = chunk[last_newline + 1:]
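If it helps, here is one possible way (an assumption on my part, not part of the loop above) to consume each decoded block of complete lines inside that loop; process() is a hypothetical placeholder for your own per-row logic:

import csv
import io

# inside the loop, after computing `result`
for row in csv.reader(io.StringIO(result)):
    process(row)  # process() is a hypothetical placeholder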
If the file is compressed, you will need to use the BytesIO and GzipFile classes inside the loop; it's a harder problem because you need to carry the gzip decompression state across chunks.
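A minimal sketch of one possible approach, which is my own assumption rather than part of the answer above: gzip.GzipFile only needs a file-like object with read(), so it can wrap the streaming body directly and keep the decompression state for you, after which the same newline-splitting loop runs over the decompressed bytes (bucket and key are placeholders, and the object is assumed to be gzip-compressed):

import gzip

import boto3

s3 = boto3.client('s3')
body = s3.get_object(Bucket=bucket, Key=key)['Body']
# GzipFile wraps the streaming body and yields decompressed bytes on read()
gz = gzip.GzipFile(fileobj=body)

newline = b'\n'
partial_chunk = b''
while True:
    data = gz.read(1000000)  # decompressed bytes, not raw S3 bytes
    chunk = partial_chunk + data
    if chunk == b'':
        break
    last_newline = chunk.rfind(newline)
    if data == b'' and last_newline == -1:
        last_newline = len(chunk) - 1
    result = chunk[:last_newline + 1].decode('utf-8')  # complete lines only
    partial_chunk = chunk[last_newline + 1:]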
Answer 1 (score: 0)
I wrote code similar to @Kirk Broadhurst's, but I ran into connection timeouts whenever processing a single chunk took longer than about five minutes. The code below works around that by opening a new connection for each chunk.
import gc

import boto3
import numpy as np
import pandas as pd

# The following credentials should not be hard coded; it's best to get these from the cli.
region_name = 'region'
aws_access_key_id = 'aws_access_key_id'
aws_secret_access_key = 'aws_secret_access_key'

s3 = boto3.client('s3', region_name=region_name,
                  aws_access_key_id=aws_access_key_id,
                  aws_secret_access_key=aws_secret_access_key)
obj = s3.get_object(Bucket='bucket', Key='key')
total_bytes = obj['ContentLength']

chunk_bytes = 1024 * 1024 * 5  # 5 MB as an example.
floor = int(total_bytes // chunk_bytes)
whole = total_bytes / chunk_bytes
total_chunks = 1 + floor if floor < whole else floor

# (start, end) byte ranges for each chunk; the last range runs to the end of the object
chunk_size_list = [(i * chunk_bytes, (i + 1) * chunk_bytes - 1) for i in range(total_chunks)]
a, b = chunk_size_list[-1]
b = total_bytes
chunk_size_list[-1] = (a, b)
chunk_size_list = [f'bytes={a}-{b}' for a, b in chunk_size_list]

prev_str = ''
for i, chunk in enumerate(chunk_size_list):
    # open a fresh connection for every chunk to avoid connection timeouts
    s3 = boto3.client('s3', region_name=region_name,
                      aws_access_key_id=aws_access_key_id,
                      aws_secret_access_key=aws_secret_access_key)
    byte_obj = s3.get_object(Bucket='bucket', Key='key', Range=chunk_size_list[i])
    byte_obj = byte_obj['Body'].read()
    str_obj = byte_obj.decode('utf-8')
    del byte_obj

    list_obj = str_obj.split('\n')
    # You can use another delimiter instead of ',' below.
    # If the first line of this chunk is only a partial row, glue it onto the
    # leftover text from the previous chunk; otherwise keep both rows.
    if len(prev_str.split(',')) < len(list_obj[1].split(',')) or len(list_obj[0].split(',')) < len(list_obj[1].split(',')):
        list_obj[0] = prev_str + list_obj[0]
    else:
        list_obj = [prev_str] + list_obj
    # the last (possibly partial) line is carried over to the next chunk
    prev_str = list_obj[-1]
    del str_obj, list_obj[-1]

    list_of_elements = [st.split(',') for st in list_obj]
    del list_obj
    df = pd.DataFrame(list_of_elements)
    del list_of_elements
    gc.collect()

    # You can process your pandas dataframe here, but you need to cast it to the correct datatypes.
    # Cast na values to numpy nan.
    na_values = ['', '#N/A', '#N/A N/A', '#NA', '-1.#IND', '-1.#QNAN', '-NaN', '-nan',
                 '1.#IND', '1.#QNAN', 'N/A', 'NA', 'NULL', 'NaN', 'n/a', 'nan', 'null']
    df = df.replace(na_values, np.nan)
    dtypes = {col1: 'float32', col2: 'category'}  # col1/col2 are placeholders for your own columns
    df = df.astype(dtype=dtypes, copy=False)