I have a zip file in a folder of an S3 bucket, and I want to unzip it using boto3. Here is my code so far:
def unzip_file(path, file_name):
    s3 = boto3.resource('s3', aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY)
    my_bucket = s3.Bucket(BUCKET)
    lst = list(my_bucket.objects.filter(Prefix=path))
    unzip_path = '/'.join(str(lst[0].key).split('/')[:-1])
    with zipfile.ZipFile(f"{path}/{file_name}", 'r') as zip_ref:
        zip_ref.extractall(unzip_path)
But this just raises an error like the following:
Traceback (most recent call last):
  File "download.py", line 153, in <module>
    unzip_file(path, file_name)
  File "download.py", line 32, in unzip_file
    with zipfile.ZipFile(f"{path}/{file_name}", 'r') as zip_ref:
  File "/Users/sashaanksekar/anaconda3/lib/python3.8/zipfile.py", line 1250, in __init__
    self.fp = io.open(file, filemode)
FileNotFoundError: [Errno 2] No such file or directory: 'test_parent/test_num/test.zip'
How do I unzip the file using Python and boto3?
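For context, the traceback occurs because `zipfile.ZipFile` is handed the S3 key `test_parent/test_num/test.zip` as if it were a local path; the object's bytes have to be fetched from S3 first. A minimal sketch of that idea (the helper names `open_zip_bytes` and `fetch_zip_from_s3` are mine, not from the question, and credentials are assumed to be resolved from the environment):

```python
import io
import zipfile


def open_zip_bytes(data: bytes) -> zipfile.ZipFile:
    """Open a zip archive directly from bytes held in memory."""
    return zipfile.ZipFile(io.BytesIO(data))


def fetch_zip_from_s3(bucket: str, key: str) -> zipfile.ZipFile:
    """Read the zip object's bytes out of S3, then open them from memory."""
    import boto3  # deferred so the pure-bytes helper has no AWS dependency

    s3 = boto3.client('s3')  # credentials resolved from the environment
    obj = s3.get_object(Bucket=bucket, Key=key)
    return open_zip_bytes(obj['Body'].read())
```

`fetch_zip_from_s3(BUCKET, f"{path}/{file_name}")` would then return an open archive whose members can be listed with `namelist()` and read with `read()`, all without touching the local disk.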
I have edited the code, and the zip file is now in memory. How do I extract all of its files into S3?
Here is my code:
def unzip_file(r, path, file_name):
    s3 = boto3.resource('s3', aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY)
    my_bucket = s3.Bucket(BUCKET)
    if r.status_code == 200:
        filebytes = BytesIO(r.content)
        file = zipfile.ZipFile(filebytes)
        extract_folder = f"{path}extract_test/"
        # extract each file in file.namelist() and save in extract_folder here
Answer 0: (score: 0)
Since I'm not sure what `r.content` is, or the logic behind the function, I have provided a working example:
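The answer's example did not survive in this copy; the following is a reconstruction of the usual approach, not the original answerer's code: iterate over the in-memory archive's members and upload each one back to S3 under `extract_folder` with `put_object`. The helper names are hypothetical, and credentials are again assumed to come from the environment.

```python
import io
import zipfile


def iter_zip_members(zip_bytes: bytes):
    """Yield (member_name, member_bytes) for every file in an in-memory zip,
    skipping directory entries (S3 has no real folders)."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        for name in archive.namelist():
            if not name.endswith('/'):
                yield name, archive.read(name)


def extract_zip_to_s3(zip_bytes: bytes, bucket: str, extract_folder: str) -> list:
    """Upload each member of the in-memory zip to S3 under extract_folder,
    returning the list of keys written."""
    import boto3  # deferred so iter_zip_members stays AWS-free

    s3 = boto3.client('s3')
    written = []
    for name, data in iter_zip_members(zip_bytes):
        key = f"{extract_folder}{name}"
        s3.put_object(Bucket=bucket, Key=key, Body=data)
        written.append(key)
    return written
```

In the question's function this would be called as `extract_zip_to_s3(r.content, BUCKET, extract_folder)` inside the `status_code == 200` branch, replacing the `# extract each file ...` comment.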