I only have access to a specific directory in an S3 bucket.
For example, if I try to list the whole bucket with the s3cmd
command:
$ s3cmd ls s3://my-bucket-url
I get the error: Access to bucket 'my-bucket-url' was denied
But if I access a specific directory inside the bucket, I can see its contents:
$ s3cmd ls s3://my-bucket-url/dir-in-bucket
Now I want to connect to the S3 bucket with python boto. Similarly, with:
bucket = conn.get_bucket('my-bucket-url')
I get the error: boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
But if I try:
bucket = conn.get_bucket('my-bucket-url/dir-in-bucket')
the script stalls for about 10 seconds and then prints an error. Below is the full traceback. Any idea how to deal with this?
Traceback (most recent call last):
  File "test_s3.py", line 7, in <module>
    bucket = conn.get_bucket('my-bucket-url/dir-name')
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 471, in get_bucket
    return self.head_bucket(bucket_name, headers=headers)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 490, in head_bucket
    response = self.make_request('HEAD', bucket_name, headers=headers)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 633, in make_request
    retry_handler=retry_handler
  File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1046, in make_request
    retry_handler=retry_handler)
  File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 922, in _mexe
    request.body, request.headers)
  File "/usr/lib/python2.7/httplib.py", line 958, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python2.7/httplib.py", line 992, in _send_request
    self.endheaders(body)
  File "/usr/lib/python2.7/httplib.py", line 954, in endheaders
    self._send_output(message_body)
  File "/usr/lib/python2.7/httplib.py", line 814, in _send_output
    self.send(msg)
  File "/usr/lib/python2.7/httplib.py", line 776, in send
    self.connect()
  File "/usr/lib/python2.7/httplib.py", line 1157, in connect
    self.timeout, self.source_address)
  File "/usr/lib/python2.7/socket.py", line 553, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
socket.gaierror: [Errno -2] Name or service not known
Answer 0 (score: 27)
For boto3:
import boto3
s3 = boto3.resource('s3')
my_bucket = s3.Bucket('my_bucket_name')
for object_summary in my_bucket.objects.filter(Prefix="dir_name/"):
    print(object_summary.key)
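The resource's objects.filter() paginates behind the scenes, so this also works when the prefix contains more than 1000 objects. A minimal sketch that just collects the key names into a list, using the same placeholder bucket and prefix names as above:
import boto3

s3 = boto3.resource('s3')
# Placeholder bucket/prefix names; substitute your own.
keys = [obj.key for obj in s3.Bucket('my_bucket_name').objects.filter(Prefix='dir_name/')]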
Answer 1 (score: 23)
By default, when you call get_bucket in boto, it tries to verify that you actually have access to the bucket by issuing a HEAD request against the bucket URL. In this case you don't want boto to do that, because you don't have access to the bucket itself. So, do this:
bucket = conn.get_bucket('my-bucket-url', validate=False)
Then you should be able to list objects with something like this:
for key in bucket.list(prefix='dir-in-bucket'):
    <do something>
If you still get a 403 error, try adding a slash at the end of the prefix:
for key in bucket.list(prefix='dir-in-bucket/'):
    <do something>
Answer 2 (score: 6)
Boto3 client:
import boto3

_BUCKET_NAME = 'mybucket'
_PREFIX = 'subfolder/'

client = boto3.client('s3', aws_access_key_id=ACCESS_KEY,
                      aws_secret_access_key=SECRET_KEY)

def ListFiles(client):
    """List files in specific S3 URL"""
    response = client.list_objects(Bucket=_BUCKET_NAME, Prefix=_PREFIX)
    for content in response.get('Contents', []):
        yield content.get('Key')

file_list = ListFiles(client)
for file in file_list:
    print 'File found: %s' % file
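Note that list_objects returns at most 1000 keys per response, so a single call like the one above can silently truncate a larger directory; the paginator-based variant below fetches every page.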
Using a session:
from boto3.session import Session

_BUCKET_NAME = 'mybucket'
_PREFIX = 'subfolder/'

session = Session(aws_access_key_id=ACCESS_KEY,
                  aws_secret_access_key=SECRET_KEY)
client = session.client('s3')

def ListFilesV1(client, bucket, prefix=''):
    """List files in specific S3 URL"""
    paginator = client.get_paginator('list_objects')
    for result in paginator.paginate(Bucket=bucket, Prefix=prefix,
                                     Delimiter='/'):
        for content in result.get('Contents', []):
            yield content.get('Key')

file_list = ListFilesV1(client, _BUCKET_NAME, prefix=_PREFIX)
for file in file_list:
    print 'File found: %s' % file
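Because Delimiter='/' is passed, keys inside deeper "subfolders" are not returned in Contents. If you also need the immediate subfolder names, they can be read from each page's CommonPrefixes; a small sketch reusing the same client (names are the placeholders from above):
def ListPrefixesV1(client, bucket, prefix=''):
    """List the immediate 'subfolder' prefixes under a prefix."""
    paginator = client.get_paginator('list_objects')
    for result in paginator.paginate(Bucket=bucket, Prefix=prefix,
                                     Delimiter='/'):
        for common_prefix in result.get('CommonPrefixes', []):
            yield common_prefix.get('Prefix')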
Answer 3 (score: 3)
I ran into the same issue, and this code did the trick.
import boto3
s3 = boto3.resource("s3")
s3_bucket = s3.Bucket("bucket-name")
dir = "dir-in-bucket"
files_in_s3 = [f.key.split(dir + "/")[1] for f in
               s3_bucket.objects.filter(Prefix=dir).all()]
Answer 4 (score: 1)
The following can be used:
s3_client = boto3.client('s3')
objects = s3_client.list_objects_v2(Bucket='bucket_name')
for obj in objects['Contents']:
    print(obj['Key'])
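As written, this lists the whole bucket (and only the first 1000 keys, since nothing handles continuation tokens here). To restrict it to a directory, a Prefix can be passed; a short sketch with the same placeholder names:
objects = s3_client.list_objects_v2(Bucket='bucket_name', Prefix='dir-in-bucket/')
for obj in objects.get('Contents', []):
    print(obj['Key'])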
Answer 5 (score: 0)
If you want to list all the objects under a folder in your bucket, you can specify the folder while listing.
import boto
conn = boto.connect_s3(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
bucket = conn.get_bucket(AWS_BUCKET_NAME)
for file in bucket.list("FOLDER_NAME/", "/"):
    <do something with required file>
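Here the second argument to bucket.list() is the delimiter: with "/", only keys directly under FOLDER_NAME/ are returned as keys, and deeper "subfolders" come back as prefix entries instead.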
Answer 6 (score: 0)
The following code will list all the files in a specific directory of an S3 bucket:
import boto3

s3 = boto3.client('s3')

def get_all_s3_keys(s3_path):
    """
    Get a list of all keys in an S3 bucket.

    :param s3_path: Path of S3 dir.
    """
    keys = []

    if not s3_path.startswith('s3://'):
        s3_path = 's3://' + s3_path

    # Split "s3://bucket/prefix" into the bucket name and the key prefix.
    bucket = s3_path.split('//')[1].split('/')[0]
    prefix = '/'.join(s3_path.split('//')[1].split('/')[1:])
    kwargs = {'Bucket': bucket, 'Prefix': prefix}

    while True:
        resp = s3.list_objects_v2(**kwargs)
        # 'Contents' is absent when a page matches nothing, so default to an empty list.
        for obj in resp.get('Contents', []):
            keys.append(obj['Key'])

        # Keep requesting pages until there is no continuation token left.
        try:
            kwargs['ContinuationToken'] = resp['NextContinuationToken']
        except KeyError:
            break

    return keys
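A quick usage sketch, with the bucket and directory names taken from the question:
# The helper prepends 's3://' itself if it is missing.
for key in get_all_s3_keys('my-bucket-url/dir-in-bucket'):
    print(key)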