Listing the contents of a bucket with boto3

Date: 2015-05-14 23:22:56

Tags: python amazon-s3 boto boto3

How can I see what is inside a bucket in S3 with boto3? (i.e. do an "ls")?

Doing the following:

import boto3
s3 = boto3.resource('s3')
my_bucket = s3.Bucket('some/path/')

returns:

s3.Bucket(name='some/path/')

How do I see its contents?

17 Answers:

Answer 0 (score: 149)

One way to see the contents would be:

for my_bucket_object in my_bucket.objects.all():
    print(my_bucket_object)

Answer 1 (score: 71)

This is similar to an 'ls', but it does not take the prefix-folder convention into account and will list every object in the bucket. It is left to the reader to filter out prefixes that are part of the key name.

In Python 2:

from boto.s3.connection import S3Connection

conn = S3Connection() # assumes boto.cfg setup
bucket = conn.get_bucket('bucket_name')
for obj in bucket.get_all_keys():
    print(obj.key)

In Python 3:

from boto3 import client

conn = client('s3')  # again assumes boto.cfg setup, assume AWS S3
for key in conn.list_objects(Bucket='bucket_name')['Contents']:
    print(key['Key'])
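Note that list_objects returns at most 1,000 keys per response; larger listings have to be fetched page by page, following a truncation flag and continuation token. The paging logic can be sketched without touching AWS at all (the fake response dicts below are invented for illustration and only mimic the shape of list_objects_v2 responses):

```python
# Sketch of the continuation loop that S3-style list APIs require.
# The fake pages stand in for successive list_objects_v2 responses.
fake_pages = [
    {"Contents": [{"Key": "a.txt"}, {"Key": "b.txt"}],
     "IsTruncated": True, "NextContinuationToken": "tok-1"},
    {"Contents": [{"Key": "c.txt"}],
     "IsTruncated": False},
]

def list_all_keys(pages):
    """Accumulate keys across pages, following the truncation flag."""
    keys = []
    page_iter = iter(pages)
    page = next(page_iter)
    while True:
        keys.extend(item["Key"] for item in page.get("Contents", []))
        if not page["IsTruncated"]:
            break
        page = next(page_iter)  # real code would pass NextContinuationToken here
    return keys

print(list_all_keys(fake_pages))  # → ['a.txt', 'b.txt', 'c.txt']
```

Answer 4 below shows the same loop against the real API.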

Answer 2 (score: 21)

I am assuming you have configured authentication separately.

import boto3
s3 = boto3.resource('s3')

my_bucket = s3.Bucket('bucket_name')

for file in my_bucket.objects.all():
    print(file.key)

Answer 3 (score: 21)

If you want to pass the ACCESS and SECRET keys (which you should not do, because it is insecure):

from boto3.session import Session

ACCESS_KEY='your_access_key'
SECRET_KEY='your_secret_key'

session = Session(aws_access_key_id=ACCESS_KEY,
                  aws_secret_access_key=SECRET_KEY)
s3 = session.resource('s3')
your_bucket = s3.Bucket('your_bucket')

for s3_file in your_bucket.objects.all():
    print(s3_file.key)

Answer 4 (score: 12)

In order to handle large key listings (i.e. when the directory listing is greater than 1000 items), I used the following code to accumulate key values (i.e. filenames) across multiple listings (thanks to Amelio above for the first line). The code is for Python 3:

from boto3 import client

def get_matching_keys(bucket_name="my_bucket", prefix="my_key/sub_key/lots_o_files"):
    s3_conn = client('s3')  # again assumes boto.cfg setup, assume AWS S3
    s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Prefix=prefix, Delimiter="/")

    if 'Contents' not in s3_result:
        return []

    file_list = []
    for key in s3_result['Contents']:
        file_list.append(key['Key'])
    print(f"List count = {len(file_list)}")

    while s3_result['IsTruncated']:
        continuation_key = s3_result['NextContinuationToken']
        s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Prefix=prefix, Delimiter="/", ContinuationToken=continuation_key)
        for key in s3_result['Contents']:
            file_list.append(key['Key'])
        print(f"List count = {len(file_list)}")
    return file_list

Answer 5 (score: 6)

My s3 keys utility function is essentially an optimized version of @Hephaestus's answer:

import boto3


s3_paginator = boto3.client('s3').get_paginator('list_objects_v2')


def keys(bucket_name, prefix='/', delimiter='/', start_after=''):
    prefix = prefix[1:] if prefix.startswith(delimiter) else prefix
    start_after = (start_after or prefix) if prefix.endswith(delimiter) else start_after
    for page in s3_paginator.paginate(Bucket=bucket_name, Prefix=prefix, StartAfter=start_after):
        for content in page.get('Contents', ()):
            yield content['Key']

In my tests (boto3 1.9.84), it is significantly faster than the equivalent (but simpler) code:

import boto3


def keys(bucket_name, prefix='/', delimiter='/'):
    prefix = prefix[1:] if prefix.startswith(delimiter) else prefix
    bucket = boto3.resource('s3').Bucket(bucket_name)
    return (_.key for _ in bucket.objects.filter(Prefix=prefix))

As S3 guarantees UTF-8 binary sorted results, a start_after optimization has been added to the first function.
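The start_after optimization works precisely because the listing comes back in UTF-8 binary sorted order: the server can jump straight past start_after and every later key under the prefix is still reached. A stdlib-only illustration of that ordering argument (the key names here are invented):

```python
import bisect

# Keys as S3 would return them: UTF-8 binary sorted.
keys = sorted(["logs/2020/a.log", "logs/2020/b.log",
               "logs/2021/a.log", "other/x.txt"])

def keys_after(sorted_keys, start_after, prefix):
    """Skip straight past start_after, then take keys while they match prefix."""
    i = bisect.bisect_right(sorted_keys, start_after)
    out = []
    for k in sorted_keys[i:]:
        if not k.startswith(prefix):
            break  # sorted order: once past the prefix range, nothing else matches
        out.append(k)
    return out

print(keys_after(keys, "logs/2020/", "logs/"))
# → ['logs/2020/a.log', 'logs/2020/b.log', 'logs/2021/a.log']
```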

Answer 6 (score: 5)

A more parsimonious way, rather than iterating through a for loop, is to print the raw object containing all the files in your S3 bucket:

from boto3.session import Session

session = Session(aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key)
s3 = session.resource('s3')
bucket = s3.Bucket('bucket_name')

files_in_s3 = bucket.objects.all() 
#you can print this iterable with print(list(files_in_s3))

Answer 7 (score: 2)

ObjectSummary:

There are two identifiers that are attached to the ObjectSummary:

  • bucket_name
  • key

boto3 S3: ObjectSummary

More on Object Keys from the AWS S3 documentation:

Object Keys:

    When you create an object, you specify the key name, which uniquely identifies the object in the bucket. For example, in the Amazon S3 console (see AWS Management Console), when you highlight a bucket, a list of objects in your bucket appears. These names are the object keys. The name for a key is a sequence of Unicode characters whose UTF-8 encoding is at most 1024 bytes long.

    The Amazon S3 data model is a flat structure: you create a bucket, and the bucket stores objects. There is no hierarchy of subbuckets or subfolders; however, you can infer logical hierarchy using key name prefixes and delimiters as the Amazon S3 console does. The Amazon S3 console supports a concept of folders. Suppose that your bucket (admin-created) has four objects with the following object keys:

    Development/Projects1.xls

    Finance/statement1.pdf

    Private/taxdocument.pdf

    s3-dg.pdf

Reference:

AWS S3: Object Keys
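The "inferred hierarchy" the documentation describes is what the Delimiter parameter computes server-side: keys containing the delimiter are rolled up into CommonPrefixes, the rest are returned as Contents. Using the four example keys from the quoted documentation, the grouping can be reproduced in plain Python (this is only a local sketch of the server behavior):

```python
# Reproduce S3's Delimiter='/' grouping for the documented example keys.
keys = ["Development/Projects1.xls", "Finance/statement1.pdf",
        "Private/taxdocument.pdf", "s3-dg.pdf"]

def split_by_delimiter(keys, delimiter="/"):
    """Return (common_prefixes, contents) like a Delimiter listing would."""
    prefixes, contents = set(), []
    for key in keys:
        if delimiter in key:
            # Everything up to and including the first delimiter is a "folder".
            prefixes.add(key.split(delimiter, 1)[0] + delimiter)
        else:
            contents.append(key)
    return sorted(prefixes), contents

print(split_by_delimiter(keys))
# → (['Development/', 'Finance/', 'Private/'], ['s3-dg.pdf'])
```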

Below is some example code that demonstrates how to get the bucket name and the object key.

Example:

import boto3
from pprint import pprint

def main():

    def enumerate_s3():
        s3 = boto3.resource('s3')
        for bucket in s3.buckets.all():
            print("Name: {}".format(bucket.name))
            print("Creation Date: {}".format(bucket.creation_date))
            for object in bucket.objects.all():
                print("Object: {}".format(object))
                print("Object bucket_name: {}".format(object.bucket_name))
                print("Object key: {}".format(object.key))

    enumerate_s3()


if __name__ == '__main__':
    main()

Answer 8 (score: 2)

This is a simple function that returns the filenames of all files, or of files with certain types such as 'json' or 'jpg'.

import boto3

def get_file_list_s3(bucket, prefix="", file_extension=None):
    """Return the list of all file paths (prefix + file name) with a certain type, or all
    Parameters
    ----------
    bucket: str
        The name of the bucket. For example, if your bucket is "s3://my_bucket" then it should be "my_bucket"
    prefix: str
        The full path to the 'folder' of the files (objects). For example, if your files are in
        s3://my_bucket/recipes/deserts then it should be "recipes/deserts". Default: ""
    file_extension: str
        The type of the files. If you want all, just leave it None. If you only want "json" files then it
        should be "json". Default: None
    Return
    ------
    file_names: list
        The list of file names including the prefix
    """
    s3 = boto3.resource('s3')
    my_bucket = s3.Bucket(bucket)
    file_objs = my_bucket.objects.filter(Prefix=prefix).all()
    file_names = [file_obj.key for file_obj in file_objs
                  if file_extension is None or file_obj.key.split(".")[-1] == file_extension]
    return file_names
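The extension filter in this answer is just string matching on the key, so the check can be exercised without S3 at all (the key names below are invented to match the docstring's example):

```python
def filter_keys_by_extension(keys, file_extension=None):
    """Keep all keys when file_extension is None, else only the matching ones."""
    if file_extension is None:
        return list(keys)
    return [k for k in keys if k.split(".")[-1] == file_extension]

keys = ["recipes/deserts/cake.json", "recipes/deserts/pie.jpg", "recipes/readme"]
print(filter_keys_by_extension(keys, "json"))  # → ['recipes/deserts/cake.json']
print(filter_keys_by_extension(keys))          # all three keys
```

Note that splitting on "." means a key with no dot (like "recipes/readme") is compared against its whole name; matching on k.endswith("." + file_extension) would be stricter.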

Answer 9 (score: 1)

I just do it this way, including the authentication method:

import boto3

def key_exists(key):
    s3_client = boto3.client(
        's3',
        aws_access_key_id='access_key',
        aws_secret_access_key='access_key_secret',
        config=boto3.session.Config(signature_version='s3v4'),
        region_name='region'
    )

    response = s3_client.list_objects(Bucket='bucket_name', Prefix=key)
    if 'Contents' in response:
        # Object / key exists!
        return True
    else:
        # Object / key DOES NOT exist!
        return False
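The existence check above boils down to whether the response dict has a 'Contents' entry, since S3 omits it entirely when nothing matches the prefix. With two made-up response shapes, the logic looks like:

```python
def key_prefix_exists(response):
    """True when a list_objects-style response contains at least one object."""
    return "Contents" in response

hit = {"Contents": [{"Key": "data/file.csv"}]}
miss = {}  # S3 leaves out 'Contents' when no key matches the prefix
print(key_prefix_exists(hit), key_prefix_exists(miss))  # → True False
```

Because Prefix matching is a string prefix, this reports True for any key starting with the given string, not only an exact key match.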

Answer 10 (score: 1)

Content-Type

Answer 11 (score: 1)

Slightly modifying @Hephaestus's code from one of the comments above, I wrote the following method to list folders and objects (files) in a given path. It works similarly to the s3 ls command.

import boto3

def s3_ls(profile=None, bucket_name=None, folder_path=None):
    folders = []
    files = []
    result = dict()
    prefix = folder_path or ''
    session = boto3.Session(profile_name=profile)
    s3_conn = session.client('s3')
    s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Delimiter="/", Prefix=prefix)
    if 'Contents' not in s3_result and 'CommonPrefixes' not in s3_result:
        return []

    if s3_result.get('CommonPrefixes'):
        for folder in s3_result['CommonPrefixes']:
            folders.append(folder.get('Prefix'))

    if s3_result.get('Contents'):
        for key in s3_result['Contents']:
            files.append(key['Key'])

    while s3_result['IsTruncated']:
        continuation_key = s3_result['NextContinuationToken']
        s3_result = s3_conn.list_objects_v2(Bucket=bucket_name, Delimiter="/", ContinuationToken=continuation_key, Prefix=prefix)
        if s3_result.get('CommonPrefixes'):
            for folder in s3_result['CommonPrefixes']:
                folders.append(folder.get('Prefix'))
        if s3_result.get('Contents'):
            for key in s3_result['Contents']:
                files.append(key['Key'])

    if folders:
        result['folders']=sorted(folders)
    if files:
        result['files']=sorted(files)
    return result

This lists all objects/folders in the given path. folder_path can be left as None, in which case the method lists the immediate contents of the root of the bucket.

Answer 12 (score: 1)

So you're asking for the equivalent of aws s3 ls in boto3. That would be listing all the top-level folders and files. This is the closest I could get; it only lists all the top-level folders. Surprising how difficult such a simple operation is.

import boto3

def s3_ls():
  s3 = boto3.resource('s3')
  bucket = s3.Bucket('example-bucket')
  result = bucket.meta.client.list_objects(Bucket=bucket.name,
                                           Delimiter='/')
  for o in result.get('CommonPrefixes', []):
    print(o.get('Prefix'))

Answer 13 (score: 1)

One way that I used to do this:

import boto3
s3 = boto3.resource('s3')
bucket=s3.Bucket("bucket_name")
contents = [_.key for _ in bucket.objects.all() if "subfolders/ifany/" in _.key]

Answer 14 (score: 0)

Here is the solution:

import boto3

s3=boto3.resource('s3')
BUCKET_NAME = 'Your S3 Bucket Name'
allFiles = s3.Bucket(BUCKET_NAME).objects.all()
for file in allFiles:
    print(file.key)


Answer 16 (score: 0)

import boto3
s3 = boto3.resource('s3')

## Bucket to use
my_bucket = s3.Bucket('city-bucket')

## List objects within a given prefix
for obj in my_bucket.objects.filter(Delimiter='/', Prefix='city/'):
    print(obj.key)

Output:

city/pune.csv
city/goa.csv