I'm quite happy with s3cmd, but there is one issue: how do I copy all files from one S3 bucket to another? Is it even possible?
Edit: I've found a way to copy files between buckets using Python with boto:

from boto.s3.connection import S3Connection
import time

def copyBucket(srcBucketName, dstBucketName, maxKeys = 100):
    conn = S3Connection(awsAccessKey, awsSecretKey)
    srcBucket = conn.get_bucket(srcBucketName)
    dstBucket = conn.get_bucket(dstBucketName)

    resultMarker = ''
    while True:
        keys = srcBucket.get_all_keys(max_keys = maxKeys, marker = resultMarker)

        for k in keys:
            print 'Copying ' + k.key + ' from ' + srcBucketName + ' to ' + dstBucketName

            t0 = time.clock()
            dstBucket.copy_key(k.key, srcBucketName, k.key)
            print time.clock() - t0, ' seconds'

        if len(keys) < maxKeys:
            print 'Done'
            break

        resultMarker = keys[maxKeys - 1].key
Syncing is almost as straightforward as copying: the ETag, size, and last-modified fields are available on the keys.
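For example, a rough sketch (not part of the copy loop above, just an illustration of how those key fields could be used to skip objects that already look identical in the destination):

def shouldCopy(srcKey, dstBucket):
    # get_key returns None when the object does not exist in the destination
    dstKey = dstBucket.get_key(srcKey.key)
    if dstKey is None:
        return True
    # Matching etag and size is a cheap "probably unchanged" heuristic;
    # etags of multipart uploads are not plain MD5 sums, so it is not a guarantee.
    return dstKey.etag != srcKey.etag or dstKey.size != srcKey.size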
Maybe this will help others too.
Answer 0 (score: 87)
s3cmd sync s3://from/this/bucket/ s3://to/this/bucket/
For the available options, use:
$s3cmd --help
Answer 1 (score: 41)
The AWS CLI seems to do the job perfectly, and it has the bonus of being an officially supported tool.
aws s3 sync s3://mybucket s3://backup-mybucket
http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html
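For example, a couple of the standard sync options (bucket names here are placeholders; --dryrun previews what would be copied, and --delete removes destination objects that no longer exist in the source):

aws s3 sync s3://mybucket s3://backup-mybucket --dryrun
aws s3 sync s3://mybucket s3://backup-mybucket --delete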
Answer 2 (score: 29)
The most-voted answer as I write this is this one:
s3cmd sync s3://from/this/bucket s3://to/this/bucket
It's a useful answer. But sometimes sync is not what you need (it deletes files, etc.). It took me a long time to figure out this non-scripting alternative for simply copying multiple files between buckets. (OK, in the case shown below it isn't between buckets. It's between not-really-folders, but it works equally well between buckets.)
# Slightly verbose, slightly unintuitive, very useful:
s3cmd cp --recursive --exclude=* --include=file_prefix* s3://semarchy-inc/source1/ s3://semarchy-inc/target/
Explanation of the above command: --recursive makes s3cmd cp operate on multiple objects, --exclude=* starts by excluding everything, and --include=file_prefix* then brings back only the objects whose names match the prefix.
s3://sourceBucket/ s3://targetBucket/ are the source and target; note that the usage documented in s3cmd --help expects a source object to be named:
s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
Answer 3 (score: 8)
I needed to copy a very large bucket, so I adapted the code from the question into a multithreaded version and put it up on GitHub.
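The GitHub version is the one to use; the sketch below only illustrates the general approach (boto, with the key list split across worker threads; the function names here are illustrative, not the code from that repository):

import threading
from boto.s3.connection import S3Connection

def copyKeys(keyNames, srcBucketName, dstBucketName, awsAccessKey, awsSecretKey):
    # Give each worker its own connection; boto connections are not thread-safe.
    conn = S3Connection(awsAccessKey, awsSecretKey)
    dstBucket = conn.get_bucket(dstBucketName)
    for name in keyNames:
        dstBucket.copy_key(name, srcBucketName, name)

def copyBucketThreaded(srcBucketName, dstBucketName, awsAccessKey, awsSecretKey, numThreads = 10):
    conn = S3Connection(awsAccessKey, awsSecretKey)
    names = [k.key for k in conn.get_bucket(srcBucketName).list()]
    # Deal the keys out round-robin so every thread gets a similar share.
    chunks = [names[i::numThreads] for i in range(numThreads)]
    threads = [threading.Thread(target = copyKeys,
                                args = (chunk, srcBucketName, dstBucketName, awsAccessKey, awsSecretKey))
               for chunk in chunks if chunk]
    for t in threads:
        t.start()
    for t in threads:
        t.join()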
Answer 4 (score: 8)
You can also do this with the web interface: in the S3 console, select the objects in the source bucket, choose Copy from the actions menu, open the destination bucket, and choose Paste.
That's it.
Answer 5 (score: 3)
It actually is possible. This worked for me:
import boto.s3.bucket
import boto.s3.connection

AWS_ACCESS_KEY = 'Your access key'
AWS_SECRET_KEY = 'Your secret key'
SRC_BUCKET_NAME = 'your-source-bucket'        # placeholder
DEST_BUCKET_NAME = 'your-destination-bucket'  # placeholder

conn = boto.s3.connection.S3Connection(AWS_ACCESS_KEY, AWS_SECRET_KEY)
bucket = boto.s3.bucket.Bucket(conn, SRC_BUCKET_NAME)

for item in bucket:
    # Note: here you can also put a path inside the DEST_BUCKET_NAME,
    # if you want your item to be stored inside a folder, like this:
    # item.copy(DEST_BUCKET_NAME, '%s/%s' % (folder_name, item.key))
    item.copy(DEST_BUCKET_NAME, item.key)
Answer 6 (score: 3)
mdahlman's code didn't work for me, but this command copies all the files from bucket1 into a new folder in bucket2 (the command also creates the new folder).
s3cmd cp --recursive --include=file_prefix* s3://bucket1/ s3://bucket2/new_folder_name/
Answer 7 (score: 2)
Thanks - I use a slightly modified version, where I only copy files that don't exist or differ in size, and check the destination for keys that exist in the source. I found this a bit quicker for readying the test environment:
from boto.s3.connection import S3Connection

def botoSyncPath(path):
    """
    Sync keys in specified path from source bucket to target bucket.
    """
    try:
        conn = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
        srcBucket = conn.get_bucket(AWS_SRC_BUCKET)
        destBucket = conn.get_bucket(AWS_DEST_BUCKET)

        for key in srcBucket.list(path):
            destKey = destBucket.get_key(key.name)
            if not destKey or destKey.size != key.size:
                key.copy(AWS_DEST_BUCKET, key.name)

        for key in destBucket.list(path):
            srcKey = srcBucket.get_key(key.name)
            if not srcKey:
                key.delete()
    except:
        return False

    return True
Answer 8 (score: 2)
I wrote a script that backs up an S3 bucket: https://github.com/roseperrone/aws-backup-rake-task
#!/usr/bin/env python
from boto.s3.connection import S3Connection
import re
import datetime
import sys
import time

def main():
    s3_ID = sys.argv[1]
    s3_key = sys.argv[2]
    src_bucket_name = sys.argv[3]
    num_backup_buckets = sys.argv[4]
    connection = S3Connection(s3_ID, s3_key)
    delete_oldest_backup_buckets(connection, num_backup_buckets)
    backup(connection, src_bucket_name)

def delete_oldest_backup_buckets(connection, num_backup_buckets):
    """Deletes the oldest backup buckets such that only the newest NUM_BACKUP_BUCKETS - 1 buckets remain."""
    buckets = connection.get_all_buckets() # returns a list of bucket objects
    num_buckets = len(buckets)

    backup_bucket_names = []
    for bucket in buckets:
        if (re.search('backup-' + r'\d{4}-\d{2}-\d{2}', bucket.name)):
            backup_bucket_names.append(bucket.name)

    backup_bucket_names.sort(key=lambda x: datetime.datetime.strptime(x[len('backup-'):17], '%Y-%m-%d').date())

    # The buckets are sorted latest to earliest, so we want to keep the last NUM_BACKUP_BUCKETS - 1
    delete = len(backup_bucket_names) - (int(num_backup_buckets) - 1)
    if delete <= 0:
        return

    for i in range(0, delete):
        print 'Deleting the backup bucket, ' + backup_bucket_names[i]
        connection.delete_bucket(backup_bucket_names[i])

def backup(connection, src_bucket_name):
    now = datetime.datetime.now()
    # the month and day must be zero-filled
    new_backup_bucket_name = 'backup-' + str('%02d' % now.year) + '-' + str('%02d' % now.month) + '-' + str('%02d' % now.day)
    print "Creating new bucket " + new_backup_bucket_name
    new_backup_bucket = connection.create_bucket(new_backup_bucket_name)
    copy_bucket(src_bucket_name, new_backup_bucket_name, connection)

def copy_bucket(src_bucket_name, dst_bucket_name, connection, maximum_keys = 100):
    src_bucket = connection.get_bucket(src_bucket_name)
    dst_bucket = connection.get_bucket(dst_bucket_name)

    result_marker = ''
    while True:
        keys = src_bucket.get_all_keys(max_keys = maximum_keys, marker = result_marker)

        for k in keys:
            print 'Copying ' + k.key + ' from ' + src_bucket_name + ' to ' + dst_bucket_name

            t0 = time.clock()
            dst_bucket.copy_key(k.key, src_bucket_name, k.key)
            print time.clock() - t0, ' seconds'

        if len(keys) < maximum_keys:
            print 'Done backing up.'
            break

        result_marker = keys[maximum_keys - 1].key

if __name__ == '__main__':
    main()
I use this in a rake task (for a Rails app):
desc "Back up a file onto S3"
task :backup do
S3ID = "*****"
S3KEY = "*****"
SRCBUCKET = "primary-mzgd"
NUM_BACKUP_BUCKETS = 2
Dir.chdir("#{Rails.root}/lib/tasks")
system "./do_backup.py #{S3ID} #{S3KEY} #{SRCBUCKET} #{NUM_BACKUP_BUCKETS}"
end
Answer 9 (score: 1)
s3cmd won't cp with only prefixes or wildcards, but you can script the behaviour with 's3cmd ls sourceBucket', using awk to extract the object names, and then use 's3cmd cp sourceBucket/name destBucket' to copy each object name in the list.
I use these batch files in a DOS box on Windows:
s3list.bat
s3cmd ls %1 | gawk "/s3/{ print \"\\"\"\"substr($0,index($0,\"s3://\"))\"\\"\"\"; }"
s3copy.bat
@for /F "delims=" %%s in ('s3list %1') do @s3cmd cp %%s %2
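On Linux/macOS, a rough equivalent as a one-off shell pipeline (my own sketch, not from the batch files above; the bucket names are placeholders, and the awk expression simply keeps everything from the s3:// URI onwards in each listing line):

s3cmd ls s3://sourceBucket/ | awk '{ print substr($0, index($0, "s3://")) }' | while read -r object; do
    s3cmd cp "$object" s3://destBucket/
done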
Answer 10 (score: 1)
You can also use s3funnel, which uses multi-threading:
https://github.com/neelakanta/s3funnel
Example (access key and secret key parameters not shown):
s3funnel source-bucket-name list | s3funnel dest-bucket-name copy --source-bucket source-bucket-name --threads=10