I am using Python boto and threading to download many files from S3 quickly. I use this in my program several times and it works great. However, there is one case where it does not: in that step, I try to download 3,000 files on a 32-core machine (an Amazon EC2 cc2.8xlarge).
The code below actually downloads every file successfully (except for the occasional httplib.IncompleteRead error that the retries don't fix). However, only about 10 of the 32 threads actually terminate, and the program just hangs. I'm not sure why, since all the files have been downloaded and all the threads should have exited. They do on the other steps, where I download fewer files. I've been reduced to downloading all of these files with a single thread (which works, but is very slow). Any insight would be greatly appreciated!
from boto.ec2.connection import EC2Connection
from boto.s3.connection import S3Connection
from boto.s3.key import Key
from boto.exception import BotoClientError
from socket import error as socket_error
from httplib import IncompleteRead
import multiprocessing
from time import sleep
import os
import Queue
import threading
def download_to_dir(keys, dir):
    """
    Given a list of S3 keys and a local directory filepath,
    downloads the files corresponding to the keys to the local directory.
    Returns a list of filenames.
    """
    filenames = [None for k in keys]

    class DownloadThread(threading.Thread):

        def __init__(self, queue, dir):
            # call the parent constructor
            threading.Thread.__init__(self)
            # create a connection to S3
            connection = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
            self.conn = connection
            self.dir = dir
            self.__queue = queue

        def run(self):
            while True:
                key_dict = self.__queue.get()
                print self, key_dict
                if key_dict is None:
                    print "DOWNLOAD THREAD FINISHED"
                    break
                elif key_dict == 'DONE':  # last job for last worker
                    print "DOWNLOADING DONE"
                    break
                else:  # still work to do!
                    index = key_dict.get('idx')
                    key = key_dict.get('key')
                    bucket_name = key.bucket.name
                    bucket = self.conn.get_bucket(bucket_name)
                    k = Key(bucket)  # clone key to use new connection
                    k.key = key.key
                    filename = os.path.join(dir, k.key)
                    # make dirs if they don't exist yet
                    try:
                        f_dirname = os.path.dirname(filename)
                        if not os.path.exists(f_dirname):
                            os.makedirs(f_dirname)
                    except OSError:  # already written to
                        pass
                    # inspired by: http://code.google.com/p/s3funnel/source/browse/trunk/scripts/s3funnel?r=10
                    RETRIES = 5  # attempt at most 5 times
                    wait = 1
                    for i in xrange(RETRIES):
                        try:
                            k.get_contents_to_filename(filename)
                            break
                        except (IncompleteRead, socket_error, BotoClientError), e:
                            if i == RETRIES - 1:  # failed final attempt
                                raise Exception('FAILED TO DOWNLOAD %s, %s' % (k, e))
                                break
                            wait *= 2
                            sleep(wait)
                    # put filename in right spot!
                    filenames[index] = filename

    num_cores = multiprocessing.cpu_count()

    q = Queue.Queue(0)
    for i, k in enumerate(keys):
        q.put({'idx': i, 'key': k})
    for i in range(num_cores - 1):
        q.put(None)  # add end-of-queue markers
    q.put('DONE')  # to signal absolute end of job

    # Spin up all the workers
    workers = [DownloadThread(q, dir) for i in range(num_cores)]
    for worker in workers:
        worker.start()

    # Block main thread until completion
    for worker in workers:
        worker.join()

    return filenames
Answer 0 (score: 4)
Upgrade to AWS SDK version 1.4.4.0 or newer, or stick to exactly 2 threads. Older versions have a limit of at most 2 simultaneous connections. This means that your code will work well if you launch 2 threads; if you launch 3 or more, you are bound to see incomplete reads and exhausted timeouts.
You will see that while 2 threads can boost your throughput greatly, going above 2 changes very little, because your network card is busy all the time anyway.
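For illustration only (this sketch is not part of the original answer), capping the pool at 2 workers would mean changing the question's worker and sentinel setup; here one None sentinel per worker replaces the question's None/'DONE' mix, since each worker must receive exactly one end-of-queue marker or the extras will block forever on queue.get():

num_workers = min(2, multiprocessing.cpu_count())  # 2 = the connection limit described above

q = Queue.Queue(0)
for i, k in enumerate(keys):
    q.put({'idx': i, 'key': k})
for i in range(num_workers):
    q.put(None)  # one end-of-queue marker per worker

workers = [DownloadThread(q, dir) for i in range(num_workers)]
for worker in workers:
    worker.start()
for worker in workers:
    worker.join()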
Answer 1 (score: 0)
S3Connection uses httplib.py, and that library is not thread-safe, so it is essential to ensure that each thread has its own connection. It looks like you are doing that.
Boto already has its own retry mechanism, but you are layering one on top of it to handle certain other errors. I wonder whether it would be advisable to create a new S3Connection object inside the except block. The underlying http connection is probably in an unusual state at that point, and it would be best to start with a fresh connection.
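As a rough sketch of that idea (an assumption about what the fix might look like, not tested code from this answer), the retry loop from the question could rebuild the connection and re-clone the key before backing off:

for i in xrange(RETRIES):
    try:
        k.get_contents_to_filename(filename)
        break
    except (IncompleteRead, socket_error, BotoClientError), e:
        if i == RETRIES - 1:  # failed final attempt
            raise Exception('FAILED TO DOWNLOAD %s, %s' % (k, e))
        # The underlying httplib connection may be in a bad state after an
        # error, so discard it and rebuild the key against a fresh connection.
        self.conn = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
        bucket = self.conn.get_bucket(bucket_name)
        k = Key(bucket)
        k.key = key.key
        wait *= 2
        sleep(wait)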
Just a thought.