I use Duplicity to run backups from a local server to Amazon S3. This has worked for over a year. Three days ago, I started getting the following error:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/duplicity/backends/_boto_multi.py", line 204, in _upload
    num_cb=max(2, 8 * bytes / (1024 * 1024))
  File "/usr/lib/python2.7/site-packages/boto/s3/multipart.py", line 260, in upload_part_from_file
    query_args=query_args, size=size)
  File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 1291, in set_contents_from_file
    chunked_transfer=chunked_transfer, size=size)
  File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 748, in send_file
    chunked_transfer=chunked_transfer, size=size)
  File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 949, in _send_file_internal
    query_args=query_args
  File "/usr/lib/python2.7/site-packages/boto/s3/connection.py", line 664, in make_request
    retry_handler=retry_handler
  File "/usr/lib/python2.7/site-packages/boto/connection.py", line 1068, in make_request
    retry_handler=retry_handler)
  File "/usr/lib/python2.7/site-packages/boto/connection.py", line 939, in _mexe
    request.body, request.headers)
  File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 842, in sender
    http_conn.send(chunk)
  File "/usr/lib64/python2.7/httplib.py", line 805, in send
    self.sock.sendall(data)
  File "/usr/lib64/python2.7/ssl.py", line 229, in sendall
    v = self.send(data[count:])
  File "/usr/lib64/python2.7/ssl.py", line 198, in send
    v = self._sslobj.write(data)
error: [Errno 104] Connection reset by peer
The errors persist even after I tried the following:
- Adding "--s3-use-multiprocessing" to my script file
- Adding the following two lines to /etc/sysctl.conf:
net.ipv4.tcp_wmem = 4096 16384 512000
net.ipv4.tcp_rmem = 4096 87380 512000
- Running sysctl -p to apply the settings above.
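
To rule Duplicity itself in or out, I also plan to exercise the same boto multipart code path directly with a small test. A minimal sketch of what I have in mind; the bucket name, key name, and file path are placeholders:

import boto

# Connect using the same credentials the backup script uses
# (read from the environment or the boto config file).
conn = boto.connect_s3()
bucket = conn.get_bucket('my-backup-bucket')  # placeholder bucket name

# Push one 25MB part through the multipart API -- the same
# upload_part_from_file() call that fails in the traceback above.
mp = bucket.initiate_multipart_upload('reset-test')  # placeholder key name
with open('/tmp/testfile-25mb', 'rb') as fp:         # placeholder file path
    mp.upload_part_from_file(fp, part_num=1)
mp.complete_upload()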
Three days ago, I also started running Duplicity on several other servers, backing up to a different bucket on the same account. That is when this server began reporting the connection-reset errors. The other servers work fine, and they all use the same versions of Duplicity and Python as this one. They are in a different location on a different subnet, but that shouldn't make a difference.
The chunk size on the problem server was originally 25MB; on the other servers it is 250MB. What else can I look at? My guess is that Amazon is resetting the connection, but why would it single out this one server?
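
One more thing I can try is turning on boto's wire-level logging to capture what happens right before the reset. As I understand it, these settings go in ~/.boto (the retry count is just a guess at a reasonable value):

[Boto]
# log HTTP requests and responses so the last exchange before the reset is visible
debug = 2
# retry transient failures a few more times before giving up
num_retries = 5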