I am trying to restore a single file or directory from my Amazon S3 backup using duplicity, but I get an error.
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
Traceback (most recent call last):
File "/usr/bin/duplicity", line 1251, in <module>
with_tempdir(main)
File "/usr/bin/duplicity", line 1244, in with_tempdir
fn()
File "/usr/bin/duplicity", line 1198, in main
restore(col_stats)
File "/usr/bin/duplicity", line 538, in restore
restore_get_patched_rop_iter(col_stats)):
File "/usr/bin/duplicity", line 560, in restore_get_patched_rop_iter
backup_chain = col_stats.get_backup_chain_at_time(time)
File "/usr/lib/python2.6/dist-packages/duplicity/collections.py", line 934, in get_backup_chain_at_time
raise CollectionsError("No backup chains found")
CollectionsError: No backup chains found
What am I doing wrong?
Here is how I make the backup:
export PASSPHRASE=****
export AWS_ACCESS_KEY_ID=****
export AWS_SECRET_ACCESS_KEY=****
GPG_KEY=****
BACKUP_SIM_RUN=1
LOGFILE="/var/log/s3-backup.log"
DAILYLOGFILE="/var/log/s3-backup-daily.log"
# The source of your backup
SOURCE=/home/u54433
# The destination
DEST=s3+http://**********
trace () {
stamp=`date +%Y-%m-%d_%H:%M:%S`
echo "$stamp: $*" >> ${DAILYLOGFILE}
}
cat /dev/null > ${DAILYLOGFILE}
trace "removing old backups..."
duplicity remove-older-than 2M --force --sign-key=${GPG_KEY} ${DEST} >> ${DAILYLOGFILE} 2>&1
trace "start backup files..."
duplicity --sign-key=${GPG_KEY} --exclude="**/logs" --s3-european-buckets --s3-use-new-style ${SOURCE} ${DEST} >> ${DAILYLOGFILE} 2>&1
cat "$DAILYLOGFILE" >> $LOGFILE
export PASSPHRASE=
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
Answer 0 (score: 2)
Use the --s3-use-new-style option in all duplicity calls.
I ran into the same problem you did. I added the missing option to the "duplicity remove-older-than" call, and now everything works fine.
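Applied to a restore, that advice would look roughly like the sketch below. The bucket URL, file path, and destination are placeholders (the question masks the real values); the key point is that the S3 options must match those used when the backup was written.

```shell
# Hypothetical restore call -- replace the bucket URL and paths with your own.
# --s3-use-new-style (and --s3-european-buckets, since the backup script used it)
# must be passed here as well, not only when backing up.
export PASSPHRASE=****
export AWS_ACCESS_KEY_ID=****
export AWS_SECRET_ACCESS_KEY=****

duplicity restore \
    --s3-use-new-style --s3-european-buckets \
    --file-to-restore u54433/somefile.txt \
    s3+http://your-bucket-name /tmp/somefile.txt
```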
Answer 1 (score: 0)
It may be best to remove the S3 bucket from Amazon and try recreating the full backup; that could solve the problem.
Also, you can see the following link.
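Starting over as suggested above can be done explicitly with duplicity's `full` subcommand, which forces a fresh full backup rather than an incremental one. A minimal sketch, reusing the placeholder values from the question's script:

```shell
# Force a new full backup chain after clearing out the bucket.
# GPG_KEY and the bucket URL are placeholders, as in the original script.
duplicity full --sign-key=${GPG_KEY} \
    --s3-european-buckets --s3-use-new-style \
    /home/u54433 s3+http://your-bucket-name
```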
Answer 2 (score: 0)
For anyone coming back to this question looking for a definitive answer, @shaikh-systems' link leads to the realization that there is some issue with duplicity/AWS communication when using IAM sub-account keys. To restore, I got it working by using/exporting my primary account key/secret. I'm using duplicity 0.6.21.
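In other words, the workaround amounts to something like the following (all key values and the bucket URL are placeholders): export the primary account's credentials instead of the IAM sub-account keys before running the restore.

```shell
# Use the primary (root) account's AWS keys rather than IAM sub-account keys,
# since the sub-account keys were triggering the CollectionsError.
export AWS_ACCESS_KEY_ID=****        # primary account access key
export AWS_SECRET_ACCESS_KEY=****    # primary account secret key
export PASSPHRASE=****

duplicity restore --s3-use-new-style \
    s3+http://your-bucket-name /tmp/restore-target
```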