Amazon does not offer this through its API. We ran into the same problem and solved it by running a daily cron job that re-uploads files to Glacier.

Here is a snippet of code that copies a file into a Glacier vault using Python and boto. Note that with the code below, you have to download the file locally from S3 before running it (you could use s3cmd, for instance); the following code works for uploading a local file to Glacier.
import boto.glacier.layer2

# Set up your AWS key and secret, and vault name
aws_key = "AKIA1234"
aws_secret = "ABC123"
glacierVault = "someName"

# Assumption is that this file has been downloaded from S3
fileName = "localfile.tgz"

try:
    # Connect to Glacier via boto's layer2 interface
    l = boto.glacier.layer2.Layer2(aws_access_key_id=aws_key,
                                   aws_secret_access_key=aws_secret)

    # Get your Glacier vault
    v = l.get_vault(glacierVault)

    # Upload file using concurrent upload (so large files are OK)
    archiveID = v.concurrent_create_archive_from_file(fileName)

    # Append this archiveID to a local file, that way you remember what file
    # in Glacier corresponds to a local file. Glacier has no concept of files.
    with open("glacier.txt", "a") as f:
        f.write(fileName + " " + archiveID + "\n")
except Exception as e:
    print("Could not upload gzipped file to Glacier: %s" % e)
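Since Glacier identifies archives only by ID, that `glacier.txt` mapping is what you would use later to find the archive ID for a restore. A minimal sketch of reading it back (the file format matches what the snippet above writes; the function name is my own):

```python
# Parse the "filename archiveID" lines written by the upload script,
# so a later restore job can look up the Glacier archive ID for a file.
def load_archive_map(path):
    mapping = {}
    with open(path) as f:
        for line in f:
            # Split on the last space: the filename itself may contain spaces
            parts = line.rstrip("\n").rsplit(" ", 1)
            if len(parts) == 2:
                mapping[parts[0]] = parts[1]
    return mapping
```

Usage would be something like `load_archive_map("glacier.txt")["localfile.tgz"]` to get the archive ID to pass to a Glacier retrieval job.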
I'd like this feature too. I don't think it exists right now, though. – 2013-03-10 21:58:51

What are you trying to accomplish by mirroring S3 to Glacier? – 2013-03-10 22:54:47

@EricHammond I want to back up my S3 files on Glacier. – 2013-03-11 11:35:14