Need to change the storage "directory" of files in an S3 bucket (CarrierWave/Fog)

I'm uploading photos to S3 with CarrierWave and three separate models. I kept the uploader defaults, which store photos at the root of the S3 bucket. Then I decided to store them in subdirectories named after the model they were uploaded from (avatars/, items/, and so on).
At that point I noticed that files with the same name were being overwritten, and that when I deleted a model record its photo was not deleted.
So I changed store_dir from a model-specific setting like this:
def store_dir
  "items"
end
to a generic one that stores photos under the model's ID (I'm using Mongo, FYI):
def store_dir
  "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
Here's the problem. I'm trying to move all the photos that are already in S3 into the correct "directories" — although, from what I've read, S3 doesn't really have directories. I'm running into trouble with my rake task: since I changed store_dir, CarrierWave now looks for all previously uploaded photos in the wrong location.
namespace :pics do
  desc "Fix directory location of pictures on s3"
  task :item_update => :environment do
    connection = Fog::Storage.new({
      :provider => 'AWS',
      :aws_access_key_id => 'XXXX',
      :aws_secret_access_key => 'XXX'
    })
    directory = connection.directories.get("myapp-uploads-dev")
    Recipe.all.each do |l|
      if l.images.count > 0
        l.items.each do |i|
          if i.picture.path.to_s != ""
            new_full_path = i.picture.path.to_s
            filename = new_full_path.split('/')[-1].split('?')[0]
            thumb_filename = "thumb_#{filename}"
            original_file_path = "items/#{filename}"
            puts "attempting to retrieve: #{original_file_path}"
            original_thumb_file_path = "items/#{thumb_filename}"
            photo = directory.files.get(original_file_path) rescue nil
            if photo
              puts "we found: #{original_file_path}"
              photo.expires = 2.years.from_now.httpdate
              photo.key = new_full_path
              photo.save
              thumb_photo = directory.files.get(original_thumb_file_path) rescue nil
              if thumb_photo
                puts "we found: #{original_thumb_file_path}"
                thumb_photo.expires = 2.years.from_now.httpdate
                thumb_photo.key = "/uploads/item/picture/#{i.id}/#{thumb_filename}"
                thumb_photo.save
              end
            end
          end
        end
      end
    end
  end
end
So I loop through all the Recipes, look for Items with photos, work out the old CarrierWave path, and try to update it to the new one based on the store_dir change. I figured that if I just updated photo.key with the new path it would work, but it doesn't.
What am I doing wrong? Is there a better way to accomplish this?
Here's how I got this working...
namespace :pics do
  desc "Fix directory location of pictures"
  task :item_update => :environment do
    connection = Fog::Storage.new({
      :provider => 'AWS',
      :aws_access_key_id => 'XXX',
      :aws_secret_access_key => 'XXX'
    })
    bucket = "myapp-uploads-dev"
    puts "Using bucket: #{bucket}"
    Recipe.all.each do |l|
      if l.images.count > 0
        l.items.each do |i|
          if i.picture.path.to_s != ""
            new_full_path = i.picture.path.to_s
            filename = new_full_path.split('/')[-1].split('?')[0]
            thumb_filename = "thumb_#{filename}"
            original_file_path = "items/#{filename}"
            original_thumb_file_path = "items/#{thumb_filename}"
            puts "attempting to retrieve: #{original_file_path}"
            # copy original item
            begin
              connection.copy_object(bucket, original_file_path, bucket, new_full_path, 'x-amz-acl' => 'public-read')
              puts "we just copied: #{original_file_path}"
            rescue
              puts "couldn't find: #{original_file_path}"
            end
            # copy thumb
            begin
              connection.copy_object(bucket, original_thumb_file_path, bucket, "uploads/item/picture/#{i.id}/#{thumb_filename}", 'x-amz-acl' => 'public-read')
              puts "we just copied: #{original_thumb_file_path}"
            rescue
              puts "couldn't find thumb: #{original_thumb_file_path}"
            end
          end
        end
      end
    end
  end
end
Maybe not the prettiest thing in the world, but it works.
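Since the copy itself is just a key rename, the old-to-new key mapping can be pulled out and unit-tested on its own. A sketch (the migrated_keys helper and the sample path are illustrative, not part of CarrierWave):

```ruby
# Sketch: compute the old flat keys and the new nested key for one photo.
# migrated_keys is a hypothetical helper mirroring the logic in the rake task.
def migrated_keys(new_full_path)
  filename = File.basename(new_full_path.split("?").first)
  {
    old_original: "items/#{filename}",
    old_thumb:    "items/thumb_#{filename}",
    new_original: new_full_path
  }
end

keys = migrated_keys("uploads/item/picture/123/photo.jpg")
puts keys[:old_original] # => items/photo.jpg
puts keys[:old_thumb]    # => items/thumb_photo.jpg
```

One thing worth noting: copy_object leaves the source objects in place, so once the migration is verified you may want a second pass that deletes the old items/ keys.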
I would have sort of expected that to work. Was there an error, or did the files just not exist where you expected? If the numbers are small this would probably work fine, but especially for larger numbers you probably want to use copy_object as jeremy mentions below (as it can do everything more quickly and without needing to download anything). – geemus
Thanks, this was helpful as I was going through the same thing. One thing you might want to know for the future: you can get the filename directly instead of having to parse the path — i.picture.file.filename will do it in your case. –
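To illustrate that comment (plain Ruby, no CarrierWave required): the hand-rolled split from the rake task and stdlib File.basename give the same result, and with CarrierWave i.picture.file.filename would return that value directly. The path below is hypothetical:

```ruby
# Sketch: the rake task's manual parse vs. File.basename.
# With CarrierWave, i.picture.file.filename returns the same value
# without any string surgery.
path = "uploads/item/picture/123/photo.jpg?X-Amz-Expires=3600"

parsed   = path.split('/')[-1].split('?')[0]    # approach from the rake task
basename = File.basename(path.split('?').first) # stdlib alternative

puts parsed   # => photo.jpg
puts basename # => photo.jpg
```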