I'm trying to write a tool that will compress a directory and stream the compressed output to S3 without buffering it on disk first: read the files, compress, and pipe the compressed output to S3.
package main

import (
	"compress/gzip"
	"io"
	"log"
	"os"
	"sync"

	"github.com/rlmcpherson/s3gof3r"
)

// log.Fatal() implies os.Exit(1)
func logerror(err error) {
	if err != nil {
		log.Fatalf("%s\n", err)
	}
}

func main() {
	k, err := s3gof3r.EnvKeys()
	logerror(err)

	// Open bucket we want to write a file to
	s3 := s3gof3r.New("", k)
	bucket := s3.Bucket("somebucket")

	// Open file to upload
	files, err := os.Open("somefile")
	logerror(err)
	defer files.Close()

	// open a PutWriter for S3 upload
	s3writer, err := bucket.PutWriter("somezipfile.gz", nil, nil)
	logerror(err)

	// Create io pipe for passing gzip output to putwriter input
	pipereader, pipewriter := io.Pipe()
	defer pipereader.Close()

	var wg sync.WaitGroup
	wg.Add(2)

	// Compress
	go func() {
		defer wg.Done()
		defer pipewriter.Close()
		gw := gzip.NewWriter(pipewriter)
		defer gw.Close()
		_, err := io.Copy(gw, files)
		logerror(err)
	}()

	// Transmit
	go func() {
		defer wg.Done()
		_, err := io.Copy(s3writer, pipereader)
		logerror(err)
	}()

	wg.Wait()
}
When I compile and run this, I get no error output and no file in S3. Adding a bunch of print statements got me the output below, in case it helps:
files: &{0xc4200d0a00}
s3writer: &{{https <nil> somebucket.s3.amazonaws.com /somezipfile.gz false } 0xc4200d0a60 0xc420014540 20971520 [] 0 0xc42010e2a0 0 false <nil> {{} [0 0 0 0 0 0 0 0 0 0 0 0] 0} 0xc42010e300 0xc42010e360 0xc42035a740 0 97wUYO2YZPjLXqOLTma_Y1ASo.0IdeoKkif6pch60s3._J1suo9pUTCFwUj23uT.puzzDEHcV1KJPze.1EnLeoNehhBXeSpsH_.e4gXlNqBZ0HFsvyABJfHNYwUyXASx { []} 0}
pipewriter: &{0xc42013c180}
gzipwriter: &{{ [] 0001-01-01 00:00:00 +0000 UTC 255} 0xc420116020 -1 false <nil> 0 0 false [0 0 0 0 0 0 0 0 0 0] <nil>}
archive: 1283
upload: 606
Help appreciated!
You probably need to close s3writer. Also, instead of using a pipe and extra goroutines, couldn't you just pass s3writer as the argument to gzip.NewWriter? –
Why not use Amazon's Go SDK? –
Make sure your SDK is up to date. Also make sure your data is under 5GB; otherwise you'll need a multipart upload to put it in your bucket. – Sam