AccessDenied when using UploadPartCopy for a MultiPartUpload in Golang

Date: 2018-06-08 20:38:53

Tags: amazon-web-services go amazon-s3

I'm trying to use an S3 multipart upload to concatenate files in an S3 bucket. If you have several files larger than 5 MB (the last one can be smaller), you can concatenate them in S3 into one larger file. It's basically the equivalent of using cat to join the files together. When I try to do this in Go, I get:

An error occurred (AccessDenied) when calling the UploadPartCopy operation: Access Denied
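As an aside, the "> 5MB" rule above is enforced by S3: every part except the last must be at least 5 MiB, otherwise the final CompleteMultipartUpload fails. A quick up-front sanity check (a sketch; the helper name is my own, not part of the SDK):

```go
package main

import "fmt"

// minPartSize is S3's minimum size for every part except the last.
const minPartSize = 5 * 1024 * 1024 // 5 MiB

// validPartSizes reports whether the given part sizes satisfy S3's
// multipart rule: all parts except the last must be >= 5 MiB.
func validPartSizes(sizes []int64) bool {
	if len(sizes) == 0 {
		return false
	}
	for _, s := range sizes[:len(sizes)-1] {
		if s < minPartSize {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(validPartSizes([]int64{6 << 20, 6 << 20, 1024})) // true: only the last part is small
	fmt.Println(validPartSizes([]int64{1024, 6 << 20}))          // false: first part is too small
}
```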

The code looks like this:

mpuOut, err := s3CreateMultipartUpload(&S3.CreateMultipartUploadInput{
    Bucket: aws.String(bucket),
    Key:    aws.String(concatenatedFile),
})
if err != nil {
    return err
}

var ps []*S3.CompletedPart
for i, part := range parts { // parts is a list of paths to things in s3
    partNumber := int64(i) + 1
    upOut, err := s3UploadPartCopy(&S3.UploadPartCopyInput{
        Bucket:     aws.String(bucket),
        CopySource: aws.String(part),
        Key:        aws.String(concatenatedFile),
        UploadId:   aws.String(*mpuOut.UploadId),
        PartNumber: aws.Int64(partNumber),
    })
    if err != nil {
        return err // <- fails here
    }
    ps = append(ps, &S3.CompletedPart{
        ETag:       upOut.CopyPartResult.ETag,
        PartNumber: aws.Int64(partNumber),
    })
}

_, err = s3CompleteMultipartUpload(&S3.CompleteMultipartUploadInput{
    Bucket:          aws.String(bucket),
    Key:             aws.String(concatenatedFile),
    MultipartUpload: &S3.CompletedMultipartUpload{Parts: ps},
    UploadId:        aws.String(*mpuOut.UploadId),
})
if err != nil {
    return err
}

When it runs, it blows up with the error above. The bucket's permissions are wide open. Any ideas?

1 answer:

Answer 0 (score: 1)

OK, the problem is that when you do an UploadPartCopy, the CopySource parameter is not just the path within the S3 bucket. You have to prepend the bucket name to the path, even when the source is in the same bucket. DERP

mpuOut, err := s3CreateMultipartUpload(&S3.CreateMultipartUploadInput{
    Bucket: aws.String(bucket),
    Key:    aws.String(concatenatedFile),
})
if err != nil {
    return err
}

var ps []*S3.CompletedPart
for i, part := range parts { // parts is a list of paths to things in s3
    partNumber := int64(i) + 1
    upOut, err := s3UploadPartCopy(&S3.UploadPartCopyInput{
        Bucket:     aws.String(bucket),
        CopySource: aws.String(fmt.Sprintf("%s/%s", bucket, part)), // <- ugh
        Key:        aws.String(concatenatedFile),
        UploadId:   aws.String(*mpuOut.UploadId),
        PartNumber: aws.Int64(partNumber),
    })
    if err != nil {
        return err
    }
    ps = append(ps, &S3.CompletedPart{
        ETag:       upOut.CopyPartResult.ETag,
        PartNumber: aws.Int64(partNumber),
    })
}

_, err = s3CompleteMultipartUpload(&S3.CompleteMultipartUploadInput{
    Bucket:          aws.String(bucket),
    Key:             aws.String(concatenatedFile),
    MultipartUpload: &S3.CompletedMultipartUpload{Parts: ps},
    UploadId:        aws.String(*mpuOut.UploadId),
})
if err != nil {
    return err
}

This wasted an hour of my life, so I figured I'd try to save someone else the trouble.
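One related gotcha, since it produces similarly unhelpful errors: the CopySource value must be URL-encoded. The bucket name and the slashes between path segments stay literal, but spaces and other special characters in the key have to be escaped. A sketch of building it, assuming plain bucket/key strings (the helper name is my own, not part of the SDK):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// copySourceFor builds a CopySource value for UploadPartCopy:
// "<bucket>/<url-encoded key>". Each path segment of the key is
// escaped individually so the separating slashes stay literal.
func copySourceFor(bucket, key string) string {
	segs := strings.Split(key, "/")
	for i, s := range segs {
		segs[i] = url.PathEscape(s)
	}
	return bucket + "/" + strings.Join(segs, "/")
}

func main() {
	fmt.Println(copySourceFor("my-bucket", "parts/chunk 001.bin"))
	// → my-bucket/parts/chunk%20001.bin
}
```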