How to upload larger files (greater than 12 MB) to an AWS S3 bucket using Salesforce Apex

Posted: 2019-11-26 21:01:30

Tags: amazon-web-services amazon-s3 file-upload salesforce

I need some help uploading large files to an S3 bucket from the Salesforce Apex server side.

I need to be able to split the blob and upload it to the AWS S3 bucket using HTTP PUT operations. I can upload at most 12 MB at a time, because that is the PUT request body size limit in Apex. So I need to be able to upload using multipart operations. I noticed that S3 allows uploading in parts and returns an uploadId. Wondering whether anyone has done this before in Salesforce Apex code. Any help would be much appreciated.
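For context, the multipart API the question refers to is a three-step REST sequence: initiate the upload (S3 returns the uploadId), PUT each part, then complete the upload with the collected ETags. A minimal Java sketch of the query strings involved (class and method names are illustrative, not part of the question):

```java
// Sketch of the query strings used by the S3 multipart upload REST API.
// Class and method names are illustrative only.
public class S3MultipartPaths {

    // Step 1: POST /{key}?uploads -- S3 responds with an UploadId.
    public static String initiateQuery() {
        return "?uploads";
    }

    // Step 2: PUT /{key}?partNumber=N&uploadId=... -- one call per part;
    // every part except the last must be at least 5 MB, and S3 returns
    // an ETag for each part.
    public static String uploadPartQuery(int partNumber, String uploadId) {
        return "?partNumber=" + partNumber + "&uploadId=" + uploadId;
    }

    // Step 3: POST /{key}?uploadId=... -- the body lists each part's
    // number and ETag; S3 then assembles the object.
    public static String completeQuery(String uploadId) {
        return "?uploadId=" + uploadId;
    }
}
```

Each of these is a separate callout, so each is individually subject to the same 12 MB body limit, which is why the flow only helps if the parts themselves can be produced within the limit.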

Thanks in advance, Parbati Bose.

Here is the code:

public with sharing class AWSS3Service {

    private static Http http;

    @AuraEnabled
    public static void uploadToAWSS3(String fileToUpload, String filenm, String doctype) {

        fileToUpload = EncodingUtil.urlDecode(fileToUpload, 'UTF-8');

        filenm = EncodingUtil.urlEncode(filenm, 'UTF-8'); // encode the filename in case there are special characters in the name
        String filename = 'Storage' + '/' + filenm;

        String formattedDateString = DateTime.now().formatGMT('EEE, dd MMM yyyy HH:mm:ss z');

        // S3 bucket credentials and location
        String key = '**********';
        String secret = '********';
        String bucketname = 'testbucket';
        String region = 's3-us-west-2';

        String host = region + '.' + 'amazonaws.com'; // AWS server base URL

        try {
            HttpRequest req = new HttpRequest();
            http = new Http();
            req.setMethod('PUT');
            req.setEndpoint('https://' + bucketname + '.' + host + '/' + filename);
            req.setHeader('Host', bucketname + '.' + host);

            req.setHeader('Content-Encoding', 'UTF-8');
            req.setHeader('Content-Type', doctype);
            req.setHeader('Connection', 'keep-alive');
            req.setHeader('Date', formattedDateString);
            // The canned-ACL header S3 recognizes is x-amz-acl; because it is
            // an x-amz-* header it must also appear in the string to sign below.
            req.setHeader('x-amz-acl', 'public-read-write');

            String stringToSign = 'PUT\n\n' +
                    doctype + '\n' +
                    formattedDateString + '\n' +
                    'x-amz-acl:public-read-write\n' +
                    '/' + bucketname + '/' + filename;

            Blob mac = Crypto.generateMac('HMACSHA1', Blob.valueOf(stringToSign), Blob.valueOf(secret));
            String signed = EncodingUtil.base64Encode(mac);
            String authHeader = 'AWS' + ' ' + key + ':' + signed;
            req.setHeader('Authorization', authHeader);

            req.setBodyAsBlob(EncodingUtil.base64Decode(fileToUpload));

            HttpResponse response = http.send(req);

            System.debug('response from aws s3 is ' + response.getStatusCode() + ' and ' + response.getBody());

        } catch (Exception e) {
            System.debug('error in connecting to s3 ' + e.getMessage());
            throw e;
        }
    }
}
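For reference, the Signature Version 2 computation the Apex code performs (build a string to sign, HMAC-SHA1 it with the secret, Base64-encode the result) can be reproduced outside Apex for debugging signature mismatches. A self-contained Java sketch mirroring Crypto.generateMac + EncodingUtil.base64Encode (class name is illustrative):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SigV2Signer {
    // Builds the same string-to-sign shape as the Apex code above and
    // signs it with HMAC-SHA1, Base64-encoding the raw digest.
    public static String sign(String secret, String doctype, String date,
                              String bucket, String filename) {
        String stringToSign = "PUT\n\n" + doctype + "\n" + date
                + "\n/" + bucket + "/" + filename;
        try {
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(
                    secret.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
            byte[] raw = mac.doFinal(
                    stringToSign.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(raw);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Comparing this output against the value Apex produces for the same inputs is a quick way to confirm the string-to-sign, rather than the HMAC step, is the source of a 403 SignatureDoesNotMatch error.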

2 Answers:

Answer 0 (score: 1):

I have been working on this same problem over the last few days, and unfortunately, because of the 12 MB Apex heap size limit, you will be better off performing this transfer from outside Salesforce. https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_gov_limits.htm

While it is possible to write files using multipart, there appears to be no way to get them out of the database and split them into chunks that can be sent. A similar question was asked on Stack Exchange: https://salesforce.stackexchange.com/questions/264015/how-to-retrieve-file-content-from-content-document-in-chunks-using-soql
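The splitting step itself is simple once the bytes are in memory; the obstacle described above is getting the file out of the database in pieces small enough for the heap limit. A Java sketch of the split step (names illustrative; S3 requires every part except the last to be at least 5 MB):

```java
import java.util.ArrayList;
import java.util.List;

public class PartSplitter {
    // Splits a payload into fixed-size chunks. For S3 multipart uploads,
    // partSize must be at least 5 * 1024 * 1024 bytes for every part
    // except the final one.
    public static List<byte[]> split(byte[] data, int partSize) {
        List<byte[]> parts = new ArrayList<>();
        for (int off = 0; off < data.length; off += partSize) {
            int len = Math.min(partSize, data.length - off);
            byte[] part = new byte[len];
            System.arraycopy(data, off, part, 0, len);
            parts.add(part);
        }
        return parts;
    }
}
```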

Answer 1 (score: 0):

The AWS SDK for Java exposes a high-level API called TransferManager that simplifies multipart uploads (see Uploading Objects Using Multipart Upload API). You can upload data from a file or a stream. You can also set advanced options, such as the part size you want to use for the multipart upload, or the number of concurrent threads you want to use when uploading the parts. You can also set optional object properties, the storage class, or the ACL. You use the PutObjectRequest and the TransferManagerConfiguration classes to set these advanced options.

Here is the sample code from https://docs.aws.amazon.com/AmazonS3/latest/dev/HLuploadFileJava.html.

You could adapt it for use alongside your Salesforce Apex code:

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

import java.io.File;

public class HighLevelMultipartUpload {

    public static void main(String[] args) throws Exception {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        String keyName = "*** Object key ***";
        String filePath = "*** Path for file to upload ***";

        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion(clientRegion)
                    .withCredentials(new ProfileCredentialsProvider())
                    .build();
            TransferManager tm = TransferManagerBuilder.standard()
                    .withS3Client(s3Client)
                    .build();

            // TransferManager processes all transfers asynchronously,
            // so this call returns immediately.
            Upload upload = tm.upload(bucketName, keyName, new File(filePath));
            System.out.println("Object upload started");

            // Optionally, wait for the upload to finish before continuing.
            upload.waitForCompletion();
            System.out.println("Object upload complete");
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process 
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}