Secure multipart upload of large files (>10GB) from the browser directly to S3 with the AWS JS SDK, without Cognito

Date: 2016-05-12 07:23:41

Tags: javascript amazon-web-services amazon-s3 boto multipart

Goal: multipart-upload large files (>10GB) directly from the browser to S3 using the AWS JS SDK, in a secure way.

Achieved so far: we can successfully upload files of around 15GB from the browser to an S3 bucket.

Problem: doing the same thing without Amazon Cognito and without hard-coding the key and secret.

From the documentation we have read, as well as the support team's reply, it appears we can achieve this with STS, so we did try the following:

from boto.s3.connection import S3Connection
from boto.sts import STSConnection
sts = STSConnection('our_key', 'our_secret')  # the root account's key and secret
user = sts.get_federation_token('guest_user_1')  # attempt 1: a federation token
user = sts.assume_role(role_arn='arn:aws:iam::008557872112:role/Cognito_testAuth_Role', role_session_name='Cognito_testAuth_Role_temp')  # attempt 2: assume a role - raises the error below
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/home/aameer/.virtualenvs/indeedev/lib/python3.4/site-packages/boto/sts/connection.py", line 384, in assume_role
    return self.get_object('AssumeRole', params, AssumedRole, verb='POST')
  File "/home/aameer/.virtualenvs/indeedev/lib/python3.4/site-packages/boto/connection.py", line 1208, in get_object
    raise self.ResponseError(response.status, response.reason, body)
boto.exception.BotoServerError: BotoServerError: 403 Forbidden
<ErrorResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/">
  <Error>
    <Type>Sender</Type>
    <Code>AccessDenied</Code>
    <Message>Roles may not be assumed by root accounts.</Message>
  </Error>
  <RequestId>95e7efc9-12b3-11e6-b1e2-ffa0432dfc4b</RequestId>
</ErrorResponse>

We have read that roles may not be assumed by root accounts, but we cannot figure out how to set this up. What we are looking for is an example that walks through the whole process: a) creating the non-root user / temporary role to be used in the scenario above (either on the backend with boto or on the frontend with the AWS JS SDK), and b) then using it on the frontend to achieve our goal.

The documentation is very confusing on this point.

Please note: we are using Django 1.7 and Python 3.4 with boto==1.3.1 and the AWS JS SDK 2.2.13 for this.
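
For reference, this is roughly what we imagine the backend piece looking like. It is only a sketch: the view name, the 'upload_user_key'/'upload_user_secret' pair (meant to belong to an IAM user, not the root account), the bucket name and the policy are placeholders we have not been able to verify end to end.

import json

from django.http import JsonResponse
from boto.sts import STSConnection

# Policy limiting the temporary credentials to uploads into a single bucket.
UPLOAD_POLICY = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts"],
        "Resource": "arn:aws:s3:::bucket1/*"
    }]
})

def temporary_upload_credentials(request):
    # Keys of a dedicated IAM user, held only on the server and never sent to the browser.
    sts = STSConnection('upload_user_key', 'upload_user_secret')
    token = sts.get_federation_token(name='browser-upload',
                                     duration=3600,  # one hour
                                     policy=UPLOAD_POLICY)
    creds = token.credentials
    return JsonResponse({
        'access_key': creds.access_key,
        'secret_key': creds.secret_key,
        'session_token': creds.session_token,
        'expiration': creds.expiration,
    })

The browser would then fetch these values from an authenticated endpoint and pass accessKeyId, secretAccessKey and sessionToken to AWS.config.update() instead of the hard-coded key and secret shown in the code below.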

I am also sharing the JS code:

// The uploads (from browser to S3) had issues with large videos (greater than 5GB); the gist of the earlier implementation was as follows:

var xhr = new XMLHttpRequest();

var sign = data['signature'];        // from backend
var policy_json = data['policy'];    // from backend

console.log('values from backend');
console.log(sign, policy_json);
var fd = new FormData();
fd.append('key', key);
fd.append('acl', 'public-read'); 
fd.append('AWSAccessKeyId', 'our_key');
fd.append('policy', policy_json);
fd.append('signature',sign);
fd.append("file",file);
xhr.upload.addEventListener("progress", $scope.uploadProgress, false);
xhr.addEventListener("load", $scope.uploadComplete, false);
xhr.addEventListener("error", uploadFailed, false);
xhr.addEventListener("abort", uploadCanceled, false);
if (window.location.href.indexOf('8000') > -1 || window.location.href.indexOf('preproduction') > -1){
   xhr.open('POST', 'https://bucket1.s3.amazonaws.com/', true); //MUST BE LAST LINE BEFORE YOU SEND 
}else{
   xhr.open('POST', 'https://bucket2.s3.amazonaws.com/', true); //MUST BE LAST LINE BEFORE YOU SEND 
}
xhr.setRequestHeader("enctype", "multipart/form-data");
//xhr.setRequestHeader("Content-Type", "undefined");
xhr.send(fd);
if (window.location.href.indexOf('8000') > -1 || window.location.href.indexOf('preproduction') > -1){
   s3URL='https://bucket1.s3.amazonaws.com/'+projectid+'/video/'+projectid+'_'+$scope.video_id+'.'+file_extension;
}else{
   s3URL='https://bucket2.s3.amazonaws.com/'+projectid+'/video/'+projectid+'_'+$scope.video_id+'.'+file_extension;
}


// To handle larger files (theoretically up to 5TB) we used AWS's JS SDK to enable multipart uploads; the new implementation below is working
// (so far everything we have thrown at it, about 15GB, has uploaded successfully, so the new code has improved the robustness, speed and maximum size of uploads).


// But the thing is, we don't want the AWS key and secret to be on the front end, as in the following line:

AWS.config.update({accessKeyId: 'our_key', secretAccessKey: 'our-secret'});

// the bucket we use depends on which server we are running on
if (window.location.href.indexOf('8000') > -1 || window.location.href.indexOf('preproduction') > -1){
   AWS.config.region = 'ap-southeast-1';
   bucket_name = 'bucket1'
}else{
   AWS.config.region = 'us-west-1';
   bucket_name = 'bucket2'
}

// Upload the File
var bucket = new AWS.S3({params: {Bucket: bucket_name}});
var params = {Key: key,
            Body: file,
            acl:'public-read'}
var upload_carrier = bucket.upload(params)

upload_carrier.on('httpUploadProgress', function(evt) {
   console.log(evt.loaded *100/evt.total)
   var percentComplete = Math.round(evt.loaded * 100 / evt.total);
   document.getElementById('progressNumber').innerHTML = percentComplete.toString() + '% Complete';
   $(".progress-bar").width(percentComplete.toString() + '%');
})

upload_carrier.send(function(err, data) {
  s3URL='https://bucket1.s3.amazonaws.com/'+projectid+'/video/'+projectid+'_'+$scope.video_id+'.'+file_extension;
  toastr.success("Video was uploaded successfully");
  _kmq.push(['record', 'Upload Finish', {'Upload Type':'Browser'}]);
  $scope.addVideo(); // adds a video on our backend
  console.log(err, data)
})

$scope.cancelUpload= function(evt) {
    toastr.error("The upload has been canceled by the user or the browser dropped the connection.");
    $scope.uploadchange=false;
    $scope.uploadbtn=true;
    $scope.isSaving=false;
    location.reload()
}

// From the docs it appears we could use Amazon Cognito, where one of the authentication providers (Facebook, Google, Amazon, etc.)
// authenticates users, but we don't want another layer of authentication. We even tried an Amazon Cognito identity pool, but it didn't work for some reason,
// and in any case it still doesn't solve the problem of keeping the key and secret off the front end, because (if we are not wrong) we would have to use something like this:

// Set the region where your identity pool exists (us-east-1, eu-west-1)
AWS.config.region = 'us-east-1';
// Configure the credentials provider to use your identity pool
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: 'identity-pool-id',
});

// Make the call to obtain credentials
AWS.config.credentials.get(function(){
    // Credentials will be available when this function is called.
    var accessKeyId = AWS.config.credentials.accessKeyId;
    var secretAccessKey = AWS.config.credentials.secretAccessKey;
    var sessionToken = AWS.config.credentials.sessionToken;
});

// We are trying to avoid developer-authenticated (custom) identities in Cognito, as it feels like too much work and, again, we can't find clear examples.
// We also know that we can use the JS SDK to assume a role temporarily that would allow uploading to S3.
// That would mean we neither have to use Cognito nor hard-code keys.
// These temporary credentials are valid for an hour by default, and if they only allow uploading to S3 there is very little risk to our other resources.

// This is exactly what we tried above with boto, but still no luck. We have been stuck on this for far too long, and we would really appreciate some code snippets that could help.

Some links we have consulted:

https://docs.aws.amazon.com/cognito/latest/developerguide/developer-authenticated-identities.html
How to give permission to a federated user in boto to an s3 bucket?
https://www.whitneyindustries.com/aws/2014/11/16/boto-plus-s3-plus-sts-tokens.html
http://boto.cloudhackers.com/en/latest/ref/sts.html
http://boto.cloudhackers.com/en/latest/ref/sts.html#id7
http://boto.cloudhackers.com/en/latest/ref/sts.html#id19
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-api.html

Thanks,

1 Answer:

Answer 0 (score: 2):

 <Message>Roles may not be assumed by root accounts.</Message>

You should not be using the root account's credentials for anything. You need to use an IAM user instead. Everywhere you are currently using root credentials, replace them with the credentials of an IAM user.
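
In other words, the assume_role call from your question should go through once it is made with an IAM user's keys rather than the root keys, and once the role's trust policy lists that user (or the account) as a principal. Here is a rough sketch; 'iam_user_key'/'iam_user_secret' and the session name are placeholders, and the role ARN is simply the one from your question:

from boto.sts import STSConnection

# Keys of an IAM user, NOT the root account's keys.
sts = STSConnection('iam_user_key', 'iam_user_secret')

# The role's trust policy must allow this principal to assume it, e.g.
# "Principal": {"AWS": "arn:aws:iam::008557872112:user/s3-uploader"}.
assumed = sts.assume_role(
    role_arn='arn:aws:iam::008557872112:role/Cognito_testAuth_Role',
    role_session_name='browser_upload_session',
    duration_seconds=3600)

# Short-lived credentials you can hand to the browser / AWS JS SDK.
print(assumed.credentials.access_key,
      assumed.credentials.secret_key,
      assumed.credentials.session_token)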

See https://aws.amazon.com/iam/details/manage-users/

Also see Eric Hammond's Throw Away the Password to your AWS Account.
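
If you do not have such a user yet, you can create it in the IAM console, or roughly like this with boto; the user name, policy name and inline policy below are only illustrative, so grant only what your upload flow actually needs:

import json

import boto

# Connect with credentials that are allowed to manage IAM (a one-off admin task).
iam = boto.connect_iam('admin_key', 'admin_secret')

iam.create_user('s3-uploader')

# Inline policy letting the user request temporary credentials and assume the upload role.
policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "sts:GetFederationToken", "Resource": "*"},
        {"Effect": "Allow", "Action": "sts:AssumeRole",
         "Resource": "arn:aws:iam::008557872112:role/Cognito_testAuth_Role"}
    ]
})
iam.put_user_policy('s3-uploader', 's3-upload-access', policy)

# The response contains the new access key id and secret; keep them on the server only.
key_response = iam.create_access_key('s3-uploader')
print(key_response)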