Uploading a PDF file from the browser client without exposing any credentials or anything else unsavory. Based on this, I figured it could be done, but it doesn't seem to work for me.

The premise is:

You request a pre-signed URL from an S3 bucket based on a set of parameters supplied to a function that is part of the JavaScript AWS SDK.

You give this URL to the front end, which can use it to place a file into the S3 bucket without needing any credentials or authentication on the front end.

That part is simple and it works for me. I just request a URL from S3 with this little JS block:
const s3Params = {
  Bucket: uploadBucket,
  Key: `${fileId}.pdf`,
  ContentType: 'application/pdf',
  Expires: 60,
  ACL: 'public-read',
}

let uploadUrl = s3.getSignedUrl('putObject', s3Params);
This is the part that doesn't work, and I can't figure out why. This little block of code basically sends the blob of data to the S3 bucket's pre-signed URL using a PUT request.
const result = await fetch(response.data.uploadURL, {
  method: 'put',
  body: blobData,
});
Things I've tried that didn't solve it:

- Using any POST request results in a 400 Bad Request, so PUT it is.
- Content-Type (application/pdf in my case, so blobData.type): making sure it matches between the back end and the front end.
- Similar use case. Looking at that one, it seems no headers should need to be supplied in the PUT request, and the signed URL itself is all that's needed for the file upload (a bare, no-header PUT is sketched just after this list).
- Something weird that I don't understand. It looks like I might need to pass the file's length and type into the getSignedUrl call for S3.
- Making my bucket fully public (no luck).
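As a sanity check of the "no headers needed" suggestion, a bare no-header PUT would look roughly like this. This is a sketch only: uploadURL and blobData are the same variables used in the full function below, and it assumes the signature did not cover Content-Type or x-amz-acl.

// Sketch: PUT the raw blob to the presigned URL with no extra headers at all.
const bareResult = await fetch(response.data.uploadURL, {
  method: 'PUT',
  body: blobData,
});
console.log('bare PUT status:', bareResult.status);

The front-end upload function I'm actually using: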
...
uploadFile: async function(e) {
  /* receives file from simple input element -> this.file */
  // get signed URL
  const response = await axios({
    method: 'get',
    url: API_GATEWAY_URL
  });
  console.log('upload file response:', response);
  let binary = atob(this.file.split(',')[1]);
  let array = [];
  for (let i = 0; i < binary.length; i++) {
    array.push(binary.charCodeAt(i));
  }
  let blobData = new Blob([new Uint8Array(array)], {type: 'application/pdf'});
  console.log('uploading to:', response.data.uploadURL);
  console.log('blob type sanity check:', blobData.type);
  const result = await fetch(response.data.uploadURL, {
    method: 'put',
    headers: {
      'Access-Control-Allow-Methods': '*',
      'Access-Control-Allow-Origin': '*',
      'x-amz-acl': 'public-read',
      'Content-Type': blobData.type
    },
    body: blobData,
  });
  console.log('PUT result:', result);
  this.uploadUrl = response.data.uploadURL.split('?')[0];
}
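A debugging aid that isn't in the original snippet: the Response object logged above only exposes the status, but on failure S3 also returns an XML error document whose Code element (for example AccessDenied or SignatureDoesNotMatch) usually pinpoints the problem. A minimal sketch, placed right after the fetch inside uploadFile:

// Sketch: dump S3's XML error body when the upload fails.
if (!result.ok) {
  const errorXml = await result.text();
  console.log('S3 error body:', errorXml);
}

For reference, the Lambda that hands out the pre-signed URL: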
'use strict';

const uuidv4 = require('uuid/v4');
const aws = require('aws-sdk');

const s3 = new aws.S3();
const uploadBucket = 'the-chumiest-bucket';
const fileKeyPrefix = 'path/to/where/the/file/should/live/';

const getUploadUrl = async () => {
  const fileId = uuidv4();
  const s3Params = {
    Bucket: uploadBucket,
    Key: `${fileId}.pdf`,
    ContentType: 'application/pdf',
    Expires: 60,
    ACL: 'public-read',
  }

  return new Promise((resolve, reject) => {
    let uploadUrl = s3.getSignedUrl('putObject', s3Params);
    resolve({
      'statusCode': 200,
      'isBase64Encoded': false,
      'headers': {
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Headers': '*',
        'Access-Control-Allow-Credentials': true,
      },
      'body': JSON.stringify({
        'uploadURL': uploadUrl,
        'filename': `${fileId}.pdf`
      })
    });
  });
};

exports.handler = async (event, context) => {
  console.log('event:', event);
  const result = await getUploadUrl();
  console.log('result:', result);
  return result;
}
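(An aside, not part of the original flow: as far as I know, the v2 JavaScript SDK returns the URL synchronously when getSignedUrl is called without a callback, and newer v2 releases also expose getSignedUrlPromise, so the manual Promise wrapper above isn't strictly needed. A minimal equivalent sketch, assuming aws-sdk v2.363+ and using a hypothetical helper name:)

// Sketch: same parameters as above, without the hand-rolled Promise.
const getUploadUrlSimple = async () => {
  const fileId = uuidv4();
  const uploadUrl = await s3.getSignedUrlPromise('putObject', {
    Bucket: uploadBucket,
    Key: `${fileId}.pdf`,
    ContentType: 'application/pdf',
    Expires: 60,
    ACL: 'public-read',
  });
  return { uploadURL: uploadUrl, filename: `${fileId}.pdf` };
};

The serverless.yml for the service: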
service: ocr-space-service

provider:
  name: aws
  region: ca-central-1
  stage: ${opt:stage, 'dev'}
  timeout: 20

plugins:
  - serverless-plugin-existing-s3
  - serverless-step-functions
  - serverless-pseudo-parameters
  - serverless-plugin-include-dependencies

layers:
  spaceOcrLayer:
    package:
      artifact: spaceOcrLayer.zip
    allowedAccounts:
      - "*"

functions:
  fileReceiver:
    handler: src/node/fileReceiver.handler
    events:
      - http:
          path: /doc-parser/get-url
          method: get
          cors: true
  startStateMachine:
    handler: src/start_state_machine.lambda_handler
    role:
    runtime: python3.7
    layers:
      - {Ref: SpaceOcrLayerLambdaLayer}
    events:
      - existingS3:
          bucket: ingenio-documents
          events:
            - s3:ObjectCreated:*
          rules:
            - prefix:
            - suffix: .pdf
  startOcrSpaceProcess:
    handler: src/start_ocr_space.lambda_handler
    role:
    runtime: python3.7
    layers:
      - {Ref: SpaceOcrLayerLambdaLayer}
  parseOcrSpaceOutput:
    handler: src/parse_ocr_space_output.lambda_handler
    role:
    runtime: python3.7
    layers:
      - {Ref: SpaceOcrLayerLambdaLayer}
  renamePdf:
    handler: src/rename_pdf.lambda_handler
    role:
    runtime: python3.7
    layers:
      - {Ref: SpaceOcrLayerLambdaLayer}
  parseCorpSearchOutput:
    handler: src/node/pdfParser.handler
    role:
    runtime: nodejs10.x
  saveFileToProcessed:
    handler: src/node/saveFileToProcessed.handler
    role:
    runtime: nodejs10.x

stepFunctions:
  stateMachines:
    ocrSpaceStepFunc:
      name: ocrSpaceStepFunc
      definition:
        StartAt: StartOcrSpaceProcess
        States:
          StartOcrSpaceProcess:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-startOcrSpaceProcess"
            Next: IsDocCorpSearchChoice
            Catch:
              - ErrorEquals: ["HandledError"]
                Next: HandledErrorFallback
          IsDocCorpSearchChoice:
            Type: Choice
            Choices:
              - Variable: $.docIsCorpSearch
                NumericEquals: 1
                Next: ParseCorpSearchOutput
              - Variable: $.docIsCorpSearch
                NumericEquals: 0
                Next: ParseOcrSpaceOutput
          ParseCorpSearchOutput:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-parseCorpSearchOutput"
            Next: SaveFileToProcessed
            Catch:
              - ErrorEquals: ["SqsMessageError"]
                Next: CorpSearchSqsErrorFallback
              - ErrorEquals: ["DownloadFileError"]
                Next: CorpSearchDownloadFileErrorFallback
              - ErrorEquals: ["HandledError"]
                Next: HandledNodeErrorFallback
          SaveFileToProcessed:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-saveFileToProcessed"
            End: true
          ParseOcrSpaceOutput:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-parseOcrSpaceOutput"
            Next: RenamePdf
            Catch:
              - ErrorEquals: ["HandledError"]
                Next: HandledErrorFallback
          RenamePdf:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-renamePdf"
            End: true
            Catch:
              - ErrorEquals: ["HandledError"]
                Next: HandledErrorFallback
              - ErrorEquals: ["AccessDeniedException"]
                Next: AccessDeniedFallback
          AccessDeniedFallback:
            Type: Fail
            Cause: "Access was denied for copying an S3 object"
          HandledErrorFallback:
            Type: Fail
            Cause: "HandledError occurred"
          CorpSearchSqsErrorFallback:
            Type: Fail
            Cause: "SQS Message send action resulted in error"
          CorpSearchDownloadFileErrorFallback:
            Type: Fail
            Cause: "Downloading file from S3 resulted in error"
          HandledNodeErrorFallback:
            Type: Fail
            Cause: "HandledError occurred"
403 Forbidden

Response {type: "cors", url: "https://{bucket-name}.s3.{region-id}.amazonaw...nedHeaders=host%3Bx-amz-acl&x-amz-acl=public-read", redirected: false, status: 403, ok: false, ...}
  body: (...)
  bodyUsed: false
  headers: Headers {}
  ok: false
  redirected: false
  status: 403
  statusText: "Forbidden"
  type: "cors"
  url: "https://{bucket-name}.s3.{region-id}.amazonaws.com/actionID.pdf?Content-Type=application%2Fpdf&X-Amz-Algorithm=SHA256&X-Amz-Credential=CREDZ-&X-Amz-Date=20190621T192558Z&X-Amz-Expires=900&X-Amz-Security-Token={token}&X-Amz-SignedHeaders=host%3Bx-amz-acl&x-amz-acl=public-read"
  __proto__: Response
I'm thinking the parameters supplied to the getSignedUrl call with the AWS S3 SDK are incorrect, though they follow the structure suggested by AWS's documentation (explained here). Beyond that, I'm really lost as to why my request is being rejected. I even tried opening my bucket up to the public entirely, and it still didn't work.
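One thing worth checking, given that reasoning (a debugging sketch, not from the original post): the presigned URL itself records which headers were signed. In the URL from the 403 response, X-Amz-SignedHeaders decodes to host;x-amz-acl, which as I understand it means the PUT must carry an x-amz-acl: public-read header whose value matches what was signed. A quick way to inspect this in the browser console:

// Sketch: decode the signed-headers list and expected ACL from the presigned URL.
const u = new URL(response.data.uploadURL);
console.log('signed headers:', u.searchParams.get('X-Amz-SignedHeaders')); // e.g. "host;x-amz-acl"
console.log('expected ACL:', u.searchParams.get('x-amz-acl'));             // e.g. "public-read"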
After reading this, I tried structuring my PUT request like this:
let authFromGet = response.config.headers.Authorization;

const putHeaders = {
  'Authorization': authFromGet,
  'Content-Type': blobData,
  'Expect': '100-continue',
};

...

const result = await fetch(response.data.uploadURL, {
  method: 'put',
  headers: putHeaders,
  body: blobData,
});
This results in a 400 Bad Request instead of the 403. Different, but still wrong. It's apparent that putting any headers on the request is wrong.
Answer (score: 0):
Digging into this, it's because you're trying to upload an object with a public ACL into a bucket that doesn't allow public objects.

Either remove the public ACL statement, or...

make sure the bucket is set to either...

Basically, you can't upload objects with a public ACL into a bucket where restrictions are in place that prevent it; you'll get the 403 error described. HTH.
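A minimal sketch of the first option, removing the public ACL end to end. This assumes the uploaded object doesn't actually need to be publicly readable; the variable names match the question's code:

// Back end: sign the URL without an ACL, so x-amz-acl is no longer a signed header.
const s3Params = {
  Bucket: uploadBucket,
  Key: `${fileId}.pdf`,
  ContentType: 'application/pdf',
  Expires: 60,
  // ACL: 'public-read'  <-- removed
};
const uploadUrl = s3.getSignedUrl('putObject', s3Params);

// Front end: PUT with only the Content-Type that was signed, and no x-amz-acl header.
const result = await fetch(response.data.uploadURL, {
  method: 'PUT',
  headers: { 'Content-Type': blobData.type },
  body: blobData,
});

The other route, keeping ACL: 'public-read', would instead require the bucket's public-access settings to permit public ACLs, as the answer describes.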