I'm trying to use a Lambda function to write a file into the writable /tmp/ folder and then upload it to a bucket, but I get an AccessDenied error. This is strange, because I can do this by invoking the Lambda function locally. Here is the Lambda function's code:
import json
import boto3
import os

def lambda_handler(event, context):
    # TODO implement
    print(event)
    session = boto3.Session(profile_name=os.environ.get("MY_PROFILE", None))
    client = session.client("s3")
    os.chdir('/tmp')
    with open('test.txt', "w") as f:
        f.write("testing")
    client.upload_file('test.txt', 'my-bucket', 'tmp/test.txt')
Here is the error log:
{
    "errorMessage": "Failed to upload test.txt to my-bucket/tmp/test.txt: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied",
    "errorType": "S3UploadFailedError",
    "stackTrace": [
        "  File \"/var/task/lambda_function.py\", line 14, in lambda_handler\n    client.upload_file('test.txt', 'gp-model-bucket', 'tmp/test.txt')\n",
        "  File \"/var/runtime/boto3/s3/inject.py\", line 131, in upload_file\n    extra_args=ExtraArgs, callback=Callback)\n",
        "  File \"/var/runtime/boto3/s3/transfer.py\", line 287, in upload_file\n    filename, '/'.join([bucket, key]), e))\n"
    ]
}
Can anyone help me?
Answer 0 (score: 2)
You should add S3 write permission to your AWS Lambda execution role by attaching an appropriate IAM policy to it. This also explains why it works locally: when you invoke the function on your machine, boto3 uses your local profile's credentials, but inside Lambda the function runs with the execution role's credentials, which evidently lack `s3:PutObject` on that bucket.
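A minimal policy sketch for the execution role, assuming the bucket name `my-bucket` from the question (adjust the resource ARN to your actual bucket and key prefix):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}
```

Note that `upload_file` calls `PutObject` under the hood, so that is the action the policy must allow.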
Additional permissions may also be required, such as KMS permissions (if your bucket uses default KMS encryption).