How to configure Terraform to upload a zip file to an S3 bucket and then deploy it to Lambda

Asked: 2019-07-22 11:23:56

Tags: amazon-s3 aws-lambda terraform

I am using Terraform as the infrastructure framework for my application. Below is the configuration I use to deploy Python code to Lambda. It performs three steps: 1. zip all the dependencies and source code into a zip file; 2. upload the zip file to an S3 bucket; 3. deploy it to a Lambda function.

But what happens is that the deployment command terraform apply fails with the following error:

Error: Error modifying Lambda Function Code quote-crawler: InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.
    status code: 400, request id: 2db6cb29-8988-474c-8166-f4332d7309de

  on config.tf line 48, in resource "aws_lambda_function" "test_lambda":
  48: resource "aws_lambda_function" "test_lambda" {

Error: Error modifying Lambda Function Code praw_crawler: InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist.
    status code: 400, request id: e01c83cf-40ee-4919-b322-fab84f87d594

  on config.tf line 67, in resource "aws_lambda_function" "praw_crawler":
  67: resource "aws_lambda_function" "praw_crawler" {

This means the deployment file does not exist in the S3 bucket. But the second time I run the command, it succeeds. It looks like a timing issue: right after the zip file is uploaded, it is not yet available in the S3 bucket, which is why the first deployment fails; a few seconds later the second run completes successfully and very quickly. Is there something wrong with my configuration file?

The full Terraform configuration file can be found here: https://github.com/zhaoyi0113/quote-datalake/blob/master/config.tf
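
For reference, the relevant shape of the configuration is roughly the following (simplified, with illustrative names rather than the exact contents of the linked file). The function references the bucket and key as plain strings, so nothing ties the Lambda resource to the upload:

# Race-prone layout (illustrative): no reference connects the function
# to the upload, so Terraform may update the Lambda before the object exists.
resource "aws_s3_bucket_object" "quote_crawler_zip" {
  bucket = "${aws_s3_bucket.bucket.id}"
  key    = "quote-crawler.zip"
  source = "quote-crawler.zip"
}

resource "aws_lambda_function" "test_lambda" {
  function_name = "quote-crawler"
  s3_bucket     = "${aws_s3_bucket.bucket.id}"
  s3_key        = "quote-crawler.zip" # plain string: no dependency on the upload above
  handler       = "handler.handler"
  runtime       = "python3.7"
  role          = "${aws_iam_role.role.arn}"
}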

3 answers:

Answer 0 (score: 2):

You need to add the dependencies correctly to achieve this; otherwise it will break.

First, zip the files:

# Zip the Lambda function on the fly
data "archive_file" "source" {
  type        = "zip"
  source_dir  = "../lambda-functions/loadbalancer-to-es"
  output_path = "../lambda-functions/loadbalancer-to-es.zip"
}

Then upload it to S3, declaring a dependency on the zip: source = "${data.archive_file.source.output_path}" makes the upload depend on the archive:

# Upload the zip to S3, then update the Lambda function from S3
resource "aws_s3_bucket_object" "file_upload" {
  bucket = "${aws_s3_bucket.bucket.id}"
  key    = "lambda-functions/loadbalancer-to-es.zip"
  source = "${data.archive_file.source.output_path}" # depends on the zip being built first
}

Then you are good to deploy the Lambda function; the magic that makes it work is s3_key = "${aws_s3_bucket_object.file_upload.key}":

  resource "aws_lambda_function" "elb_logs_to_elasticsearch" {
  function_name = "alb-logs-to-elk"
  description   = "elb-logs-to-elasticsearch"
  s3_bucket   = "${var.env_prefix_name}${var.s3_suffix}"
  s3_key      = "${aws_s3_bucket_object.file_upload.key}" # its mean its depended on upload key
  memory_size = 1024
  timeout     = 900
  timeouts {
  create = "30m"
  }
  runtime          = "nodejs8.10"
  role             = "${aws_iam_role.role.arn}"
  source_code_hash = "${base64sha256(data.archive_file.source.output_path)}"
  handler          = "index.handler"

}
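
This works because Terraform derives its dependency graph from attribute references: the function reads aws_s3_bucket_object.file_upload.key, and the upload reads data.archive_file.source.output_path, so apply always runs zip, then upload, then function, in that order. If you ever need the ordering without an attribute reference, an explicit depends_on achieves the same thing (a sketch, using the 0.11-era list-of-strings form that matches the interpolation syntax above):

resource "aws_lambda_function" "elb_logs_to_elasticsearch" {
  # ... same arguments as above ...
  depends_on = ["aws_s3_bucket_object.file_upload"]
}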

Answer 1 (score: 0):

When using Terraform's source_code_hash with archive_file, you may find that the archive's hash changes even though the code hasn't. If that is a problem for you, I created a module to solve it: lambda-python-archive
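
If you would rather not pull in a module, a common workaround (a sketch, assuming the spurious changes come from generated files such as byte-compiled caches; the paths listed are illustrative) is to exclude the unstable files from the archive so its hash only reflects real source changes. The archive_file data source supports an excludes argument for this:

data "archive_file" "source" {
  type        = "zip"
  source_dir  = "../lambda-functions/loadbalancer-to-es"
  output_path = "../lambda-functions/loadbalancer-to-es.zip"

  # Exclude files that change between builds even when the code does not.
  excludes = ["__pycache__/handler.cpython-37.pyc", ".DS_Store"]
}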

Answer 2 (score: 0):

This is in response to the top answer:

You need to use the archive's .output_base64sha256 attribute for source_code_hash rather than wrapping the path in base64sha256, otherwise terraform plan never reports "no changes / up to date".

For example:

  source_code_hash = "${data.archive_file.source.output_base64sha256}"
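
The distinction matters because base64sha256(data.archive_file.source.output_path) hashes the literal path string, which never matches the CodeSha256 value AWS reports for the deployed zip, so Terraform sees a diff on every plan. output_base64sha256 is the base64-encoded SHA-256 of the zip file itself, the same value AWS computes, so the plan converges once the deployed code matches the archive.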