I'm using Terraform to publish a lambda to AWS. It works fine when deploying to AWS, but it gets stuck on "Refreshing state..." when running against localstack. Below is my .tf config file; you can see I've configured the lambda endpoint as http://localhost:4567.
provider "aws" {
  profile = "default"
  region  = "ap-southeast-2"

  endpoints {
    lambda = "http://localhost:4567"
  }
}
variable "runtime" {
  default = "python3.6"
}
data "archive_file" "zipit" {
  type        = "zip"
  source_dir  = "crawler/dist"
  output_path = "crawler/dist/deploy.zip"
}
resource "aws_lambda_function" "test_lambda" {
  filename         = "crawler/dist/deploy.zip"
  function_name    = "quote-crawler"
  role             = "arn:aws:iam::773592622512:role/LambdaRole"
  handler          = "handler.handler"
  source_code_hash = "${data.archive_file.zipit.output_base64sha256}"
  runtime          = "${var.runtime}"
}
Here is the docker-compose file for localstack:
version: '2.1'
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4567-4583:4567-4583"
      - '8055:8080'
    environment:
      - SERVICES=${SERVICES-lambda }
      - DEBUG=${DEBUG- }
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR-docker-reuse }
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
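As a sanity check before running terraform plan, you can bring the stack up and confirm something is actually listening on the port Terraform is pointed at. This is a sketch assuming docker-compose is installed and the compose file above sits in the current directory:

```shell
# Start localstack in the background
docker-compose up -d localstack

# Confirm the endpoint Terraform targets accepts connections;
# a hang on "Refreshing state..." is consistent with this port being unreachable.
curl -sv http://localhost:4567 >/dev/null
```

If the curl call cannot connect, the problem is between Terraform and localstack (wrong port, container not up), not in the .tf config itself.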
Does anyone know how to fix this?
Answer 0: (score: 2)
This is how I fixed a similar issue. First, enable the most verbose logging:
export TF_LOG=TRACE
Then run terraform plan .... The trace contained:
dag/walk: vertex "module.kubernetes_apps.provider.helmfile (close)" is waiting for "module.kubernetes_apps.helmfile_release_set.metrics_server"
From the logs I identified the piece of state causing the problem: module.kubernetes_apps.helmfile_release_set.metrics_server.
I removed its state:
terraform state rm module.kubernetes_apps.helmfile_release_set.metrics_server
After that, terraform plan should work. This is not the best solution; that's why I contacted the owner of this provider to fix the issue without this workaround.
Answer 1: (score: 0)
Mine was failing because terraform was trying to validate credentials against real AWS. Adding the following two lines to the provider block in your .tf config file solves the issue:
skip_credentials_validation = true
skip_metadata_api_check     = true
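Put together, a provider block for localstack might look like the following. This is a sketch based on the question's config; the dummy access_key/secret_key values are an assumption commonly used with localstack and are not part of the original answer:

```hcl
provider "aws" {
  region     = "ap-southeast-2"
  access_key = "test" # assumption: localstack accepts dummy credentials
  secret_key = "test"

  # Stop Terraform from calling out to real AWS for credential/metadata checks
  skip_credentials_validation = true
  skip_metadata_api_check     = true

  endpoints {
    lambda = "http://localhost:4567"
  }
}
```

With these flags set, the provider no longer needs to reach real AWS at all, so plan/refresh runs stay entirely against the local endpoint.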