Before Terraform supported Storage Gateway in AWS, I created three file gateways by other means. Essentially, I used Terraform to stand up the supported pieces (IAM policies, S3 buckets, EC2 instances, cache volumes) and a bash script making CLI calls to tie it all together. It worked well.
Now that Terraform supports creating/activating a file gateway (including provisioning the cache volume), I have refactored my Terraform to eliminate the bash script.
The gateway instance and cache volume are created with the following Terraform:
resource "aws_instance" "gateway" {
  ami           = "${var.instance_ami}"
  instance_type = "${var.instance_type}"
  # Refer to AWS File Gateway documentation for minimum system requirements.
  ebs_optimized = true
  subnet_id     = "${element(data.aws_subnet_ids.subnets.ids, random_integer.priority.result)}"

  ebs_block_device {
    device_name           = "/dev/xvdf"
    volume_size           = "${var.ebs_cache_volume_size}"
    volume_type           = "gp2"
    delete_on_termination = true
  }

  key_name = "${var.key_name}"

  vpc_security_group_ids = [
    "${aws_security_group.storage_gateway.id}",
  ]
}
Once the instance is up and running, the following snippet from the bash script looks up the volume ID and configures that volume as the gateway cache:
# gets the gateway_arn and uses that to lookup the volume ID
gateway_arn=$(aws storagegateway list-gateways --query "Gateways[*].{arn:GatewayARN,name:GatewayName}" --output text | grep ${gateway_name} | awk '{print $1}')
volume_id=$(aws storagegateway list-local-disks --gateway-arn ${gateway_arn} --query "Disks[*].{id:DiskId}" --output text)
echo "the volume ID is $volume_id"
# add the gateway cache
echo "adding cache to the gateway"
aws storagegateway add-cache --gateway-arn ${gateway_arn} --disk-id ${volume_id}
The end result of this process is a gateway that is online with the cache volume configured, but the Terraform state only knows about the instance. I have since refactored the Terraform to include the following:
resource "aws_storagegateway_gateway" "nfs_file_gateway" {
  gateway_ip_address = "${aws_instance.gateway.private_ip}"
  gateway_name       = "${var.gateway_name}"
  gateway_timezone   = "${var.gateway_time_zone}"
  gateway_type       = "FILE_S3"
}

resource "aws_storagegateway_cache" "nfs_cache_volume" {
  disk_id     = "${aws_instance.gateway.ebs_block_device.volume_id}"
  gateway_arn = "${aws_storagegateway_gateway.nfs_file_gateway.id}"
}
From there, I ran the following command to get the disk_id of the cache volume (note that I have redacted the account ID and gateway ID):
aws storagegateway list-local-disks --gateway-arn arn:aws:storagegateway:us-east-1:[account_id]:gateway/[gateway_id] --region us-east-1
This returns:
{
    "GatewayARN": "arn:aws:storagegateway:us-east-1:[account_id]:gateway/[gateway_id]",
    "Disks": [
        {
            "DiskId": "xen-vbd-51792",
            "DiskPath": "/dev/xvdf",
            "DiskNode": "/dev/sdf",
            "DiskStatus": "present",
            "DiskSizeInBytes": 161061273600,
            "DiskAllocationType": "CACHE STORAGE"
        }
    ]
}
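For scripting around a response like this without the CLI's --query flag, the DiskId can be pulled out of the JSON locally. This is just a sketch using sed on a trimmed copy of the example response above (a real script should prefer jq or --query, since sed is fragile for JSON):

```shell
# Trimmed copy of the example response above, inlined for illustration.
response='{"Disks": [{"DiskId": "xen-vbd-51792", "DiskPath": "/dev/xvdf"}]}'

# Extract the first DiskId value from the JSON string.
disk_id=$(printf '%s' "$response" | sed -n 's/.*"DiskId": *"\([^"]*\)".*/\1/p')
echo "$disk_id"   # xen-vbd-51792
```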
I then ran the Terraform import command against the aws_storagegateway_cache resource to pull the existing resource into the state file.
The command I ran:
terraform_11.5 import module.sql_backup_file_gateway.module.storage_gateway.aws_storagegateway_cache.nfs_cache_volume arn:aws:storagegateway:us-east-1:[account_id]:gateway/[gateway_id]:xen-vbd-51792
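The import ID here is the gateway ARN and the disk ID joined by a colon. A minimal sketch of assembling it (the ARN values below are hypothetical placeholders, not my real IDs):

```shell
# Hypothetical placeholder values, for illustration only.
gateway_arn="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678"
disk_id="xen-vbd-51792"

# aws_storagegateway_cache expects "<gateway_arn>:<disk_id>" as the import ID.
import_id="${gateway_arn}:${disk_id}"
echo "$import_id"
```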
The import completes successfully. I then run terraform init and terraform plan, and the plan shows that the cache volume would be recreated if I were to run an apply.
Output of the plan:
-/+ module.sql_backup_file_gateway.module.storage_gateway.aws_storagegateway_cache.nfs_cache_volume (new resource required)
      id:          "arn:aws:storagegateway:us-east-1:[account_id]:gateway/[gateway_id]:xen-vbd-51792" => <computed> (forces new resource)
      disk_id:     "xen-vbd-51792" => "1" (forces new resource)
      gateway_arn: "arn:aws:storagegateway:us-east-1:[account_id]:gateway/[gateway_id]" => "arn:aws:storagegateway:us-east-1:[account_id]:gateway/[gateway_id]"
I am unable to provide a different value for disk_id in the import statement that would allow the import to complete. I am not sure how to avoid recreating the cache volume if I run terraform apply.
Answer (score: 2)
I actually found the solution. @ydaetskcoR - your comment about mapping the volume_id to the disk_id led me to the Terraform I needed to bridge the gap between the instance declaration and the cache declaration.
This Terraform block allowed me to look up the disk_id of the ebs_block_device in a way that can be referenced later in the Terraform:
data "aws_storagegateway_local_disk" "cache" {
  disk_path   = "/dev/xvdf"
  gateway_arn = "${aws_storagegateway_gateway.nfs_file_gateway.arn}"
}
Once this block was added, I refactored the Terraform that configures the cache to the following:
resource "aws_storagegateway_cache" "nfs_cache_volume" {
  disk_id     = "${data.aws_storagegateway_local_disk.cache.id}"
  gateway_arn = "${aws_storagegateway_gateway.nfs_file_gateway.id}"
}
Now when I run terraform init and terraform plan, the gateway volume no longer shows as needing any changes or replacement.
Thanks for the help in tracking this down.
- Dave