Why don't target group attachments work like volume attachments?

Date: 2019-02-18 16:34:46

Tags: amazon-web-services terraform

When using count (3) on an aws_instance, aws_volume_attachment, and aws_lb_target_group_attachment, I can terminate a single EC2 instance and, on the next terraform apply, it will create only the single missing instance and the single missing volume attachment, but it will try to destroy and recreate all 3 target group attachments. This causes all 3 instances in the target group to be briefly unhealthy, meaning requests are still sent to all 3 instances even though 2 of them are healthy and one is not.


So if I terminate instance 1, I would expect the next terraform apply to recreate zookeeper[1], kafka_att[1], and schemaRegistryTgAttach[1].

The code above creates instance[1] and volume_attachment[1], but no target group attachments. If I remove the lifecycle block from the target_group_attachments, it destroys and recreates all 3 of them. How can I change it so that when a single EC2 instance is created, only the single corresponding target group attachment is created?
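One commonly suggested workaround for this class of problem (a sketch, not verified against this exact configuration; it assumes Terraform 0.11, as the interpolation syntax in the question suggests): reference the instance ID by direct index instead of `element()`. In 0.11, `element()` over a splat list that contains a to-be-replaced instance can cause Terraform to treat every attachment's `target_id` as changed, while direct indexing ties each attachment to a single list entry:

```hcl
# Hypothetical variant of the attachment resource, assuming the same
# resource names as in the question and Terraform 0.11 syntax.
resource "aws_lb_target_group_attachment" "schemaRegistryTgAttach" {
  count            = "${var.zookeeper-max}"
  target_group_arn = "${aws_alb_target_group.KafkaSchemaRegistryTG.arn}"

  # Direct index instead of element(): attachment N depends only on
  # instance N, so replacing one instance should only touch one attachment.
  target_id = "${aws_instance.zookeeper.*.id[count.index]}"
}
```

The explicit `depends_on` is dropped here because the `target_id` reference already creates the dependency on `aws_instance.zookeeper`.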

If I try to use the same approach as the volume attachments...

resource "aws_volume_attachment" "kafka_att" {
  count       = "${var.zookeeper-max}"
  device_name = "/dev/sdh"
  volume_id   = "${element(aws_ebs_volume.kafkaVolumes.*.id,count.index)}"
  instance_id = "${element(aws_instance.zookeeper.*.id,count.index)}"
  depends_on  = ["aws_instance.zookeeper", "aws_ebs_volume.kafkaVolumes"]
  lifecycle {
    ignore_changes = ["aws_instance.zookeeper"]
  }
}

resource "aws_lb_target_group_attachment" "schemaRegistryTgAttach" {
  count            = "${var.zookeeper-max}"
  target_group_arn = "${aws_alb_target_group.KafkaSchemaRegistryTG.arn}"
  target_id        = "${element(aws_instance.zookeeper.*.id,count.index)}"
  depends_on       = ["aws_instance.zookeeper"]
  lifecycle {
    ignore_changes = ["aws_instance.zookeeper"]
  }
}

resource "aws_instance" "zookeeper" {
    count = "${var.zookeeper-max}"
    ...
    blah blah
}

...then it does not create the TG attachments, but it does create the correct volume attachments. The plan output is:

resource "aws_lb_target_group_attachment" "schemaRegistryTgAttach" {
  count            = "${var.zookeeper-max}"
  target_group_arn = "${aws_alb_target_group.KafkaSchemaRegistryTG.arn}"
  target_id        = "${element(aws_instance.zookeeper.*.id,count.index)}"
  depends_on       = ["aws_instance.zookeeper"]
  lifecycle {
    ignore_changes = ["aws_instance.zookeeper"]
  }
}

But if I remove the lifecycle block from the target group attachments, it tries to destroy and recreate all 3 target group attachments.
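As an aside on the lifecycle block above: in Terraform 0.11, `ignore_changes` takes attribute names of the resource it appears in (for example `"target_id"`), not addresses of other resources, so `ignore_changes = ["aws_instance.zookeeper"]` most likely matches nothing. An ignore rule that actually suppresses the diff on the attachment would look like the sketch below, though note it would also suppress the desired recreation of the one attachment whose instance was replaced, so it is shown only to illustrate the semantics:

```hcl
# Illustration of ignore_changes semantics only; this is probably not
# the fix you want, since it also hides legitimate target_id changes.
lifecycle {
  # Entries name attributes of THIS resource; target_id is what changes
  # when a backing instance is replaced.
  ignore_changes = ["target_id"]
}
```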

+ aws_instance.zookeeper[0]
id:                                        <computed>
ami:                                       "ami-09693313102a30b2c"
arn:                                       <computed>
associate_public_ip_address:               "false"
availability_zone:                         <computed>
cpu_core_count:                            <computed>
cpu_threads_per_core:                      <computed>
credit_specification.#:                    "1"
credit_specification.0.cpu_credits:        "unlimited"
disable_api_termination:                   "true"
ebs_block_device.#:                        <computed>
ebs_optimized:                             "false"
ephemeral_block_device.#:                  <computed>
get_password_data:                         "false"
host_id:                                   <computed>
iam_instance_profile:                      "devl-ZOOKEEPER_IAM_PROFILE"
instance_state:                            <computed>
instance_type:                             "t3.small"
ipv6_address_count:                        <computed>
ipv6_addresses.#:                          <computed>
key_name:                                  "devl-key"
monitoring:                                "false"
network_interface.#:                       <computed>
network_interface_id:                      <computed>
password_data:                             <computed>
placement_group:                           <computed>
primary_network_interface_id:              <computed>
private_dns:                               <computed>
private_ip:                                <computed>
public_dns:                                <computed>
public_ip:                                 <computed>
root_block_device.#:                       "1"
root_block_device.0.delete_on_termination: "true"
root_block_device.0.volume_id:             <computed>
root_block_device.0.volume_size:           "16"
root_block_device.0.volume_type:           "gp2"
security_groups.#:                         <computed>
subnet_id:                                 "subnet-5b8d8200"
tags.%:                                    "3"
tags.Description:                          "Do not terminate more than one at a time"
tags.Env:                                  "devl"
tags.Name:                                 "devl-zookeeper-0"
tenancy:                                   <computed>
user_data:                                 "70fd2ae9f7da42e2fb15328cd6539c4f7ed4a5be"
volume_tags.%:                             <computed>
vpc_security_group_ids.#:                  "1"
vpc_security_group_ids.3423986071:         "sg-03911aa28dbcb3f20"

+ aws_volume_attachment.kafka_att[0]
id:                                        <computed>
device_name:                               "/dev/sdh"
instance_id:                               "${element(aws_instance.zookeeper.*.id,count.index)}"
volume_id:                                 "vol-021d1530117f31905"

How can I make this behave like the volume attachments, so that if instance 3 dies, terraform apply creates volume attachment 3 and only TG attachment 3?

0 Answers:

There are no answers.