How to dynamically get the private IPs of EC2 instances and put them into /etc/hosts

Date: 2018-01-10 09:45:08

Tags: terraform

I want to create multiple EC2 instances with Terraform and, on each instance, write the instances' private IP addresses to /etc/hosts.

I'm currently trying the following code, but it doesn't work:

resource "aws_instance" "ceph-cluster" {
  count = "${var.ceph_cluster_count}"
  ami           = "${var.app_ami}"
  instance_type = "t2.small"
  key_name      = "${var.ssh_key_name}"

  vpc_security_group_ids = [
    "${var.vpc_ssh_sg_ids}",
    "${aws_security_group.ceph.id}",
  ]

  subnet_id                   = "${element(split(",", var.subnet_ids), count.index)}"

  associate_public_ip_address = "true"
  // TODO: IAM role temporarily hardcoded
  //iam_instance_profile        = "${aws_iam_instance_profile.app_instance_profile.name}"
  iam_instance_profile        = "${var.iam_role_name}"

  root_block_device {
    delete_on_termination = "true"
    volume_size           = "30"
    volume_type           = "gp2"
  }

  connection {
    user        = "ubuntu"
    private_key = "${file("${var.ssh_key}")}"
    agent = "false"
  }

  provisioner "file" {
    source      = "../../../scripts"
    destination = "/home/ubuntu/"
  }

  tags {
    Name = "${var.infra_name}-ceph-cluster-${count.index}"
    InfraName = "${var.infra_name}"
  }

  provisioner "remote-exec" {
      inline = [
        "cat /etc/hosts",
        "cat ~/scripts/ceph/ceph_rsa.pub >> ~/.ssh/authorized_keys",
        "cp -arp  ~/scripts/ceph/ceph_rsa ~/.ssh/ceph_rsa",
        "chmod 700 ~/.ssh/ceph_rsa",
        "echo 'IdentityFile    ~/.ssh/ceph_rsa' >> ~/.ssh/config",
        "echo 'User            ubuntu' >> ~/.ssh/config",
        "echo '${aws_instance.ceph-cluster.0.private_ip} node01 ceph01' >> /etc/hosts ",
        "echo '${aws_instance.ceph-cluster.1.private_ip} node02 ceph02' >> /etc/hosts "
      ]
  }

}


aws_instance.ceph-cluster.*.private_ip

I want to take the result of the expression above and put it into /etc/hosts.

3 Answers:

Answer 0 (score: 2)

Terraform provisioners expose the self syntax for getting data about the resource being created.

If you are only interested in the private IP address of the instance being created, you can use ${self.private_ip} for this.
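For example, a minimal sketch (the resource name is hypothetical; self only works inside a provisioner of the resource being created):

```hcl
resource "aws_instance" "example" {
  # ... ami, instance_type, etc. ...

  provisioner "remote-exec" {
    inline = [
      # self.private_ip resolves to the private IP of this very instance
      "echo '${self.private_ip} $(hostname -s)' | sudo tee -a /etc/hosts",
    ]
  }
}
```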

Unfortunately, if you need the IP addresses of multiple sub-resources (e.g. ones created with the count meta-attribute), you will need to do this outside the resource's provisioners, using a null_resource.

The null_resource provider documentation shows a good use case:

resource "aws_instance" "cluster" {
  count = 3
  ...
}

resource "null_resource" "cluster" {
  # Changes to any instance of the cluster requires re-provisioning
  triggers {
    cluster_instance_ids = "${join(",", aws_instance.cluster.*.id)}"
  }

  # Bootstrap script can run on any instance of the cluster
  # So we just choose the first in this case
  connection {
    host = "${element(aws_instance.cluster.*.public_ip, 0)}"
  }

  provisioner "remote-exec" {
    # Bootstrap script called with private_ip of each node in the cluster
    inline = [
      "bootstrap-cluster.sh ${join(" ", aws_instance.cluster.*.private_ip)}",
    ]
  }
}

But in your case, you probably want something like this:

resource "aws_instance" "ceph-cluster" {
  ...
}

resource "null_resource" "ceph-cluster" {
  count = "${var.ceph_cluster_count}"

  # Changes to any instance of the cluster requires re-provisioning
  triggers {
    cluster_instance_ids = "${join(",", aws_instance.ceph-cluster.*.id)}"
  }

  connection {
    host = "${element(aws_instance.ceph-cluster.*.public_ip, count.index)}"
  }

  provisioner "remote-exec" {
      inline = [
        "cat /etc/hosts",
        "cat ~/scripts/ceph/ceph_rsa.pub >> ~/.ssh/authorized_keys",
        "cp -arp  ~/scripts/ceph/ceph_rsa ~/.ssh/ceph_rsa",
        "chmod 700 ~/.ssh/ceph_rsa",
        "echo 'IdentityFile    ~/.ssh/ceph_rsa' >> ~/.ssh/config",
        "echo 'User            ubuntu' >> ~/.ssh/config",
        "echo '${aws_instance.ceph-cluster.0.private_ip} node01 ceph01' >> /etc/hosts ",
        "echo '${aws_instance.ceph-cluster.1.private_ip} node02 ceph02' >> /etc/hosts "
      ]
  }
}

Answer 1 (score: 1)

This could be a piece of cake with Terraform + Sparrowform. No null_resource is needed, and there is minimal hassle:

Bootstrap the infrastructure:

$ terraform apply

Prepare a Sparrowform provisioning scenario that inserts every node's public IP / DNS name into each node's /etc/hosts file:

$ cat sparrowfile

#!/usr/bin/env perl6

use Sparrowform;

my @hosts = (
  "127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4",
  "::1         localhost localhost.localdomain localhost6 localhost6.localdomain6"
);

for tf-resources() -> $r {
  my $rd = $r[1]; # resource data
  next unless $rd<public_ip>;
  next unless $rd<public_dns>;
  next if $rd<public_ip> eq input_params('Host');
  push @hosts, $rd<public_ip> ~ ' ' ~ $rd<public_dns>;
}

file '/etc/hosts', %(
  action  => 'create',
  content => @hosts.join("\n")
);

Give it a run; Sparrowform will execute the scenario on every node:

$ sparrowform --bootstrap --ssh_private_key=~/.ssh/aws.key --ssh_user=ec2-user

PS. Disclosure: I am the author of the tool.

Answer 2 (score: 1)

I had a similar need for a database cluster (a sort of poor man's Consul alternative) and ended up using the following Terraform file:

variable "cluster_member_count" {
  description = "Number of members in the cluster"
  default = "3"
}
variable "cluster_member_name_prefix" {
  description = "Prefix to use when naming cluster members"
  default = "cluster-node-"
}
variable "aws_keypair_privatekey_filepath" {
  description = "Path to SSH private key to SSH-connect to instances"
  default = "./secrets/aws.key"
}

# EC2 instances
resource "aws_instance" "cluster_member" {
  count = "${var.cluster_member_count}"
  # ...
}

# Bash command to populate /etc/hosts file on each instances
resource "null_resource" "provision_cluster_member_hosts_file" {
  count = "${var.cluster_member_count}"

  # Changes to any instance of the cluster requires re-provisioning
  triggers {
    cluster_instance_ids = "${join(",", aws_instance.cluster_member.*.id)}"
  }
  connection {
    type = "ssh"
    host = "${element(aws_instance.cluster_member.*.public_ip, count.index)}"
    user = "ec2-user"
    private_key = "${file(var.aws_keypair_privatekey_filepath)}"
  }
  provisioner "remote-exec" {
    inline = [
      # Adds all cluster members' IP addresses to /etc/hosts (on each member)
      "echo '${join("\n", formatlist("%v", aws_instance.cluster_member.*.private_ip))}' | awk 'BEGIN{ print \"\\n\\n# Cluster members:\" }; { print $0 \" ${var.cluster_member_name_prefix}\" NR-1 }' | sudo tee -a /etc/hosts > /dev/null",
    ]
  }
}
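The awk one-liner above is dense; stripped of the Terraform interpolation and of the `sudo tee` redirection, its transformation can be tried locally (the three private IPs below are hypothetical stand-ins for what `join("\n", ...)` would produce):

```shell
# Simulate the joined list of member private IPs and apply the same awk
# program: print a "# Cluster members:" header, then append the member
# name (prefix + zero-based index) to each IP line.
printf '10.0.1.245\n10.0.1.198\n10.0.1.153\n' \
  | awk 'BEGIN{ print "\n\n# Cluster members:" }; { print $0 " cluster-node-" NR-1 }'
```

This prints the header followed by lines such as `10.0.1.245 cluster-node-0`, which is exactly what each member ends up appending to its /etc/hosts.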

One convention: each cluster member is named with the cluster_member_name_prefix Terraform variable followed by the count index (starting at 0): cluster-node-0, cluster-node-1, etc.

This adds the following lines to the /etc/hosts file of every "aws_instance.cluster_member" resource (the exact same lines, in the same order, on each member):

# Cluster members:
10.0.1.245 cluster-node-0
10.0.1.198 cluster-node-1
10.0.1.153 cluster-node-2

In my case, the null_resource that populates the /etc/hosts file was triggered by an EBS volume attachment, but the "${join(",", aws_instance.cluster_member.*.id)}" trigger should work fine too.
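For reference, the EBS-attachment variant of the trigger looks roughly like this (aws_volume_attachment.cluster_member_data is a hypothetical resource name; the connection and provisioner blocks stay as shown above):

```hcl
resource "null_resource" "provision_cluster_member_hosts_file" {
  count = "${var.cluster_member_count}"

  # Re-provision whenever any member's EBS volume attachment changes
  triggers {
    attachment_ids = "${join(",", aws_volume_attachment.cluster_member_data.*.id)}"
  }

  # connection { ... } and provisioner "remote-exec" { ... } as above
}
```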

Additionally, for local development, I added a local-exec provisioner to record each IP locally in a cluster_ips.txt file:

resource "null_resource" "write_resource_cluster_member_ip_addresses" {
  depends_on = ["aws_instance.cluster_member"]

  provisioner "local-exec" {
    command = "echo '${join("\n", formatlist("instance=%v ; private=%v ; public=%v", aws_instance.cluster_member.*.id, aws_instance.cluster_member.*.private_ip, aws_instance.cluster_member.*.public_ip))}' | awk '{print \"node=${var.cluster_member_name_prefix}\" NR-1 \" ; \" $0}' > \"${path.module}/cluster_ips.txt\""
    # Outputs is:
    # node=cluster-node-0 ; instance=i-03b1f460318c2a1c3 ; private=10.0.1.245 ; public=35.180.50.32
    # node=cluster-node-1 ; instance=i-05606bc6be9639604 ; private=10.0.1.198 ; public=35.180.118.126
    # node=cluster-node-2 ; instance=i-0931cbf386b89ca4e ; private=10.0.1.153 ; public=35.180.50.98
  }
}

Then the following shell command adds them to my local /etc/hosts file:

awk -F'[;=]' '{ print $8 " " $2 " #" $4 }' cluster_ips.txt >> /etc/hosts

Example:

35.180.50.32 cluster-node-0 # i-03b1f460318c2a1c3
35.180.118.126 cluster-node-1 # i-05606bc6be9639604
35.180.50.98 cluster-node-2 # i-0931cbf386b89ca4e
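As a sanity check, the `-F'[;=]'` field splitting can be tried on a single sample line from cluster_ips.txt (with `;` and `=` both as separators, the public IP lands in field 8, the node name in field 2, and the instance ID in field 4):

```shell
# One hypothetical line in cluster_ips.txt format, pushed through the
# same awk program used to build the /etc/hosts entries.
echo 'node=cluster-node-0 ; instance=i-03b1f460318c2a1c3 ; private=10.0.1.245 ; public=35.180.50.32' \
  | awk -F'[;=]' '{ print $8 " " $2 " #" $4 }'
```

Note that the fields keep the spaces surrounding the separators, so the exact spacing of the output differs slightly from the hand-written example above.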