Deploying and configuring EC2 with Ansible

Date: 2018-11-05 17:12:33

Tags: deployment ansible hosts

I am trying to deploy and configure a cluster on EC2/AWS using Ansible, and I want the deployment and the configuration to be part of the same playbook.

main.yml:

    - hosts: localhost
      gather_facts: false

      vars_files:
        - vars/main.yml

      tasks:
        - name: Deploy the master for the kubernetes cluster
          include_tasks: tasks/kub_master.yml

        - name: Configure Master Kub node
          include_tasks: tasks/config_kub_master.yml

kub_master.yml

---
- name: Deploy the admin node
  ec2:
    region: "{{ region }}"
    key_name: "{{ ssh_key_name }}"
    instance_type: "{{ master_inst_type }}"
    image: "{{ image_id }}"
    count: "{{ master_inst_count }}"
    assign_public_ip: no
    vpc_subnet_id: "{{ subnet_id }}"
    group_id: "{{ sg_id }}"
    wait: yes
    wait_timeout: 1800
    volumes:
      - device_name: /dev/xvda
        volume_type: gp2
        volume_size: 50
        delete_on_termination: true
    user_data: "{{ lookup ('file', '../files/user_data_master.sh') }}"
    instance_tags:
      Name: "{{ kub_cluster }}-admin-node"
      lob: "{{ tags_lob }}"
      project: "{{ tags_project }}"
      component: "{{ kub_cluster}}_kub_master_node"
      contact_email: "{{ tags_contact_email }}"
      product: "{{ tags_product }}"
  async: 45
  poll: 25
  register: kub_mas
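
Side note: since the ec2 task registers its result in kub_mas, a common follow-up (not shown in the post) is an add_host task so that later plays can target the new instance directly. A minimal sketch, assuming a hypothetical group name kub_master and that kub_mas.instances[0].private_ip is populated as it is used further down:

    # Sketch only: these tasks would sit in kub_master.yml right after the ec2
    # task above, still running against localhost in the provisioning play.
    # "kub_master" is a hypothetical in-memory group name used for illustration.
    - name: Add the new master instance to the in-memory inventory
      add_host:
        name: "{{ kub_mas.instances[0].private_ip }}"
        groups: kub_master

    - name: Wait for SSH to come up on the new instance
      wait_for:
        host: "{{ kub_mas.instances[0].private_ip }}"
        port: 22
        delay: 10
        timeout: 320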

Kub_configure.yml

---
- hosts: "{{ kub_mas.instances[0].private_ip }}"
  gather_facts: true
  remote_user: remote_user
  shell: " cat /etc/redhat-release " 

However, this does not seem to work for the Kub_configure part, as it appears to fail on the remote execution.

How can we deploy the node and then use the IP from that deployment to configure it, all within a single playbook?
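
For reference, one common way to structure this in a single playbook is to keep the provisioning in a localhost play, publish the new instance into the in-memory inventory (for example with add_host, as sketched above), and then run the configuration as a second play that targets that group instead of pulling it in with include_tasks. A rough sketch only, under those assumptions:

    ---
    # Rough sketch: assumes the provisioning tasks (kub_master.yml above) finish
    # with an add_host step that puts the new instance into a hypothetical
    # in-memory group called "kub_master".
    - hosts: localhost
      gather_facts: false
      vars_files:
        - vars/main.yml
      tasks:
        - name: Deploy the master for the kubernetes cluster
          include_tasks: tasks/kub_master.yml

    - hosts: kub_master
      gather_facts: true
      remote_user: remote_user
      tasks:
        - name: Check the OS release on the new master node
          shell: cat /etc/redhat-release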

Here is the output of the Ansible run. You can see that the task is being executed locally, even though I am trying to supply the remote address.

TASK [Configure Master Kub node] ******************************************************************************************************************************************************************************
task path: /home/username/Repo_S/kube_cluster/cluster_deploy/cluster_deploy.yml:11
Read vars_file 'vars/main.yml'
included: /home/username/Repo_S/kube_cluster/cluster_deploy/tasks/master/master_config.yml for localhost
Read vars_file 'vars/main.yml'
Read vars_file 'vars/main.yml'

TASK [shell] **************************************************************************************************************************************************************************************************
task path: /home/username/Repo_S/kube_cluster/cluster_deploy/tasks/master/master_config.yml:2
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: username
<localhost> EXEC /bin/sh -c 'echo ~username && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813 `" && echo ansible-tmp-1541439662.55-121688078512813="` echo /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813 `" ) && sleep 0'
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/commands/command.py
<localhost> PUT /home/username/.ansible/tmp/ansible-local-30214U7V93F/tmpmcASIl TO /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813/command.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813/ /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813/command.py && sleep 0'
<localhost> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-sywhejuzolifjntwhpbxlesbbbutlegn; /usr/bin/python /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813/command.py'"'"' && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
    "changed": false,
    "module_stderr": "sudo: a password is required\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE",
    "rc": 1
}
    to retry, use: --limit @/home/username/Repo_S/kube_cluster/cluster_deploy/cluster_deploy.retry

PLAY RECAP ****************************************************************************************************************************************************************************************************
localhost                  : ok=3    changed=1    unreachable=0    failed=1

1 Answer:

Answer 0 (score: 0):

In Kub_configure.yml, try using become: yes.

This may solve the problem:

---
- hosts: "{{ kub_mas.instances[0].private_ip }}"
  gather_facts: true
  become: yes
  remote_user: remote_user
  shell: " cat /etc/redhat-release "
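
One observation from the verbose output above: become: yes should take care of the "sudo: a password is required" failure, but the log also shows the shell task establishing a local connection, so the configure step is still executing on localhost. If so, the configuration will likely also need to run as a separate play against the newly created instance (for example a group added with add_host), rather than being included into the localhost play with include_tasks.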