Why do the management containers not receive an IP when installing with OpenStack-Ansible?

Date: 2018-11-06 12:45:01

Tags: ansible openstack lxc

For testing purposes, I want to install OpenStack on two VirtualBox instances using Ansible. As the documentation describes, I preconfigured the local network with four VLANs and created the bridge interfaces. After that, network connectivity between the hosts was fine.
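The bridge setup follows the pattern from the OpenStack-Ansible deployment guide; a minimal sketch of the br-mgmt piece as an /etc/network/interfaces fragment (the physical interface name enp0s8, the VLAN ID 10, and the host address are placeholders specific to my VirtualBox setup):

```text
# /etc/network/interfaces fragment -- sketch only.
# enp0s8 and VLAN 10 are placeholders for this VirtualBox setup.
auto enp0s8.10
iface enp0s8.10 inet manual
    vlan-raw-device enp0s8

auto br-mgmt
iface br-mgmt inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports enp0s8.10
    address 172.29.236.11
    netmask 255.255.252.0
```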

I also configured the openstack_user_config.yml file:

---
cidr_networks:
  container: 172.29.236.0/22
  tunnel: 172.29.240.0/22
  storage: 172.29.244.0/22
used_ips:
  - "172.29.236.1,172.29.236.255"
  - "172.29.240.1,172.29.240.255"
  - "172.29.244.1,172.29.244.255"

global_overrides:
  internal_lb_vip_address: 192.168.33.22
  external_lb_vip_address: dev-ows.hive
  tunnel_bridge: "br-vxlan"
  management_bridge: "br-mgmt"
  provider_networks:
    - network:
      container_bridge: "br-mgmt"
      container_type: "veth"
      container_interface: "eth1"
      ip_from_q: "container"
      type: "raw"
      group_binds:
        - all_containers
        - hosts
      is_container_address: true
    - network:
      container_bridge: "br-vxlan"
      container_type: "veth"
      container_interface: "eth10"
      ip_from_q: "tunnel"
      type: "vxlan"
      range: "1:1000"
      net_name: "vxlan"
      group_binds:
        - neutron_linuxbridge_agent
    - network:
      container_bridge: "br-vlan"
      container_type: "veth"
      container_interface: "eth11"
      type: "flat"
      net_name: "flat"
      group_binds:
        - neutron_linuxbridge_agent
    - network:
      container_bridge: "br-storage"
      container_type: "veth"
      container_interface: "eth2"
      ip_from_q: "storage"
      type: "raw"
      group_binds:
        - glance_api
        - cinder_api
        - cinder_volume
        - nova_compute
...
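As a sanity check on the address plan above, the subnets and reserved ranges can be verified with a short script (a sketch using Python's standard-library ipaddress module; this is not part of OpenStack-Ansible itself, just a way to confirm each used_ips range actually falls inside its cidr_networks subnet):

```python
#!/usr/bin/env python3
# Sanity-check the cidr_networks / used_ips values from
# openstack_user_config.yml. Values copied from the config above.
import ipaddress

cidr_networks = {
    "container": "172.29.236.0/22",
    "tunnel":    "172.29.240.0/22",
    "storage":   "172.29.244.0/22",
}

# Each entry reserves an inclusive range that OSA will not assign
# to containers.
used_ips = [
    ("172.29.236.1", "172.29.236.255"),
    ("172.29.240.1", "172.29.240.255"),
    ("172.29.244.1", "172.29.244.255"),
]

for name, cidr in cidr_networks.items():
    net = ipaddress.ip_network(cidr)
    reserved = 0
    for first, last in used_ips:
        lo, hi = ipaddress.ip_address(first), ipaddress.ip_address(last)
        if lo in net:  # this reserved range belongs to this subnet
            reserved += int(hi) - int(lo) + 1
    # subtract network and broadcast addresses as well
    assignable = net.num_addresses - 2 - reserved
    print(f"{name}: {net} -> {assignable} assignable addresses")
```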

But after running the playbook, I get errors:

# openstack-ansible setup-hosts.yml
...
TASK [lxc_container_create : Gather container facts] *********************************************************************************************************************************************************************************************************
fatal: [controller01_horizon_container-6da3ab23]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to remote host \"controller01_horizon_container-6da3ab23\". Make sure this host can be reached over ssh", "unreachable": true}
fatal: [controller01_utility_container-3d6724b2]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to remote host \"controller01_utility_container-3d6724b2\". Make sure this host can be reached over ssh", "unreachable": true}
fatal: [controller01_keystone_container-01c915b6]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to remote host \"controller01_keystone_container-01c915b6\". Make sure this host can be reached over ssh", "unreachable": true}
...

I found that the LXC containers created by the Ansible playbook have no network interfaces, and therefore no IP addresses. That is why I get the "host unreachable" error when Ansible connects to these containers over SSH.

# lxc-ls -f
NAME                                           STATE   AUTOSTART GROUPS            IPV4 IPV6 UNPRIVILEGED
controller01_cinder_api_container-e80b0c98     RUNNING 1         onboot, openstack -    -    false
controller01_galera_container-2f58aec8         RUNNING 1         onboot, openstack -    -    false
controller01_glance_container-a2607024         RUNNING 1         onboot, openstack -    -    false
controller01_heat_api_container-d82fd06a       RUNNING 1         onboot, openstack -    -    false
controller01_horizon_container-6da3ab23        RUNNING 1         onboot, openstack -    -    false
controller01_keystone_container-01c915b6       RUNNING 1         onboot, openstack -    -    false
controller01_memcached_container-352c2b47      RUNNING 1         onboot, openstack -    -    false
controller01_neutron_server_container-60ce9d02 RUNNING 1         onboot, openstack -    -    false
controller01_nova_api_container-af09cbb9       RUNNING 1         onboot, openstack -    -    false
controller01_rabbit_mq_container-154e35fe      RUNNING 1         onboot, openstack -    -    false
controller01_repo_container-bb1ebb24           RUNNING 1         onboot, openstack -    -    false
controller01_rsyslog_container-07902098        RUNNING 1         onboot, openstack -    -    false
controller01_utility_container-3d6724b2        RUNNING 1         onboot, openstack -    -    false

Please give me some advice on what I am doing wrong.

1 Answer:

Answer (score: 2)

As you have noticed, the containers are not getting a management IP.

Are you sure the br-mgmt bridge works as expected on both VMs? Check the connectivity between the two hosts over br-mgmt, e.g. by pinging each host's br-mgmt IP address from the other.
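For example (a sketch only; 172.29.236.11 and 172.29.236.12 are the br-mgmt addresses from the addressing plan above, so substitute your own):

```shell
# On the first host: confirm br-mgmt is up and carries its address,
# then ping the second host's br-mgmt address.
ip -br addr show br-mgmt
ping -c 3 172.29.236.12
```

If this ping fails, the problem lies in the host networking (VLANs and bridges), not in the containers.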

If the VLANs and bridges are set up correctly, you should be able to reach the other host through each specific bridge.

$ ansible -vi inventory/myos all -m shell -a "ip route" --limit infra,compute
Using /etc/ansible/ansible.cfg as config file
infra2 | SUCCESS | rc=0 >>
default via 10.255.0.1 dev eno1 onlink 
10.0.3.0/24 dev lxcbr0  proto kernel  scope link  src 10.0.3.1 
10.255.0.0/24 dev eno1  proto kernel  scope link  src 10.255.0.12 
172.29.236.0/22 dev br-mgmt  proto kernel  scope link  src 172.29.236.12 
172.29.240.0/22 dev br-vxlan  proto kernel  scope link  src 172.29.240.12 
172.29.244.0/22 dev br-storage  proto kernel  scope link  src 172.29.244.12 

infra1 | SUCCESS | rc=0 >>
default via 10.255.0.1 dev eno1 onlink 
10.0.3.0/24 dev lxcbr0  proto kernel  scope link  src 10.0.3.1 
10.255.0.0/24 dev eno1  proto kernel  scope link  src 10.255.0.11 
172.29.236.0/22 dev br-mgmt  proto kernel  scope link  src 172.29.236.11 
172.29.240.0/22 dev br-vxlan  proto kernel  scope link  src 172.29.240.11 
172.29.244.0/22 dev br-storage  proto kernel  scope link  src 172.29.244.11 

infra3 | SUCCESS | rc=0 >>
default via 10.255.0.1 dev eno1 onlink 
10.0.3.0/24 dev lxcbr0  proto kernel  scope link  src 10.0.3.1 
10.255.0.0/24 dev eno1  proto kernel  scope link  src 10.255.0.13 
172.29.236.0/22 dev br-mgmt  proto kernel  scope link  src 172.29.236.13 
172.29.240.0/22 dev br-vxlan  proto kernel  scope link  src 172.29.240.13 
172.29.244.0/22 dev br-storage  proto kernel  scope link  src 172.29.244.13

compute1 | SUCCESS | rc=0 >>
default via 10.255.0.1 dev eno1 onlink 
10.255.0.0/24 dev eno1  proto kernel  scope link  src 10.255.0.16 
172.29.236.0/22 dev br-mgmt  proto kernel  scope link  src 172.29.236.16 
172.29.240.0/22 dev br-vxlan  proto kernel  scope link  src 172.29.240.16 
172.29.244.0/22 dev br-storage  proto kernel  scope link  src 172.29.244.16 

compute2 | SUCCESS | rc=0 >>
default via 10.255.0.1 dev eno1 onlink 
10.255.0.0/24 dev eno1  proto kernel  scope link  src 10.255.0.17 
172.29.236.0/22 dev br-mgmt  proto kernel  scope link  src 172.29.236.17 
172.29.240.0/22 dev br-vxlan  proto kernel  scope link  src 172.29.240.17 
172.29.244.0/22 dev br-storage  proto kernel  scope link  src 172.29.244.17 

So, using the br-mgmt IP (172.29.236.x) of any of the hosts above, I should be able to reach the peers on the same br-mgmt subnet.