Cinder Volume Create: No valid host was found. No weighed hosts available

Date: 2016-09-16 16:07:12

Tags: openstack cinder

I am trying to install and configure OpenStack Mitaka on a 4-node stack: 1 controller, 1 compute, 1 block storage, and 1 object storage node. While trying to set up the block storage node, I am unable to create volumes through the dashboard. The base OS is Ubuntu 14.04 and, as I said before, the OpenStack release is Mitaka.

Here is the cinder.conf on the controller node:

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.11
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_protocol = iscsi


[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = *********

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = **********


[database]
connection = mysql+pymysql://cinder:********@controller/cinder

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm

Here is the cinder.conf on the Cinder (block storage) node:

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.41

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = **********
enabled_backends = lvm
glance_api_servers = http://controller:9292

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = ********

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[database]
#connection = mysql+pymysql://cinder:*******@controller/cinder
connection = mysql+pymysql://cinder:*******@controller/cinder
#connection = mysql://cinder:******@controller/cinder

[api_database]
connection = mysql+pymysql://cinder:*******@controller/cinder_api



[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm

After creating a volume, its status is "Error". Below is the error line from the cinder-scheduler.log file on the controller node:

2016-09-07 17:14:22.291 10607 ERROR cinder.scheduler.flows.create_volume [req-272c5387-a2e3-4371-8a14-8330831910d0 a43909277cbb418fa12fab4d22e0586c 64d180e39e2345ac9bbcd0c389b0a7c4 - - -] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid host was found. No weighed hosts available

Here is what I believe is the most important part of the error message:

volume:create: No valid host was found. No weighed hosts available

When I run the command cinder service-list from the controller node, I get the following output:

+------------------+------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |    Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled |   up  | 2016-09-07T22:13:11.000000 |        -        |
|  cinder-volume   |   cinder   | nova | enabled |   up  | 2016-09-07T22:13:30.000000 |        -        |
+------------------+------------+------+---------+-------+----------------------------+-----------------+

What's interesting is that the host name is cinder. In the Mitaka install guide, the host name is block1@lvm. I am not sure why mine is different, or whether it is even relevant, but I found it interesting and perhaps a clue to my problem.

This leads me to believe that the Cinder node and the Controller node are able to "see", or communicate with, each other. I also believe I have configured LVM correctly on the Cinder node. Just in case, here is the filter section from the lvm.conf file:

filter = [ "a/sda/", "a ...

With all that said, I think this is either a partitioning/hard drive formatting problem, or a rabbitmq (messaging service) problem. I did install rabbitmq-server on the Cinder node, which I know is not how the guide sets things up, meaning it may be wrong. What I am going to do now is remove rabbitmq-server from the Cinder node. The problem I believe I will run into is that the Cinder node and the Controller node will no longer "see" each other. If that is the case, could there be a problem with the conf files on any of the 3 nodes I currently have running? The 3 nodes currently running are the Controller, Compute, and Cinder nodes.

Let me know what you all think. If you see a problem with my conf files, please tell me. The last paragraph explains my thinking and the current state of the project. If you see a flaw in my logic, or think there might be a better way to tackle the problem, I'm all ears!

Thanks, everyone!

2 Answers:

Answer 0 (score: 0)

First, check the output of the vgs command. If you installed OpenStack via packstack (as I did), the default cinder-volumes size is only about 20 GB. You can confirm this in the packstack answer file, which contains a line like the one below, or inspect the volume group size directly.

CONFIG_CINDER_VOLUMES_SIZE=20G
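
To inspect the volume group directly, run something like this (a sketch; cinder-volumes is the group name used in the configs above, and the 20.60g figures in the sample output are illustrative):

# Show the size and free space of the cinder-volumes volume group
sudo vgs cinder-volumes
#   VG             #PV #LV #SN Attr   VSize  VFree
#   cinder-volumes   1   0   0 wz--n- 20.60g 20.60g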

If you want to extend the size of this volume group, use this link:
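
In case the link is unavailable, the usual LVM steps look roughly like this (a sketch assuming a hypothetical spare disk /dev/sdc; substitute your actual device):

# Turn the spare disk into an LVM physical volume
sudo pvcreate /dev/sdc
# Add it to the cinder-volumes volume group
sudo vgextend cinder-volumes /dev/sdc
# Verify the new size
sudo vgs cinder-volumes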

Hope this solves your problem.

Answer 1 (score: 0)

You have placed the enabled_backends key in the wrong section. It must be defined in the [DEFAULT] section, on the controller as well as on the storage node.
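
Concretely, the storage node's cinder.conf should look something like this (a sketch reusing the values already shown in the question; the point is to move enabled_backends and glance_api_servers out of [keystone_authtoken] and into [DEFAULT]):

[DEFAULT]
# ... existing options from the question ...
enabled_backends = lvm
glance_api_servers = http://controller:9292

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm

Then restart the volume service (on Ubuntu 14.04):

sudo service cinder-volume restart

Once enabled_backends is actually read from [DEFAULT], cinder service-list should report the volume service host as cinder@lvm rather than plain cinder, which matches the block1@lvm naming you saw in the install guide.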