How to attach a Cloud Block Storage volume to an OnMetal server using pyrax?

Date: 2016-02-08 19:45:09

Tags: rackspace-cloud rackspace pyrax

I would like to automate attaching a Cloud Block Storage volume to an OnMetal server running CentOS 7 by writing a Python script that uses the pyrax Python module. Do you know how to do this?

1 Answer:

Answer 0 (score: 1)

Attaching a Cloud Block Storage volume to an OnMetal server is a bit more complicated than attaching it to a normal Rackspace virtual server. When you try to attach a Cloud Block Storage volume to an OnMetal server in the Rackspace web interface, the Cloud Control Panel, you will notice the following note:

Note: When attaching a volume to an OnMetal server, you must log in to the OnMetal server to set the initiator name, discover the targets and then connect to the target.

So you can attach the volume in the web interface, but additionally you need to log in to the OnMetal server and run a few commands. The actual commands can be copy-and-pasted from the web interface into a terminal on the OnMetal server.

Also, before detaching the volume, you need to run a command on the OnMetal server.

But the web interface is actually not needed. It can all be done with the Python module pyrax.

First install the RPM package iscsi-initiator-utils on the OnMetal server:

[root@server-01 ~]# yum -y install iscsi-initiator-utils

Assuming the volume_id and server_id are known, the following Python code first attaches the volume and then detaches it. Unfortunately, the mountpoint argument for attach_to_instance() does not work for OnMetal servers, so we need to run the command lsblk -n -d before and after attaching the volume. By comparing the two outputs we can then deduce the device name of the newly attached volume. (Deducing the device name is not handled by the Python code below; see the sketch after the sample run.)

#!/usr/bin/python
# Disclaimer: Use the script at your own risk!
import json
import os
import paramiko
import pyrax

# Replace server_id and volume_id
# with your settings
server_id = "cbdcb7e3-5231-40ad-bba6-45aaeabf0a8d"
volume_id = "35abb4ba-caee-4cae-ada3-a16f6fa2ab50"
# Just to demonstrate that the mountpoint argument for
# attach_to_instance() is not working for OnMetal servers
disk_device = "/dev/xvdd"

def run_ssh_commands(ssh_client, remote_commands):
    for remote_command in remote_commands:
        stdin, stdout, stderr = ssh_client.exec_command(remote_command)
        print("")
        print("command: " + remote_command)
        for line in stdout.read().splitlines():
            print(" stdout: " + line)
        exit_status = stdout.channel.recv_exit_status()
        if exit_status != 0:
            raise RuntimeError("The command :\n{}\n"
                               "exited with exit status: {}\n"
                               "stderr: {}".format(remote_command,
                                                   exit_status,
                                                   stderr.read()))

pyrax.set_setting("identity_type", "rackspace")
pyrax.set_default_region('IAD')
creds_file = os.path.expanduser("~/.rackspace_cloud_credentials")
pyrax.set_credential_file(creds_file)
server = pyrax.cloudservers.servers.get(server_id)
vol = pyrax.cloud_blockstorage.find(id=volume_id)
vol.attach_to_instance(server, mountpoint=disk_device)
pyrax.utils.wait_until(vol, "status", "in-use", interval=3, attempts=0,
                       verbose=True)

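# Connect to the OnMetal server as root over SSH, authenticating
# via the local SSH agent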
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(server.accessIPv4, username='root', allow_agent=True)

# The new metadata is only available if we get() the server once more
server = pyrax.cloudservers.servers.get(server_id)

metadata = server.metadata["volumes_" + volume_id]
parsed_json = json.loads(metadata)
target_iqn = parsed_json["target_iqn"]
target_portal = parsed_json["target_portal"]
initiator_name = parsed_json["initiator_name"]

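# Run the same commands the Cloud Control Panel displays: set the
# initiator name, discover the targets, log in to the target and,
# to demonstrate detaching as well, log out again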
run_ssh_commands(ssh_client, [
    "lsblk -n -d",
    "echo InitiatorName={} > /etc/iscsi/initiatorname.iscsi".format(initiator_name),
    "iscsiadm -m discovery --type sendtargets --portal {}".format(target_portal),
    "iscsiadm -m node --targetname={} --portal {} --login".format(target_iqn, target_portal),
    "lsblk -n -d",
    "iscsiadm -m node --targetname={} --portal {} --logout".format(target_iqn, target_portal),
    "lsblk -n -d"
])

vol.detach()
pyrax.utils.wait_until(vol, "status", "available", interval=3, attempts=0,
                                    verbose=True)
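
For reference, the server metadata entry volumes_<volume_id> read above is a JSON string. Reconstructed from the values visible in the sample run below (the exact set of keys is an assumption on my part), it looks roughly like this:

import json

# Hypothetical reconstruction of the "volumes_<volume_id>" metadata value,
# based on the values in the sample run below; the real entry may contain
# additional keys.
metadata = ('{"target_iqn": "iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50", '
            '"target_portal": "10.190.142.116:3260", '
            '"initiator_name": "iqn.2008-10.org.openstack:a24b6f80-cf02-48fc-9a25-ccc3ed3fb918"}')

parsed_json = json.loads(metadata)
print(parsed_json["target_portal"])  # prints: 10.190.142.116:3260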

Running the Python code looks like this:

user@ubuntu:~$ python attach.py 2> /dev/null
Current value of status: attaching (elapsed:  1.0 seconds)
Current value of status: in-use (elapsed:  4.9 seconds)

command: lsblk -n -d
 stdout: sda    8:0    0 29.8G  0 disk

command: echo InitiatorName=iqn.2008-10.org.openstack:a24b6f80-cf02-48fc-9a25-ccc3ed3fb918 > /etc/iscsi/initiatorname.iscsi

command: iscsiadm -m discovery --type sendtargets --portal 10.190.142.116:3260
 stdout: 10.190.142.116:3260,1 iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50
 stdout: 10.69.193.1:3260,1 iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50

command: iscsiadm -m node --targetname=iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50 --portal 10.190.142.116:3260 --login
 stdout: Logging in to [iface: default, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260] (multiple)
 stdout: Login to [iface: default, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260] successful.

command: lsblk -n -d
 stdout: sda    8:0    0 29.8G  0 disk
 stdout: sdb    8:16   0   50G  0 disk

command: iscsiadm -m node --targetname=iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50 --portal 10.190.142.116:3260 --logout
 stdout: Logging out of session [sid: 5, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260]
 stdout: Logout of [sid: 5, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260] successful.

command: lsblk -n -d
 stdout: sda    8:0    0 29.8G  0 disk
Current value of status: detaching (elapsed:  0.8 seconds)
Current value of status: available (elapsed:  4.7 seconds)
user@ubuntu:~$
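
As mentioned above, the script does not deduce the device name of the newly attached volume. A minimal sketch of one way to automate that, by diffing the lsblk -n -d output before and after the iSCSI login (my addition, reusing the paramiko ssh_client from the script; list_block_devices is a hypothetical helper):

def list_block_devices(ssh_client):
    # Return the set of block device names reported by "lsblk -n -d",
    # e.g. set(["sda", "sdb"])
    stdin, stdout, stderr = ssh_client.exec_command("lsblk -n -d -o NAME")
    return set(stdout.read().decode().split())

before = list_block_devices(ssh_client)
# ... run the iscsiadm discovery and login commands here ...
after = list_block_devices(ssh_client)

new_devices = after - before
if len(new_devices) == 1:
    print("the volume was attached as: /dev/" + new_devices.pop())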

Just one additional note:

Although it is not mentioned in the official Rackspace documentation

https://support.rackspace.com/how-to/attach-a-cloud-block-storage-volume-to-an-onmetal-server/

Rackspace Managed Infrastructure Support, in a forum post from August 5, 2015, also recommends running

iscsiadm -m node -T $TARGET_IQN -p $TARGET_PORTAL --op update -n node.startup -v automatic

to make the connection persistent, so that the iSCSI session is automatically re-established at boot.
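
In the script above, that could be done by appending the command to the list passed to run_ssh_commands(), right after the --login step, along these lines (a sketch, spelled with the script's --targetname/--portal flag style):

run_ssh_commands(ssh_client, [
    # Sketch: update the node record after a successful login, so that the
    # iSCSI session is restored automatically at boot
    "iscsiadm -m node --targetname={} --portal {} "
    "--op update -n node.startup -v automatic".format(target_iqn, target_portal),
])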

UPDATE

Regarding deducing the new device name: Major Hayden writes in a blog post that

[root@server-01 ~]# ls /dev/disk/by-path/

can be used to find the path of the new device. If you want to resolve any symlinks, I guess this would work:

[root@server-01 ~]# find -L /dev/disk/by-path -maxdepth 1 -mindepth 1 -exec realpath {} \;
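
Within the Python script, the same lookup could be run over the existing SSH connection, for example (a sketch; resolve_by_path_devices is my addition, not part of the answer's original script):

def resolve_by_path_devices(ssh_client):
    # List the /dev/disk/by-path entries and resolve every symlink to its
    # real device node, mirroring the find/realpath one-liner above
    command = ("find -L /dev/disk/by-path -maxdepth 1 -mindepth 1 "
               "-exec realpath {} \\;")
    stdin, stdout, stderr = ssh_client.exec_command(command)
    return stdout.read().decode().splitlines()

print(resolve_by_path_devices(ssh_client))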