How to get only selected columns from the whole DataFrame with pandas read_html

Asked: 2018-08-29 15:52:15

Tags: python pandas

I am trying to extract specific columns of the HTML data from an HTML page.

  

1) HTML data format

            VM Name           User Name        Image Name                           Network  VCPUS  Memory(GB)  Disk(GB) Tenant     Region      KVM Host Power State                          URL               Created
0      dbsw-powerbi  anokhe@ezy.com           unknown   {u'VLAN181': [u'192.168.57.91']}      4          16       100    APP  DBS-AP-IN  dbs-appkvm03          On  https://compute.ezy.com  2018-08-02T10:30:07Z
1           pciedip  anokhe@ezy.com     dbsVDI-RHEL65   {u'VLAN181': [u'192.168.57.37']}      4          32       200    APP  DBS-AP-IN  dbs-appkvm01          On  https://compute.ezy.com  2018-04-18T06:39:38Z
2  dbs-spbdatasync1  anokhe@ezy.com    dbsVDI-RHEL510  {u'VLAN181': [u'192.168.57.156']}      1           8        50    APP  DBS-AP-IN     dbs-kvm13          On  https://compute.ezy.com  2018-04-05T09:51:29Z
3      dbsw-russian  anokhe@ezy.com  dbsVDI-WIN764-V1  {u'VLAN181': [u'192.168.57.216']}      1           4       100    APP  DBS-AP-IN  dbs-appkvm01          On  https://compute.ezy.com  2018-04-02T06:25:25Z
4   dbs-spbdatasync  anokhe@ezy.com    dbsVDI-RHEL510  {u'VLAN181': [u'192.168.57.233']}      1           8        50    APP  DBS-AP-IN     dbs-kvm13          On  https://compute.ezy.com  2018-04-02T05:03:03Z

I can get the DataFrame with pandas read_html, but I can't figure out how to get specific columns out of it. Of the 13 columns, I need to select only ['VM Name', 'User Name', 'Network', 'Region'].

  

2) Code snippet

from __future__ import print_function
from signal import signal, SIGPIPE, SIG_DFL
signal(SIGPIPE,SIG_DFL)
import pandas as pd
##### Python pandas, widen output display to see more columns. ####
pd.set_option('display.height', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
pd.set_option('display.width', None)
pd.set_option('expand_frame_repr', True)

# print(pd.read_excel('ssd.xlsx'))
# Data = pd.read_html('http://openstacksearch/vm_list.html', header=0, flavor='bs4', index_col=['VM Name', 'User Name', 'Network', 'Region'])
Data = pd.read_html('http://openstacksearch/vm_list.html', header=0, flavor='bs4')
print(Data[0].head())

2 Answers:

Answer 0 (Score: 1)

To select a subset of the columns you can use:

Data = pd.read_html('http://openstacksearch/vm_list.html', header=0, flavor='bs4')[0]
Data = Data[['VM Name', 'User Name', 'Network', 'Region']]
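
(Note the [0]: pd.read_html returns a list of DataFrames, one per <table> found on the page, so you take the first table before applying the column selection.)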

Answer 1 (Score: 1)

I got the solution by first selecting the DataFrame from the list returned by read_html, and then picking the required columns with list-based column selection. Thanks to Adrew for the suggested solution.

So the code looks like below… it may help someone.

import pandas as pd
##### Python pandas, widen output display to see more columns. ####
pd.set_option('display.height', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
pd.set_option('display.width', None)
pd.set_option('expand_frame_repr', True)
###### Data Extraction ##################
'''
pd.read_html returns you a list with one element and that 
element is the pandas dataframe, i.e.
Data = pd.read_html('url') will produce a list
Data[0]  Will return a pandas DataFrame
'''
Data = pd.read_html('http://openstacksearch/vm_list.html', header=0, flavor='bs4')[0]
Data1 = Data[['VM Name', 'User Name', 'Network', 'Region']]
print(Data1)
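
Since http://openstacksearch/vm_list.html is an internal URL that others can't reach, here is a minimal self-contained sketch of the same pattern that anyone can run. The inline HTML table below is a made-up two-row excerpt with a shortened column set, purely for illustration:

import pandas as pd
from io import StringIO

# A tiny stand-in for vm_list.html; the real page has 13 columns.
html = """
<table>
  <tr><th>VM Name</th><th>User Name</th><th>Network</th><th>Region</th><th>Memory(GB)</th></tr>
  <tr><td>dbsw-powerbi</td><td>anokhe@ezy.com</td><td>VLAN181</td><td>DBS-AP-IN</td><td>16</td></tr>
  <tr><td>pciedip</td><td>anokhe@ezy.com</td><td>VLAN181</td><td>DBS-AP-IN</td><td>32</td></tr>
</table>
"""

# read_html returns a list with one DataFrame per <table> element it finds,
# so take element [0] first, then select columns by passing a list of labels.
df = pd.read_html(StringIO(html), header=0)[0]
subset = df[['VM Name', 'User Name', 'Network', 'Region']]
print(subset)

The same [['col1', 'col2', ...]] selection works on the DataFrame read from the real URL, exactly as in the code above.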