Web scraping tables and data using Python BeautifulSoup

Date: 2019-12-18 10:45:59

Tags: python html python-3.x web-scraping beautifulsoup

I have scraped this table's data from all pages of the website into a dictionary using Python and BeautifulSoup, as shown in the code below.

However, I am also trying to scrape each company's own page into the dictionary as well.

import requests
from bs4 import BeautifulSoup
from pprint import pprint

company_data = []

for i in range(1, 3):
    page = requests.get(f'https://web.archive.org/web/20121007172955/http://www.nga.gov/collection/anZ1.htm{i}?')
    soup = BeautifulSoup(page.text, "lxml")

    # One container div per company row in the listing table
    rows = soup.select('div.accordion_heading.panel-group.s_list_table')

    for row in rows:
        company_info = {}
        company_info['Name'] = row.select_one('div.col_1 a').text.strip()
        company_data.append(company_info)  # collect each company's record

pprint(company_data)
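To also reach each company's own page, one approach is a two-step scrape: read the `href` from each row's link, resolve it to an absolute URL, and fetch that URL in a second request. The parsing half can be sketched offline; the inline HTML, class names, and base URL below are illustrative assumptions, not taken from the real site.

```python
from bs4 import BeautifulSoup
from urllib.parse import urljoin

# Illustrative listing markup; the real site's class names may differ.
LISTING_HTML = """
<div class="accordion_heading panel-group s_list_table">
  <div class="col_1"><a href="/company/acme.htm">Acme Ltd</a></div>
</div>
<div class="accordion_heading panel-group s_list_table">
  <div class="col_1"><a href="/company/globex.htm">Globex Corp</a></div>
</div>
"""

BASE_URL = "http://www.example.com/collection/"  # hypothetical base URL

def extract_companies(html, base_url):
    """Return a dict per company with its name and absolute detail-page URL."""
    soup = BeautifulSoup(html, "html.parser")
    companies = []
    for row in soup.select('div.accordion_heading.panel-group.s_list_table'):
        link = row.select_one('div.col_1 a')
        companies.append({
            'Name': link.text.strip(),
            # urljoin turns the relative href into a fetchable absolute URL
            'DetailURL': urljoin(base_url, link['href']),
        })
    return companies

print(extract_companies(LISTING_HTML, BASE_URL))
```

Each `DetailURL` could then be fetched with `requests.get()` and parsed the same way as the listing page.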

1 answer:

Answer 0: (score: 1)

I have only done it for the 2M company, but I believe it will help.

import requests
from bs4 import BeautifulSoup

res = requests.get("https://web.archive.org/web/20121007172955/http://www.nga.gov/collection/anZ1.htm").text
soup = BeautifulSoup(res, 'html.parser')

company_info = {}

# Prefer div.text-desc-members; fall back to div.list-sub when it is absent
profile = soup.select('div.text-desc-members')
if profile:
    company_info['Profile'] = profile[0].text.strip()
else:
    company_info['Profile'] = soup.select('div.list-sub')[0].text.strip()

# Links to the company's ACOP files from the striped table
company_info['ACOP'] = [item['href'] for item in soup.select(".table.table-striped a.files")]

# Pair up each question with its answer from the reports list
company_info['QuestionAnswer'] = [
    "Question:" + q.text.strip() + " Answer:" + a.text.strip()
    for q, a in zip(soup.select("div.list-reports .m_question"),
                    soup.select("div.list-reports .m_answer"))
]

print(company_info)
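The fallback-selector pattern used for `Profile` above can be exercised against small inline snippets without hitting the network. The HTML fragments here are illustrative assumptions; the real pages' markup may differ.

```python
from bs4 import BeautifulSoup

def extract_profile(html):
    """Prefer div.text-desc-members; fall back to div.list-sub when it is absent."""
    soup = BeautifulSoup(html, 'html.parser')
    # select_one returns None when nothing matches, so `or` chains the fallback
    node = soup.select_one('div.text-desc-members') or soup.select_one('div.list-sub')
    return node.text.strip() if node else None

# Illustrative snippets, not taken from the real site
with_desc = '<div class="text-desc-members">Full profile text</div>'
with_sub = '<div class="list-sub">Short listing text</div>'

print(extract_profile(with_desc))  # Full profile text
print(extract_profile(with_sub))   # Short listing text
```

Writing the fallback as a small function keeps the per-company scraping loop readable and makes the selector logic easy to test on its own.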