Problem parsing a table from HTML with BeautifulSoup and saving it as CSV

Asked: 2018-12-01 10:28:16

Tags: python web-scraping beautifulsoup

import csv
import requests
from bs4 import BeautifulSoup

r = requests.get('https://pqt.cbp.gov/report/YYZ_1/12-01-2017')
soup = BeautifulSoup(r)
table = soup.find('table', attrs={ "class" : "table-horizontal-line"})
headers = [header.text for header in table.find_all('th')]
rows = []
for row in table.find_all('tr'):
    rows.append([val.text.encode('utf8') for val in row.find_all('td')])

with open('output_file.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerow(headers)
    writer.writerows(row for row in rows if row)

I am trying to parse all of the table data on this specific web page: https://pqt.cbp.gov/report/YYZ_1/12-01-2017


The error occurs on the line soup = BeautifulSoup(r): I get TypeError: object of type 'Response' has no len(). I am also not sure whether the rest of my logic is correct. Please help me scrape the table data.
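
For context, the TypeError comes from BeautifulSoup expecting markup (a string or bytes) rather than a requests Response object. A minimal sketch that makes the distinction visible (html.parser is the standard-library parser, used here only as an example):

import requests
from bs4 import BeautifulSoup

r = requests.get('https://pqt.cbp.gov/report/YYZ_1/12-01-2017')
print(type(r))  # <class 'requests.models.Response'> -- not markup, so len(r) fails
soup = BeautifulSoup(r.text, 'html.parser')  # pass the response body instead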

3 Answers:

Answer 0 (score: 1)

I would do it like this:

import pandas as pd
result = pd.read_html("https://pqt.cbp.gov/report/YYZ_1/12-01-2017")
df = result[0]
# df = df.drop(labels='Unnamed: 8', axis=1)
df.to_csv(r'C:\Users\User\Desktop\Data.csv', sep=',', encoding='utf-8', index=False)
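
As a usage note, read_html returns one DataFrame per <table> found on the page (it needs an HTML parser such as lxml or html5lib installed), so it can be worth checking what came back before writing anything to disk. A small sketch, with output_file.csv as a placeholder name:

import pandas as pd

url = "https://pqt.cbp.gov/report/YYZ_1/12-01-2017"
tables = pd.read_html(url)   # list with one DataFrame per <table> on the page
print(len(tables))           # how many tables were found
print(tables[0].head())      # preview the first one before saving
tables[0].to_csv("output_file.csv", index=False)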

Answer 1 (score: 0)

Try:

r = requests.get('https://pqt.cbp.gov/report/YYZ_1/12-01-2017')
soup = BeautifulSoup(r.content, 'html.parser')  # pass the response body, not the Response object

Answer 2 (score: 0)

The variable r is of type Response, not str, so use r.text or r.content. Also, there is no table with the class table-horizontal-line; did you mean results?

soup = BeautifulSoup(r.text, 'html.parser')
table = soup.find('table', attrs={"class": "results"})
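
Building on that, a minimal end-to-end sketch, assuming Python 3 (the original 'wb' mode and .encode('utf8') are Python 2 idioms; in Python 3 the csv module expects a text-mode file) and assuming the table class is results as suggested above:

import csv
import requests
from bs4 import BeautifulSoup

r = requests.get('https://pqt.cbp.gov/report/YYZ_1/12-01-2017')
soup = BeautifulSoup(r.text, 'html.parser')

table = soup.find('table', attrs={"class": "results"})
headers = [th.get_text(strip=True) for th in table.find_all('th')]
rows = [[td.get_text(strip=True) for td in tr.find_all('td')]
        for tr in table.find_all('tr')]

# newline='' prevents blank lines on Windows; rows with no <td> cells (the header row) are skipped
with open('output_file.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(headers)
    writer.writerows(row for row in rows if row)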