I am trying to build a table scraper with BeautifulSoup. I wrote this Python code:
import urllib2
from bs4 import BeautifulSoup

url = "http://dofollow.netsons.org/table1.htm"  # change to whatever your url is
page = urllib2.urlopen(url).read()
soup = BeautifulSoup(page)

for i in soup.find_all('form'):
    print i.attrs['class']
I need to scrape Nome, Cognome, and Email.
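(For context: the snippet above is Python 2, where urllib2 lives; in Python 3 the equivalent import is urllib.request. A minimal Python 3 sketch of the same parsing step, using an inline stand-in page since the original URL is dead, and passing the parser explicitly to avoid a BeautifulSoup warning:)

```python
from bs4 import BeautifulSoup

# Inline stand-in for the downloaded page (assumption: the real page is gone).
page = "<html><body><form class='login'></form></body></html>"
soup = BeautifulSoup(page, "html.parser")  # explicit parser, Python 3 style

for tag in soup.find_all("form"):
    print(tag.attrs.get("class"))  # 'class' is multi-valued, so this is a list
```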
Answer 0 (score: 29)
Loop over the table rows (tr tags) and get the text of the cells (td tags):
for tr in soup.find_all('tr')[2:]:
    tds = tr.find_all('td')
    print "Nome: %s, Cognome: %s, Email: %s" % \
        (tds[0].text, tds[1].text, tds[2].text)
This prints:
Nome: Massimo, Cognome: Allegri, Email: Allegri.Massimo@alitalia.it
Nome: Alessandra, Cognome: Anastasia, Email: Anastasia.Alessandra@alitalia.it
...
FYI, the [2:] slice here skips the two header rows.
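As a self-contained illustration of that loop (the inline HTML below is a made-up stand-in for the dead table1.htm, reconstructed from the sample output above):

```python
from bs4 import BeautifulSoup

# Made-up table mimicking the original page: two header rows, then data rows.
html = """
<table>
  <tr><td>People</td></tr>
  <tr><td>Nome</td><td>Cognome</td><td>Email</td></tr>
  <tr><td>Massimo</td><td>Allegri</td><td>Allegri.Massimo@alitalia.it</td></tr>
  <tr><td>Alessandra</td><td>Anastasia</td><td>Anastasia.Alessandra@alitalia.it</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

for tr in soup.find_all('tr')[2:]:  # [2:] skips the two header rows
    tds = tr.find_all('td')
    print("Nome: %s, Cognome: %s, Email: %s" %
          (tds[0].text, tds[1].text, tds[2].text))
```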
UPD, here is how to save the results to a txt file:
with open('output.txt', 'w') as f:
    for tr in soup.find_all('tr')[2:]:
        tds = tr.find_all('td')
        f.write("Nome: %s, Cognome: %s, Email: %s\n" % \
            (tds[0].text, tds[1].text, tds[2].text))
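If you want structured output rather than a formatted txt file, the standard-library csv module is a natural alternative. A Python 3 sketch, with the row data hard-coded so it runs on its own (in practice the tuples would come from the tr/td loop above):

```python
import csv

# (Nome, Cognome, Email) tuples -- hard-coded here for illustration.
rows = [
    ("Massimo", "Allegri", "Allegri.Massimo@alitalia.it"),
    ("Alessandra", "Anastasia", "Anastasia.Alessandra@alitalia.it"),
]

with open('output.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(["Nome", "Cognome", "Email"])  # header row
    writer.writerows(rows)                         # one line per tuple
```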
Answer 1 (score: 0)
# Library
from bs4 import BeautifulSoup

# Empty list
tabs = []

# File handling
with open('/home/rakesh/showHW/content.html', 'r') as fp:
    html_content = fp.read()
    table_doc = BeautifulSoup(html_content, 'html.parser')

# Parsing the html content
for tr in table_doc.table.find_all('tr'):
    tds = tr.find_all('td')
    tabs.append({
        'Nome': tds[0].string,
        'Cognome': tds[1].string,
        'Email': tds[2].string
    })

print(tabs)
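One caveat with the loop above: header rows usually contain th cells rather than td, so indexing tds[0] would raise an IndexError on them. A defensive variant of the same idea, using inline HTML as a stand-in for the local file:

```python
from bs4 import BeautifulSoup

# Inline stand-in for content.html: a th header row followed by data.
html = """
<table>
  <tr><th>Nome</th><th>Cognome</th><th>Email</th></tr>
  <tr><td>Massimo</td><td>Allegri</td><td>Allegri.Massimo@alitalia.it</td></tr>
</table>
"""
tabs = []
for tr in BeautifulSoup(html, "html.parser").table.find_all('tr'):
    tds = tr.find_all('td')
    if len(tds) < 3:  # skip header/malformed rows that have no td cells
        continue
    tabs.append({'Nome': tds[0].string,
                 'Cognome': tds[1].string,
                 'Email': tds[2].string})

print(tabs)
```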
Answer 2 (score: 0)
The original link posted by the OP is dead... but you can scrape table data with gazpacho as follows:
Step 1 - import Soup and download the html:
from gazpacho import Soup
url = "https://en.wikipedia.org/wiki/List_of_multiple_Olympic_gold_medalists"
soup = Soup.get(url)
Step 2 - find the table and the table rows:
table = soup.find("table", {"class": "wikitable sortable"}, mode="first")
trs = table.find("tr")[1:]
Step 3 - parse each row with a function that extracts the desired data:
def parse_tr(tr):
    return {
        "name": tr.find("td")[0].text,
        "country": tr.find("td")[1].text,
        "medals": int(tr.find("td")[-1].text)
    }

data = [parse_tr(tr) for tr in trs]
data = sorted(data, key=lambda x: x["medals"], reverse=True)
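The same three steps can be sketched with plain BeautifulSoup for anyone not using gazpacho. The inline table below is a tiny made-up stand-in for the Wikipedia page (the names and medal counts are hypothetical, not real data):

```python
from bs4 import BeautifulSoup

# Tiny stand-in for the wikitable (hypothetical data).
html = """
<table class="wikitable sortable">
  <tr><th>Name</th><th>Country</th><th>Medals</th></tr>
  <tr><td>Athlete A</td><td>USA</td><td>9</td></tr>
  <tr><td>Athlete B</td><td>URS</td><td>18</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

# Steps 1-2: find the table, then skip the single header row.
table = soup.find("table", {"class": "wikitable"})
trs = table.find_all("tr")[1:]

# Step 3: parse each row into a dict and sort by medal count.
def parse_tr(tr):
    tds = tr.find_all("td")
    return {"name": tds[0].text,
            "country": tds[1].text,
            "medals": int(tds[-1].text)}

data = sorted((parse_tr(tr) for tr in trs),
              key=lambda x: x["medals"], reverse=True)
print(data)
```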