I'm trying to scrape lists from Wikipedia pages that follow a particular format (for example, pages like https://de.wikipedia.org/wiki/Liste_der_Bisch%C3%B6fe_von_Sk%C3%A1lholt). I'm having trouble matching up each 'li' with its 'a href'.
For example, on the page above, the ninth bullet has the text:
1238–1268: Sigvarður Þéttmarsson (Norweger)
with the HTML:
<li>1238–1268: <a href="/wiki/Sigvar%C3%B0ur_%C3%9E%C3%A9ttmarsson" title="Sigvarður Þéttmarsson">Sigvarður Þéttmarsson</a> (Norweger)</li>
I want to turn it into a dictionary entry:
'1238–1268: Sigvarður Þéttmarsson (Norweger)': '/wiki/Sigvar%C3%B0ur_%C3%9E%C3%A9ttmarsson'
[the full text of both parts: the 'li' and its 'a' child]: [the href of the 'a' child]
I know I could use lxml/etree to do this, but I don't know exactly how. Some recombination of the lines below?
from lxml import etree

tree = etree.HTML(html)
bishops = tree.cssselect('li')                     # all <li> elements
text = [li.text for li in bishops]                 # only picks up the text before the <a>
links = tree.cssselect('li a')
hrefs = [bishop.get('href') for bishop in links]
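(A minimal sketch of one way these pieces could fit together, assuming html already holds the page source: ''.join(li.itertext()) returns the full text of each <li>, including the <a> text, and XPath is used here in place of the cssselect calls above.)

from lxml import etree

tree = etree.HTML(html)
bishops_with_links = {}
for li in tree.xpath('//li'):
    # itertext() walks every text node inside the <li>, including the <a> text
    full_text = ''.join(li.itertext())
    anchors = li.xpath('.//a[@href]')
    bishops_with_links[full_text] = anchors[0].get('href') if anchors else ''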
Answer 0 (score: 0):
Update: I've figured this out using BeautifulSoup, as follows:
from bs4 import BeautifulSoup

html = driver.page_source  # page source already loaded in a Selenium driver

def get_bishops_with_links(html):
    soup = BeautifulSoup(html, 'html.parser')
    bishops_with_links = {}
    bishops = soup.select('li')
    for bishop in bishops:
        if bishop.findChildren('a'):
            # full bullet text -> absolute URL of the bullet's first link
            bishops_with_links[bishop.text] = 'https://de.wikipedia.org' + bishop.a.get('href')
        else:
            bishops_with_links[bishop.text] = ''
    return bishops_with_links

bishops_with_links = get_bishops_with_links(html)
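One thing to watch with soup.select('li'): Wikipedia's navigation and footer also contain list items, so they end up in the dictionary as well. Assuming the page keeps MediaWiki's usual #mw-content-text wrapper around the article body (an assumption about the page layout, not something verified here), the selection can be scoped to the content area:

# Assumed refinement: limit the scan to the article body so navigation/footer
# <li> elements stay out of the dictionary (relies on the #mw-content-text id).
bishops = soup.select('#mw-content-text li')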