So I'm trying to create a scraper that isolates the references section of a Wikipedia page and then scrapes the title and the first paragraph (or something similar) from each referenced webpage. At the moment I have it working to the point where it can isolate the references section, but I'm not sure how to "go into" the other links.

Here is my code so far:
def customScrape(e1, master):
    session = requests.Session()
    # selectWikiPage = input("Please enter the Wikipedia page you wish to scrape from")
    selectWikiPage = e1.get()
    if "wikipedia" in selectWikiPage:  # turn this into a re
        html = session.post(selectWikiPage)
        bsObj = BeautifulSoup(html.text, "html.parser")
        findReferences = bsObj.find('ol', {'class': 'references'})  # isolate references section of page
        href = BeautifulSoup(str(findReferences), "html.parser")
        links = [a["href"] for a in href.find_all("a", href=True)]
        for link in links:
            print("Link: " + link)
    else:
        print("Error: Please enter a valid Wikipedia URL")
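For reference, the link-extraction step used above can be exercised offline against a small hand-written HTML fragment (the fragment below is invented for illustration, not taken from an actual Wikipedia page):

```python
from bs4 import BeautifulSoup

# Invented fragment mimicking Wikipedia's references markup.
snippet = """
<ol class="references">
  <li><a href="https://example.com/a">Ref A</a></li>
  <li><a href="https://example.com/b">Ref B</a></li>
</ol>
"""

# Isolate the references list, then collect every href inside it.
refs = BeautifulSoup(snippet, "html.parser").find("ol", {"class": "references"})
links = [a["href"] for a in refs.find_all("a", href=True)]
print(links)  # -> ['https://example.com/a', 'https://example.com/b']
```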
Answer 0 (score: 4)
In your customScrape function, you can do this for each link:

ref_html = requests.get(link).text

to fetch the full text from link (you don't need a Session unless you want to persist cookies and other state between subsequent requests).
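A minimal sketch of that per-link fetch, with a timeout added defensively (the timeout value and the helper name are my assumptions, not part of the original answer):

```python
import requests

def fetch_reference(link, timeout=10):
    """Fetch one reference URL as text. A plain get() suffices here,
    since no cookies need to persist between requests."""
    return requests.get(link, timeout=timeout).text
```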
Then you can parse ref_html to find the title or the first heading or whatever else you like.
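For instance, pulling the &lt;title&gt; and the first &lt;h1&gt; out of an already-fetched HTML string (the sample string here is invented for illustration):

```python
from bs4 import BeautifulSoup

# Invented sample standing in for a fetched reference page.
ref_html = ("<html><head><title>Sample Page</title></head>"
            "<body><h1>Sample Heading</h1></body></html>")

soup = BeautifulSoup(ref_html, "html.parser")
# soup.title / soup.h1 return the first matching tag, or None.
title = soup.title.text if soup.title else ""
heading = soup.h1.text if soup.h1 else ""
print(title, heading)  # -> Sample Page Sample Heading
```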
Your function could look like this:
import requests, requests.exceptions
from bs4 import BeautifulSoup

def custom_scrape(wikipedia_url):
    wikipedia_html = requests.get(wikipedia_url).text
    refs = BeautifulSoup(wikipedia_html, 'html.parser').find('ol', {
        'class': 'references'
    })
    refs = refs.select('a[class]')
    for ref in refs:
        try:
            ref_html = requests.get(ref['href']).text
            title = heading = BeautifulSoup(ref_html, 'html.parser')
            title = title.select('title')
            title = title[0].text if title else ''
            heading = heading.select('h1')
            heading = heading[0].text if heading else ''
        except requests.exceptions.RequestException as e:
            print(ref['href'], e)  # some refs may contain invalid urls
            title = heading = ''
        yield title.strip(), heading.strip()  # strip whitespace
Then you can inspect the results:
for title, heading in custom_scrape('https://en.wikipedia.org/wiki/Stack_Overflow'):
    print(title, heading)