Is it possible to scrape data from the sub-links in a Wikipedia article?
import bs4 as bs
import urllib.request
import re

sauce = urllib.request.urlopen('https://en.wikipedia.org/wiki/Greenhouse_gas').read()
soup = bs.BeautifulSoup(sauce, 'lxml')
links = soup.find("div", {"id": "bodyContent"}).findAll("a", href=re.compile("(/wiki/)+([A-Za-z0-9_:()])+"))
for link in links:
    print(link['href'])
    webpage = urllib.request.urlopen(link['href'])
    soup = bs.BeautifulSoup(webpage, 'lxml')
Answer 0 (score: 0)
Your links list contains only the tail ends of the URLs you want to scrape. After running your code I received ValueError: unknown url type: '/wiki/Wikipedia:Pending_changes'. To fix your issue, try this:
beg_link = 'http://www.wikipedia.com'
for link in links:
    full_link = beg_link + link['href']
    print(full_link)
    webpage = urllib.request.urlopen(full_link)
    soup = bs.BeautifulSoup(webpage, 'lxml')
This prints:
http://www.wikipedia.com/wiki/Wikipedia:Pending_changes
http://www.wikipedia.com/wiki/GHG_(disambiguation)
http://www.wikipedia.com/wiki/File:Greenhouse_Effect.svg
...
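As an aside, prefixing a hard-coded beg_link only works for same-domain relative paths. The standard-library urllib.parse.urljoin handles relative and absolute hrefs uniformly; a minimal sketch, assuming en.wikipedia.org as the base (matching the page actually fetched in the question, rather than www.wikipedia.com):

```python
from urllib.parse import urljoin

base = 'https://en.wikipedia.org/wiki/Greenhouse_gas'

# A relative path is resolved against the base page's scheme and host.
print(urljoin(base, '/wiki/Carbon_dioxide'))
# → https://en.wikipedia.org/wiki/Carbon_dioxide

# An already-absolute URL passes through unchanged.
print(urljoin(base, 'https://example.org/page'))
# → https://example.org/page
```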
Answer 1 (score: 0)
Yes, it is possible to follow links and retrieve more links. To do that you can use a recursive function (a function that calls itself). You should also put a limit on how many links you retrieve, otherwise your program will never stop, and you should check whether you have already visited a link:
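Such a recursive crawler might be sketched as follows. This is an illustrative sketch, not the answerer's original code: the crawl function name, the depth limit, and the injectable fetch parameter are my assumptions, and 'html.parser' is used instead of 'lxml' to avoid the extra dependency.

```python
import re
import urllib.parse
import urllib.request

import bs4 as bs

BASE = 'https://en.wikipedia.org'
WIKI_LINK = re.compile(r'^/wiki/[A-Za-z0-9_:()]+$')

def default_fetch(path):
    # urljoin turns a relative '/wiki/...' path into a full URL before opening it.
    return urllib.request.urlopen(urllib.parse.urljoin(BASE, path)).read()

def crawl(path, visited, depth, fetch=default_fetch):
    """Recursively collect article paths, descending at most `depth` levels
    and skipping pages already in `visited` so the recursion terminates."""
    if depth == 0 or path in visited:
        return
    visited.add(path)
    soup = bs.BeautifulSoup(fetch(path), 'html.parser')
    body = soup.find('div', {'id': 'bodyContent'})
    if body is None:
        return
    for a in body.findAll('a', href=WIKI_LINK):
        crawl(a['href'], visited, depth - 1, fetch)
```

Calling visited = set(); crawl('/wiki/Greenhouse_gas', visited, depth=2) would fetch the article and each article it links to, once each; passing a fake fetch function lets you test the traversal logic without touching the network.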