I am working on a web crawler that crawls internal links using only requests and bs4.
I have a rough working version below, but I am not sure how to correctly handle checking whether a link has already been crawled.
import re
import time
import requests
import argparse
from bs4 import BeautifulSoup

internal_links = set()

def crawler(new_link):
    html = requests.get(new_link).text
    soup = BeautifulSoup(html, "html.parser")
    for link in soup.find_all('a', attrs={'href': re.compile("^http://")}):
        if "href" in link.attrs:
            print(link)
            if link.attrs["href"] not in internal_links:
                new_link = link.attrs["href"]
                print(new_link)
                internal_links.add(new_link)
                print("All links found so far, ", internal_links)
                time.sleep(6)
                crawler(new_link)

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('url', help='Pass the website url you wish to crawl')
    args = parser.parse_args()
    url = args.url
    # Check full url has been passed otherwise requests will throw error later
    try:
        crawler(url)
    except:
        if url[0:4] != 'http':
            print('Please try again and pass the full url eg http://example.com')

if __name__ == '__main__':
    main()
Here are the last few lines of the output:
All links found so far, {'http://quotes.toscrape.com/tableful', 'http://quotes.toscrape.com', 'http://quotes.toscrape.com/js', 'http://quotes.toscrape.com/scroll', 'http://quotes.toscrape.com/login', 'http://books.toscrape.com', 'http://quotes.toscrape.com/'}
<a href="http://quotes.toscrape.com/search.aspx">ViewState</a>
http://quotes.toscrape.com/search.aspx
All links found so far, {'http://quotes.toscrape.com/tableful', 'http://quotes.toscrape.com', 'http://quotes.toscrape.com/js', 'http://quotes.toscrape.com/search.aspx', 'http://quotes.toscrape.com/scroll', 'http://quotes.toscrape.com/login', 'http://books.toscrape.com', 'http://quotes.toscrape.com/'}
<a href="http://quotes.toscrape.com/random">Random</a>
http://quotes.toscrape.com/random
All links found so far, {'http://quotes.toscrape.com/tableful', 'http://quotes.toscrape.com', 'http://quotes.toscrape.com/js', 'http://quotes.toscrape.com/search.aspx', 'http://quotes.toscrape.com/scroll', 'http://quotes.toscrape.com/random', 'http://quotes.toscrape.com/login', 'http://books.toscrape.com', 'http://quotes.toscrape.com/'}
So it works, but only up to a certain point, after which it never seems to follow any more links. I am fairly sure it is because of this line:

for link in soup.find_all('a', attrs={'href': re.compile("^http://")}):

That will only find links that start with http://, and on a lot of the internal pages the links do not have that prefix (see the urljoin sketch after the output below). But when I try it like this:

for link in soup.find_all('a'):

the program runs very briefly and then ends:
http://books.toscrape.com
{'href': 'http://books.toscrape.com'}
http://books.toscrape.com
All links found so far, {'http://books.toscrape.com'}
index.html
{'href': 'index.html'}
index.html
All links found so far, {'index.html', 'http://books.toscrape.com'}
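I assume relative links like index.html would need to be resolved against the URL of the page they were found on, e.g. with urllib.parse.urljoin from the standard library. A minimal sketch of what I mean (not in my code yet):

from urllib.parse import urljoin

page_url = "http://books.toscrape.com"
href = "index.html"  # a relative href as scraped above

# urljoin resolves a relative href against the page it appeared on;
# absolute hrefs pass through unchanged
print(urljoin(page_url, href))                           # http://books.toscrape.com/index.html
print(urljoin(page_url, "http://quotes.toscrape.com/"))  # http://quotes.toscrape.com/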
Answer (score: 1)
You can reduce
for link in soup.find_all('a', attrs={'href': re.compile("^http://")}):
    if "href" in link.attrs:
        print(link)
        if link.attrs["href"] not in internal_links:
            new_link = link.attrs["href"]
            print(new_link)
            internal_links.add(new_link)
to
links = {link['href'] for link in soup.select("a[href^='http:']")}
internal_links.update(links)
This uses just a single grab of the <a> tag elements whose href is qualified by the http protocol, and uses a set comprehension to ensure there are no duplicates. It then updates the existing set with any new links. I don't know enough Python to comment on the efficiency of using .update, but I believe it modifies the existing set rather than creating a new one. More ways of combining sets are listed here: How to join two sets in one line without using "|"
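As a quick sanity check of that claim about .update, a tiny example showing it mutates the set in place rather than creating a new one:

internal_links = {'http://books.toscrape.com'}
before = id(internal_links)
internal_links.update({'http://quotes.toscrape.com', 'http://books.toscrape.com'})
print(internal_links)                # both URLs; the duplicate collapsed away
print(id(internal_links) == before)  # True: same set object, modified in place

And tying this back to the original question of tracking previously crawled links: one common pattern (a sketch under my assumptions, not tested against the site) is to keep one set of URLs already fetched and one of URLs still to fetch, and to loop instead of recursing:

import time
import requests
from bs4 import BeautifulSoup

def crawl(start_url):
    visited = set()          # URLs already fetched
    to_visit = {start_url}   # URLs discovered but not yet fetched
    while to_visit:
        url = to_visit.pop()
        visited.add(url)
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        links = {link['href'] for link in soup.select("a[href^='http:']")}
        to_visit.update(links - visited)  # only queue links not fetched yet
        time.sleep(6)                     # keep the polite delay from the original
    return visited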