Speeding up an image web scraper

Date: 2018-03-16 13:38:51

Tags: python python-3.x web-scraping beautifulsoup

I've written my first web scraper, and (surprisingly enough) it does the job. I'm scraping a popular comic website for its images (there are over 900 of them), but the problem is that the scraper is way too slow.

For example, if I download a sample of 10 comics, it takes an average of 4-5 seconds per image (> 40 secs total for the sample). That's a bit too slow if you ask me, since each image is only approx. 80KB-800KB in size.

I've read that I could switch to lxml and do the scraping asynchronously, but that package is not compatible with Python 3.6.

I tried this:

pip3 install lxml

only to get this:

Could not find a version that satisfies the requirement python-lxml (from versions: )
No matching distribution found for python-lxml

So my question is: how can I speed up the scraper?

Maybe my scraping logic is to blame? Finally, is there a way to scrape only the relevant parts of a web page?
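(For that last question: bs4 ships a SoupStrainer class that restricts parsing to matching tags. A rough, untested sketch of how it might slot into the grab_image_src_url function below:)

from bs4 import BeautifulSoup as bs, SoupStrainer
import requests

def grab_image_src_url(link):
  req = requests.get(link)
  # Build only the <p> tags instead of parsing the whole document.
  only_p = SoupStrainer('p')
  soup = bs(req.text, 'html.parser', parse_only=only_p)
  for i in soup.find_all('p'):
    for img in i.find_all('img', src=True):
      return img['src']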

Here is the code. I've removed all the eye candy & input validation - full code here.

import re
import time
import requests
import itertools
from requests import get
from bs4 import BeautifulSoup as bs

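# Yield the first `num` links from the archive list.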
def generate_comic_link(array, num):
  for link in itertools.islice(array, 0, num):
    yield link

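# Fetch a comic page and return the src URL of the first <img> inside a <p> tag.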
def grab_image_src_url(link):
  req = requests.get(link)
  comic = req.text
  soup = bs(comic, 'html.parser')
  for i in soup.find_all('p'):
    for img in i.find_all('img', src=True):
      return img['src']

def download_image(url):
  # Save the image under its original file name (the last URL segment).
  # Note: the parameter was renamed from `link` to `url`; the original body
  # read the global `url` and only worked by accident.
  file_name = url.split('/')[-1]
  with open(file_name, "wb") as file:
    response = get(url)
    file.write(response.content)

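# Download the archive page and collect every href on it.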
def fetch_comic_archive():
  url = 'http://www.poorlydrawnlines.com/archive/'
  req = requests.get(url)
  page = req.text
  soup = bs(page, 'html.parser')
  all_links = []
  for link in soup.find_all('a'):
    all_links.append(link.get('href'))
  return all_links

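# Keep only the links that point to individual comic pages.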
def filter_comic_archive(archive):
  pattern = re.compile(r'http://www.poorlydrawnlines.com/comic/.+')
  filtered_links = [i for i in archive if pattern.match(i)]
  return filtered_links

all_comics = fetch_comic_archive()
found_comics = filter_comic_archive(all_comics)

print("\nThe scraper has found {} comics.".format(len(found_comics)))
print("How many comics do you want to download?")
n_of_comics = int(input(">> ").strip())

start = time.time()
for link in generate_comic_link(found_comics, n_of_comics):
  print("Downloading: {}".format(link)
  url = grab_image_src_url(link)
  download_image(url)
end = time.time()
print("Successfully downloaded {} comics in {:.2f} seconds.".format(n_of_comics, end - start))

1 Answer:

Answer 0 (score: 0)

It turns out the solution was to import threading. Using the very same code from the question, here is the solution:

...
threads = []
for link in generate_comic_link(found_comics, n_of_comics):
  print("Downloading: {}".format(link))
  url = grab_image_src_url(link)
  thread = threading.Thread(target=download_image, args=(url,))
  thread.start()
  threads.append(thread)
# Wait for every download thread, not just the last one started.
for thread in threads:
  thread.join()
...
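For comparison, here is a sketch of the same idea using concurrent.futures.ThreadPoolExecutor (not what the answer used, but it manages the worker threads and joins them all when the with block exits):

from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=8) as executor:  # worker count is a guess
  for link in generate_comic_link(found_comics, n_of_comics):
    print("Downloading: {}".format(link))
    url = grab_image_src_url(link)
    # Each download runs on a pool thread; the with block waits for all of them.
    executor.submit(download_image, url)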

This effectively improved the download speed by almost 50%, even for rough code like the above.

Download time for the 10-image sample is now around 21 seconds, compared to > 40 seconds before.
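One more lever, sketched here as an idea rather than something that was benchmarked: reuse a single requests.Session so each download does not open a fresh TCP connection:

import requests

session = requests.Session()  # one shared connection pool, reused per request

def download_image(url):
  file_name = url.split('/')[-1]
  with open(file_name, "wb") as file:
    # session.get reuses pooled connections instead of reconnecting each time.
    file.write(session.get(url).content)

(If combined with the threads above, note that sharing a Session across threads is common practice but not officially documented as thread-safe.)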

The fully refactored code is here.