Python multithreading with Requests and BeautifulSoup

Posted: 2017-01-02 02:19:01

Tags: python multithreading web-scraping beautifulsoup python-requests

I'm writing a web scraper. I could have used Scrapy, but I decided to write it from scratch so I could practice.

I created a scraper that runs successfully using Requests and BeautifulSoup. It navigates about 135 pages with 12 items on each, grabs the links, and then scrapes information from each link target. At the end it writes everything to a CSV file. It only scrapes strings; it doesn't download any images or anything like that... for now.

The problem? It's slow. It takes roughly 5 seconds to fetch everything from the contents of a single page, so the 135 pages take around 11 minutes.

So my question is: how can I implement threading in my code so that the data is fetched faster?

Here is the code:

import requests
from bs4 import BeautifulSoup
import re
import csv


def get_actor_dict_from_html(url, html):
    soup = BeautifulSoup(html, "html.parser")

    #There must be a better way to handle this, but let's assign a NULL value to all upcoming variables.
    profileName = profileImage = profileHeight = profileWeight = 'NULL'

    #Let's get the name and image..
    profileName = str.strip(soup.find('h1').get_text())
    profileImage = "http://images.host.com/actors/" + re.findall(r'\d+', url)[0] + "/actor-large.jpg"

    #Now the rest of the stuff..
    try:
        profileHeight = soup.find('a', {"title": "Height"}).get_text()
    except:
        pass
    try:
        profileWeight = soup.find('a', {"title": "Weight"}).get_text()
    except:
        pass

    return {
        'Name': profileName,
        'ImageUrl': profileImage,
        'Height': profileHeight,
        'Weight': profileWeight,
        }


def lotta_downloads():
    output = open("/tmp/export.csv", 'w', newline='')
    wr = csv.DictWriter(output, ['Name','ImageUrl','Height','Weight'], delimiter=',')
    wr.writeheader()

    for i in range(135):
        url = "http://www.host.com/actors/all-actors/name/{}/".format(i)
        response = requests.get(url)
        html = response.content
        soup = BeautifulSoup(html, "html.parser")
        links = soup.find_all("div", { "class" : "card-image" })

        for a in links:
            for url in a.find_all('a'):
                url = "http://www.host.com" + url['href']
                print(url)
                response = requests.get(url)
                html = response.content
                actor_dict = get_actor_dict_from_html(url, html)
                wr.writerow(actor_dict)
    print('All Done!')

if __name__ == "__main__":
    lotta_downloads()

Thanks!

1 Answer:

Answer 0 (score: 0)

Why not try using the gevent library?

The gevent library has a monkey patch that turns blocking functions into non-blocking ones.

Most of the time is probably spent waiting on the requests, which is why it is so slow.

So I think making the requests non-blocking can make your program faster.

On Python 2.7.10, for example:

import gevent
from gevent import monkey; monkey.patch_all()  # patch blocking I/O in the standard library so it becomes non-blocking
import csv
import requests
from bs4 import BeautifulSoup
# get_actor_dict_from_html is the same function as in the question

actor_dict_list = []

def worker(url):
    content = requests.get(url).content
    soup = BeautifulSoup(content, "html.parser")
    links = soup.find_all('div', {'class': 'card-image'})

    for a in links:
        for link in a.find_all('a'):
            actor_url = "http://www.host.com" + link['href']
            response = requests.get(actor_url)  # You can also use gevent.spawn on this line
            html = response.content
            # Append to a list instead of writing the CSV here, to prevent a race condition between greenlets
            actor_dict_list.append(get_actor_dict_from_html(actor_url, html))

output = open("/tmp/export.csv", "w", newline='')
wr = csv.DictWriter(output, ['Name', 'ImageUrl', 'Height', 'Weight'], delimiter=',')
wr.writeheader()

urls = ["http://www.host.com/actors/all-actors/name/{}/".format(i) for i in range(135)]
jobs = [gevent.spawn(worker, url) for url in urls]
gevent.joinall(jobs)
for actor_dict in actor_dict_list:
    wr.writerow(actor_dict)
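
If you prefer to stay with threads from the standard library (as in the question title), a thread pool gives a similar speed-up without monkey patching. The sketch below is only an illustration: it reuses get_actor_dict_from_html and the URLs from the question, the helper name scrape_listing_page and the worker count are my own choices, and it targets Python 3 (concurrent.futures needs the futures backport on 2.7).

import csv
from concurrent.futures import ThreadPoolExecutor

import requests
from bs4 import BeautifulSoup


def scrape_listing_page(url):
    # Fetch one listing page and return the actor dicts found via its links.
    soup = BeautifulSoup(requests.get(url).content, "html.parser")
    results = []
    for div in soup.find_all("div", {"class": "card-image"}):
        for a in div.find_all("a"):
            actor_url = "http://www.host.com" + a["href"]
            html = requests.get(actor_url).content
            results.append(get_actor_dict_from_html(actor_url, html))
    return results


urls = ["http://www.host.com/actors/all-actors/name/{}/".format(i) for i in range(135)]

# One listing page per thread; the threads spend most of their time waiting on the network.
with ThreadPoolExecutor(max_workers=10) as pool:
    pages = list(pool.map(scrape_listing_page, urls))

# Write the CSV from the main thread only, so the writer is never shared between threads.
with open("/tmp/export.csv", "w", newline='') as output:
    wr = csv.DictWriter(output, ['Name', 'ImageUrl', 'Height', 'Weight'], delimiter=',')
    wr.writeheader()
    for page in pages:
        for actor_dict in page:
            wr.writerow(actor_dict)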

Official gevent documentation: doc

P.S.

You have to install python-gevent if you are on Ubuntu:

sudo apt-get install python-gevent
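
Alternatively, installing it with pip inside a virtualenv should work as well:

pip install gevent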