How to implement multiprocessing in a BeautifulSoup web scraper

Date: 2020-01-23 14:13:19

Tags: python web-scraping beautifulsoup

I made a web scraper with Python and the BeautifulSoup library. It works fine; the only problem is that it is very slow. So now I would like to add some multiprocessing to speed it up, but I don't know how.

My code consists of two parts. The first part scrapes the website so that I can generate the URLs to scrape further, and appends those URLs to a list. The first part looks like this:

from bs4 import BeautifulSoup
import requests
from datetime import date, timedelta
from multiprocessing import Pool

headers = {'User-Agent':'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}

links = [["Cross-Country", "https://www.fis-ski.com/DB/cross-country/cup-standings.html", "?sectorcode=CC&seasoncode={}&cupcode={}&disciplinecode=ALL&gendercode={}&nationcode="],
         ["Ski Jumping", "https://www.fis-ski.com/DB/ski-jumping/cup-standings.html", ""],
         ["Nordic Combined", "https://www.fis-ski.com/DB/nordic-combined/cup-standings.html", ""],
         ["Alpine", "https://www.fis-ski.com/DB/alpine-skiing/cup-standings.html", ""]]

# FOR LOOP FOR GENERATING URLS FOR SCRAPING

all_urls = []
for link in links[:1]:

    response = requests.get(link[1], headers = headers)
    soup = BeautifulSoup(response.text, 'html.parser')

    discipline = link[0]
    print(discipline)

    season_list = []
    competition_list = []
    gender_list = ["M", "L"]


    all_seasons = soup.find_all("div", class_ = "select select_size_medium")[0].find_all("option")
    for season in all_seasons:
        season_list.append(season.text)

    all_competitions = soup.find_all("div", class_ = "select select_size_medium")[1].find_all("option")
    for competition in all_competitions:
        competition_list.append([competition["value"], competition.text])


    for gender in gender_list:
        for competition in competition_list[:1]:
            for season in season_list[:2]:

                url = link[1] + link[2].format(season, competition[0], gender)
                all_urls.append([discipline, season, competition[1], gender, url])

                print(discipline, season, competition[1], gender, url)
                print()

print(len(all_urls))   

This first part generates over 4500 links, but I added some index limits so that it only generates 8 of them. Here is the second part of the code; it is basically a for loop that goes URL by URL and scrapes specific data. Second part:

# FUNCTION FOR SCRAPING
def parse():
    for url in all_urls:

        response = requests.get(url[4], headers = headers)
        soup = BeautifulSoup(response.text, 'html.parser')

        all_skier_names = soup.find_all("div", class_ = "g-xs-10 g-sm-9 g-md-4 g-lg-4 justify-left bold align-xs-top")
        all_countries = soup.find_all("span", class_ = "country__name-short")


        discipline = url[0]
        season = url[1]
        competition = url[2]
        gender = url[3]


        for name, country in zip(all_skier_names , all_countries):

            skier_name = name.text.strip().title()
            country = country.text.strip()

            print(discipline, "|", season, "|", competition, "|", gender, "|", country, "|", skier_name)

        print()

parse() 

I have read some documentation, and my multiprocessing part should look something like this:

p = Pool(10)  # Pool tells how many at a time
records = p.map(parse, all_urls)
p.terminate()
p.join()  

But when I ran this, I waited 30 minutes and nothing happened. What am I doing wrong, and how can I implement multiprocessing with a pool so that I can scrape 10 or more URLs at the same time?

2 Answers:

Answer 0 (score: 1)

Here is a simple implementation using multiprocessing.Pool. Note that I use the tqdm module to show a nice progress bar (it is useful for checking the current progress of a long-running program):

from bs4 import BeautifulSoup
import requests
from datetime import date, timedelta
from multiprocessing import Pool
import tqdm

headers = {'User-Agent':'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}

def parse(url):
    response = requests.get(url[4], headers = headers)
    soup = BeautifulSoup(response.text, 'html.parser')

    all_skier_names = soup.find_all("div", class_ = "g-xs-10 g-sm-9 g-md-4 g-lg-4 justify-left bold align-xs-top")
    all_countries = soup.find_all("span", class_ = "country__name-short")

    discipline = url[0]
    season = url[1]
    competition = url[2]
    gender = url[3]

    out = []
    for name, country in zip(all_skier_names , all_countries):
        skier_name = name.text.strip().title()
        country = country.text.strip()
        out.append([discipline, season,  competition,  gender,  country,  skier_name])

    return out

# here I hard-coded all_urls:
all_urls = [
    ['Cross-Country', '2020', 'World Cup', 'M', 'https://www.fis-ski.com/DB/cross-country/cup-standings.html?sectorcode=CC&seasoncode=2020&cupcode=WC&disciplinecode=ALL&gendercode=M&nationcode='],
    ['Cross-Country', '2020', 'World Cup', 'L', 'https://www.fis-ski.com/DB/cross-country/cup-standings.html?sectorcode=CC&seasoncode=2020&cupcode=WC&disciplinecode=ALL&gendercode=L&nationcode='],
    ['Ski Jumping', '2020', 'World Cup', 'M', 'https://www.fis-ski.com/DB/ski-jumping/cup-standings.html'],
    ['Ski Jumping', '2020', 'World Cup', 'L', 'https://www.fis-ski.com/DB/ski-jumping/cup-standings.html'],
    ['Nordic Combined', '2020', 'World Cup', 'M', 'https://www.fis-ski.com/DB/nordic-combined/cup-standings.html'],
    ['Nordic Combined', '2020', 'World Cup', 'L', 'https://www.fis-ski.com/DB/nordic-combined/cup-standings.html'],
    ['Alpine', '2020', 'World Cup', 'M', 'https://www.fis-ski.com/DB/alpine-skiing/cup-standings.html'],
    ['Alpine', '2020', 'World Cup', 'L', 'https://www.fis-ski.com/DB/alpine-skiing/cup-standings.html'],
]

with Pool(processes=2) as pool, tqdm.tqdm(total=len(all_urls)) as pbar: # create Pool of processes (only 2 in this example) and tqdm Progress bar
    all_data = []                                                       # into this list I will store the urls returned from parse() function
    for data in pool.imap_unordered(parse, all_urls):                   # send urls from all_urls list to parse() function (it will be done concurently in process pool). The results returned will be unordered (returned when they are available, without waiting for other processes)
        all_data.extend(data)                                           # update all_data list
        pbar.update()                                                   # update progress bar

# Note:
# this for-loop will have 8 iterations (because all_urls has 8 links)

# print(all_data) # <-- this is your data
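
A side note on the design choice: because the slow part here is waiting on network I/O rather than CPU work, a thread pool can be a drop-in alternative to a process pool and avoids the process start-up and pickling overhead. A minimal sketch, assuming the same parse() function and all_urls list defined above:

from multiprocessing.pool import ThreadPool   # thread-based pool with the same API as multiprocessing.Pool
import tqdm

# assumes parse() and all_urls are already defined as in the answer above
with ThreadPool(processes=10) as pool, tqdm.tqdm(total=len(all_urls)) as pbar:
    all_data = []
    for data in pool.imap_unordered(parse, all_urls):  # same call pattern as with the process pool
        all_data.extend(data)
        pbar.update()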

Answer 1 (score: 0)

The code posted by @andrej-kesely works fine in IDLE. Make sure the code has the proper indentation where it belongs.
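
One related caveat when running pool-based code as a standalone script: the spawn start method (always used on Windows) re-imports the main module in every worker process, so the code that creates the pool should normally sit under an if __name__ == "__main__": guard. A minimal sketch of how the pool section from the answer above could be wrapped, assuming parse() and all_urls are defined at module level:

from multiprocessing import Pool
import tqdm

# assumes parse() and all_urls are defined at module level, as in the answer above
if __name__ == "__main__":   # keeps workers from re-running the pool setup when they import this module
    with Pool(processes=2) as pool, tqdm.tqdm(total=len(all_urls)) as pbar:
        all_data = []
        for data in pool.imap_unordered(parse, all_urls):
            all_data.extend(data)
            pbar.update()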