Avoiding duplicate results with multithreading in Python

Date: 2016-08-16 16:20:34

Tags: python multithreading


I am trying to make my crawler multithreaded. When I set up multithreading, several instances of the function get started.

For example:

If my function uses print range(5) and I have 2 threads, I get 1,1,2,2,3,3,4,4,5,5.

How can I get the result 1,2,3,4,5 with multithreading?
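A minimal sketch of that behaviour, assuming both threads simply run the identical function (the output order may interleave, but every value is printed twice because each thread does the whole range):

import threading

def work():
    for i in range(1, 6):
        print(i)

# Two threads each run the full loop, so every number appears twice.
threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()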

My actual code is a scraper, which you can see below:

import requests
from bs4 import BeautifulSoup

def trade_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = "http://stackoverflow.com/questions?page=" + str(page)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, "html.parser")
        for link in soup.findAll('a', {'class': 'question-hyperlink'}):
            href = link.get('href')
            title = link.string
            print(title)
            get_single_item_data("http://stackoverflow.com/" + href)
        page += 1

def get_single_item_data(item_url):
    source_code = requests.get(item_url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text, "html.parser")
    res = soup.find('span', {'class': 'vote-count-post '})
    print("UpVote : " + res.string)

trade_spider(1)

How can I call trade_spider() from multiple threads without getting duplicate links?

2 Answers:

Answer 0 (score: 1):

Make the page number an argument of the trade_spider function.

Call the function in each process with a different page number, so that every process gets a unique page.

For example:

import multiprocessing

import requests
from bs4 import BeautifulSoup

def trade_spider(page):
    # Each call crawls exactly one page, so no page is fetched twice.
    url = "http://stackoverflow.com/questions?page=%s" % (page,)
    source_code = requests.get(url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text, "html.parser")
    for link in soup.findAll('a', {'class': 'question-hyperlink'}):
        href = link.get('href')
        title = link.string
        print(title)
        # get_single_item_data is the function from the question.
        get_single_item_data("http://stackoverflow.com/" + href)

if __name__ == "__main__":
    # Pool of 10 processes.
    max_pages = 100
    num_pages = range(1, max_pages + 1)  # pages 1..max_pages
    pool = multiprocessing.Pool(10)
    # Run and wait for completion.
    # pool.map returns the results of the trade_spider calls,
    # but trade_spider returns nothing, so the return value is ignored.
    pool.map(trade_spider, num_pages)
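If real threads are preferred over processes (the question asks about multithreading), the same pattern could be used with multiprocessing.dummy.Pool, which exposes the same interface backed by threads; this is a sketch under that assumption, reusing trade_spider from above:

from multiprocessing.dummy import Pool as ThreadPool  # thread-backed Pool, same API

max_pages = 100
pool = ThreadPool(10)
# Each page number is handed to exactly one worker thread,
# so no page is crawled twice.
pool.map(trade_spider, range(1, max_pages + 1))
pool.close()
pool.join()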

Answer 1 (score: 1):

Try this:

from multiprocessing import Process, Value
import time

# trade_spider is assumed to be defined with the two-argument
# signature described below, sharing the page counter between processes.
max_pages = 100
shared_page = Value('i', 1)
arg_list = (max_pages, shared_page)
process_list = list()
for x in range(2):
    spider_process = Process(target=trade_spider, args=arg_list)
    spider_process.daemon = True
    spider_process.start()
    process_list.append(spider_process)
for spider_process in process_list:
    while spider_process.is_alive():
        time.sleep(1.0)
    spider_process.join()

Change the argument list of trade_spider to

def trade_spider(max_pages, page)

and remove

    page = 1

This creates two processes that work through the list of pages together by sharing the page value.
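The answer does not show the reworked body of trade_spider; a minimal sketch of how it could consume the shared counter, assuming get_single_item_data from the question and using Value.get_lock() to serialize the claim of each page number:

import requests
from bs4 import BeautifulSoup

def trade_spider(max_pages, page):
    while True:
        # Atomically claim the next page number so the two processes
        # never crawl the same page.
        with page.get_lock():
            current = page.value
            if current > max_pages:
                return
            page.value = current + 1
        url = "http://stackoverflow.com/questions?page=" + str(current)
        source_code = requests.get(url)
        soup = BeautifulSoup(source_code.text, "html.parser")
        for link in soup.findAll('a', {'class': 'question-hyperlink'}):
            print(link.string)
            # get_single_item_data is the function from the question.
            get_single_item_data("http://stackoverflow.com/" + link.get('href'))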