Python multiprocessing/Pool with Selenium - creating multiple copies of the WebDriver

Asked: 2019-02-28 01:20:42

Tags: python selenium web-scraping python-multiprocessing

I have to scrape a few websites and want to speed the process up. I'm trying to use multiprocessing to spread the work out so that two sites are scraped at the same time.

from multiprocessing import Process, current_process, Pool
from selenium import webdriver
import os
from functools import partial

def pool_function(url, browser):
    # Report which worker process is handling this URL, then load the page.
    name = current_process().name
    print('Process name: %s' % name)
    print(url)
    print('')
    browser.get(url)



if __name__ == '__main__':

    list_of_urls = ['http://www.linkedin.com', 'http://www.amazon.com', 'http://www.uber.com', 'http://www.facebook.com']
    p = Pool(processes=2)
    browser = webdriver.Chrome()
    # Bind the shared browser by keyword so map() can pass each URL positionally.
    test = partial(pool_function, browser=browser)
    results = p.map(test, list_of_urls)

    p.close()
    p.join()

    print('all done')

This gives me an error saying that the browser object can't be pickled.

Traceback (most recent call last):
  File "/Users/morganallen/Desktop/multi_processing.py", line 43, in <module>
    results = p.map(test, list_of_urls)
  File "/anaconda2/lib/python2.7/multiprocessing/pool.py", line 253, in map
    return self.map_async(func, iterable, chunksize).get()
  File "/anaconda2/lib/python2.7/multiprocessing/pool.py", line 572, in get
    raise self._value
cPickle.PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed

What about creating the browser instance first and then spinning up a copy of it in each individual process that I can feed URLs to?
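One common way to get that effect (this is only a sketch, not part of the original attempt; init_driver and the module-level driver variable are placeholder names) is to give the Pool an initializer, so each worker process starts its own Chrome once and reuses it for every URL it receives, and no WebDriver object ever has to be pickled:

from multiprocessing import Pool, current_process
from selenium import webdriver

driver = None  # set separately in each worker process by init_driver()

def init_driver():
    # Runs once in every worker process: start one Chrome per worker and keep
    # it in a module-level global so each task in that process can reuse it.
    global driver
    driver = webdriver.Chrome()

def pool_function(url):
    name = current_process().name
    print('Process name: %s' % name)
    driver.get(url)

if __name__ == '__main__':
    list_of_urls = ['http://www.linkedin.com', 'http://www.amazon.com',
                    'http://www.uber.com', 'http://www.facebook.com']
    p = Pool(processes=2, initializer=init_driver)
    p.map(pool_function, list_of_urls)
    p.close()
    p.join()
    print('all done')

Each worker keeps its Chrome open for the lifetime of the pool, so an explicit driver.quit() step (or the Pool's maxtasksperchild argument) would be needed if the browsers have to be cleaned up.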

Update:

Tried it with pool.apply_async, but I still can't get the browser object to pickle.

def browser():
    # Start a single Chrome instance to hand to each task.
    browser = webdriver.Chrome()
    return browser

# print('all done')
if __name__ == '__main__':

    list_of_urls = ['http://www.linkedin.com', 'http://www.amazon.com', 'http://www.uber.com', 'http://www.facebook.com']

    p = Pool(processes=2)
    y = browser()
    # Each task gets a URL plus the shared WebDriver instance (which still cannot be pickled).
    results = [p.apply_async(pool_function, args=(url, y)) for url in list_of_urls]
    print(results)
    output = [res.get() for res in results]
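
For the apply_async version, the same idea applies: keep the WebDriver out of the arguments that have to be pickled. A minimal sketch under that assumption (fetch is a placeholder name, not from the post) creates and quits the Chrome instance inside the worker function, so only the URL string crosses the process boundary:

from multiprocessing import Pool
from selenium import webdriver

def fetch(url):
    # The WebDriver is created inside the worker, so it is never pickled.
    driver = webdriver.Chrome()
    try:
        driver.get(url)
        return driver.title
    finally:
        driver.quit()

if __name__ == '__main__':
    list_of_urls = ['http://www.linkedin.com', 'http://www.amazon.com',
                    'http://www.uber.com', 'http://www.facebook.com']
    p = Pool(processes=2)
    results = [p.apply_async(fetch, args=(url,)) for url in list_of_urls]
    output = [res.get() for res in results]
    p.close()
    p.join()
    print(output)

The trade-off is that this launches and tears down a browser per URL; the initializer sketch above reuses one browser per worker instead.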

0 Answers

No answers yet.