To scrape a pool of URLs, I am parallel-processing Selenium with joblib. In this context, I face two challenges: reusing the driver instance across iterations, and the while loop I only need so that I can continue past URLs that return empty results (I know this is most likely the wrong approach). Pseudocode:
URL_list = [URL1, URL2, URL3, ..., URL100000]   # List of URLs to be scraped

def scrape(URL):
    while True:                                  # Loop needed to use continue
        try:                                     # Try scraping
            driver = webdriver.Firefox(executable_path=path)  # Set up driver
            website = driver.get(URL)            # Get URL
            results = do_something(website)      # Get results from URL content
            driver.close()                       # Close worker
            if len(results) == 0:                # If do_something() failed:
                continue                         #   skip this URL
            else:                                # If do_something() worked:
                safe_results("results.csv")      #   save the results
                break                            # Go to next worker/URL
        except Exception as e:                   # If something weird happens:
            save_exception(URL, e)               #   save the error message
            break                                # Go to next worker/URL

Parallel(n_jobs=40)(delayed(scrape)(URL) for URL in URL_list)   # Run in 40 processes
My understanding is that, in order to reuse the driver instance across iterations, the # Set up driver line needs to be placed outside of scrape(URL). However, everything outside of scrape(URL) is not found by joblib's Parallel(n_jobs=40). That would imply that you cannot reuse a driver instance while scraping with joblib, which can't be true.
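To illustrate the scoping problem I mean, here is a minimal sketch (not part of my actual code, and assuming joblib's default process-based backend):

# Sketch of the problem only, not working code:
from joblib import Parallel, delayed
from selenium import webdriver

driver = webdriver.Firefox()        # created once, in the parent process

def scrape(URL):
    # With a process-based backend this does not reuse one browser: a live
    # driver object wraps an open connection to the browser and cannot be
    # handed over to separate worker processes, so each process would still
    # need its own driver.
    driver.get(URL)

Parallel(n_jobs=40)(delayed(scrape)(URL) for URL in URL_list)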
Q1: How can I reuse driver instances during the parallel processing in the example above?
Q2: How can I get rid of the while loop while keeping the functionality of the example above?
Note: Flash and image loading are disabled in the firefox_profile (code not shown).
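For reference, the omitted profile setup might look roughly like this; the exact preference names are my assumption, since the original code is not shown:

# Hypothetical sketch of the omitted firefox_profile setup (preference names are assumptions):
from selenium import webdriver

profile = webdriver.FirefoxProfile()
profile.set_preference("permissions.default.image", 2)   # do not load images
profile.set_preference("plugin.state.flash", 0)          # disable the Flash plugin
driver = webdriver.Firefox(firefox_profile=profile, executable_path=path)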
Answer 0 (score: 1):
1) You should first create a bunch of drivers: one per process, and pass an instance to each worker. I don't know how to pass a driver to the Parallel object directly, but you can use threading.current_thread().name as a key to identify the drivers. To make that work, use backend="threading". So now each thread has its own driver.

2) You don't need the loop at all. The Parallel object already iterates over all of your URLs for you (I hope I understood correctly why you wanted the loop).
import threading
from joblib import Parallel, delayed
from selenium import webdriver

def scrape(URL):
    try:
        driver = drivers[threading.current_thread().name]   # Reuse this thread's driver
    except KeyError:                                         # First URL in this thread:
        drivers[threading.current_thread().name] = webdriver.Firefox()
        driver = drivers[threading.current_thread().name]
    driver.get(URL)
    results = do_something(driver)
    if results:
        safe_results("results.csv")

drivers = {}                                                 # Driver registry shared by all threads
Parallel(n_jobs=-1, backend="threading")(delayed(scrape)(URL) for URL in URL_list)

for driver in drivers.values():                              # Shut all browsers down at the end
    driver.quit()
But I don't think that using more n_jobs than you have CPUs gains you anything, so n_jobs=-1 is best (of course I may be wrong, just try it).
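As a side note (my own addition, not part of the answer above): the same one-driver-per-thread reuse can also be expressed with threading.local(), which avoids the manual KeyError handling; a rough sketch, assuming the same do_something(), safe_results() and URL_list as in the question:

# Alternative sketch using thread-local storage (an assumption, not the answer's code):
import threading
from joblib import Parallel, delayed
from selenium import webdriver

local = threading.local()
all_drivers = []                        # keep references so we can quit() them later

def get_driver():
    if not hasattr(local, "driver"):    # first call in this thread: start a browser
        local.driver = webdriver.Firefox()
        all_drivers.append(local.driver)
    return local.driver

def scrape(URL):
    driver = get_driver()
    driver.get(URL)
    results = do_something(driver)
    if results:
        safe_results("results.csv")

Parallel(n_jobs=-1, backend="threading")(delayed(scrape)(URL) for URL in URL_list)

for driver in all_drivers:
    driver.quit()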