I made a script that lets me scrape dynamic content from a website. For this script I built a nearly infinite loop that grabs data and saves it to a csv text file for later manipulation. Although most of my calculations aren't strictly necessary, it would still be very useful for me to do some quick real-time calculations. My problem is that I don't want to slow the script down by doing the calculations inside it and appending them to the text file at the same time as the data. What would be the best way to manage small real-time calculations without slowing down my scraping script? Thanks!
Answer 0 (score: 1)
You can start a new thread for each row you write to the csv file. My example uses YouTube and scrapes a playlist, writing the links to a csv file and starting a thread in which you can do something with each link.
import re
import csv
from threading import Thread
from time import sleep

import requests
from bs4 import BeautifulSoup
from selenium import webdriver


def threaded_function(arg):
    # Do something with the link here (e.g. your quick real-time calculation)
    print(arg)
    sleep(1)


# Ask which playlist should be downloaded
print('Which playlist do you want to download?')
playlist = input()

# Open my YouTube playlists page
driver = webdriver.Chrome(executable_path='/usr/lib/chromium-browser/chromedriver')
driver.get("https://www.youtube.com/user/randomuser/playlists?sort=dd&view=1&shelf_id=0")

# Access the 'Favorites' playlist
if playlist == 'Favorites':
    driver.find_element_by_xpath('//a[contains(text(), "Favorites")]').click()
    newurl = driver.current_url
    requrl = requests.get(newurl)
    requrlcont = requrl.content

links = []
threads = []
soup = BeautifulSoup(requrlcont, "html.parser")
with open("output.csv", 'w', newline='') as f:  # open the csv once instead of once per row
    writer = csv.writer(f)
    for link in soup.find_all('a'):
        href = link.get('href')
        if href and re.match(r"/watch\?v=", href):
            links.append(href)
            writer.writerow([href])
            # hand the link off to a thread so the scraping loop keeps moving
            thread = Thread(target=threaded_function, args=(href,))
            thread.start()
            threads.append(thread)

for thread in threads:
    thread.join()
print("threads finished...exiting")
print(links)
Testing
python pyprog.py
Which playlist do you want to download?
Favorites
...
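If you would rather not spawn one thread per row in your own csv-writing loop, another option in the same spirit is to push each scraped row onto a queue and let a single background worker do the small calculations and the csv writing, so the scraping loop never blocks on the file or the math. Below is a minimal sketch of that variant, not tested against your script: fetch_next_row() and the price/volume columns are made-up placeholders for whatever your scraper actually produces.

import csv
import queue
import random
from threading import Thread

def fetch_next_row():
    # Hypothetical stand-in for the real scraping step; it just fabricates a (price, volume) pair.
    return random.random() * 100, random.randint(1, 50)

rows = queue.Queue()  # scraped rows waiting to be processed

def worker():
    # Single consumer: does the quick real-time calculation and the csv writing,
    # so the scraping loop itself never waits on disk I/O or arithmetic.
    with open("calc_output.csv", 'w', newline='') as f:
        writer = csv.writer(f)
        while True:
            row = rows.get()
            if row is None:  # sentinel value: scraping is finished
                break
            price, volume = row
            writer.writerow([price, volume, price * volume])  # quick derived value

worker_thread = Thread(target=worker)
worker_thread.start()

# The (nearly) infinite scraping loop just hands each row off and keeps going.
for _ in range(1000):
    rows.put(fetch_next_row())
rows.put(None)        # tell the worker to stop once the queue drains
worker_thread.join()  # wait for the last rows to be written

The hand-off is cheap (one put() per row), and because there is only one worker you also avoid having several threads write to the same csv file at once.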