I've been on this site for a while now, and I've found a lot of useful solutions to problems I hit while building my first Python program. I'm hoping you can help me once again.
I am trying to launch a variable number of processes, each of which takes a small slice of a list to scan. I've been tinkering with queues, but when I implement them they always add a fair amount of time to my loop. I want to maximize my speed while protecting my Titles.txt from erroneous content. Let me show you my code.
import multiprocessing
import urllib.request
import bs4 as bs

l = ['url1', 'url2', etc]

def output(t):
    f = open('Titles.txt', 'a')
    f.write(t)
    f.close()

def job(y, processload):
    calendar = ['Jan', 'Feb', 'Mar', 'Dec']  # the things I want to find
    for i in range(processload):  # loop processload times
        source = urllib.request.urlopen(l[y]).read()  # read url #y
        soup = bs.BeautifulSoup(source, 'lxml')
        for t in soup.html.head.find_all('title'):
            if any(word in t for word in calendar):
                output(t)  # this is what I need to queue
        y += 1  # advance to the next url

if __name__ == '__main__':
    processload = 5  # the number of urls to be scanned by each job
    y = 0  # the index of the current url in the list
    runcount = 0
    while runcount == 0:  # engage loop
        for i in range(380 // processload):  # the list size / 5
            p = multiprocessing.Process(target=job, args=(y, processload))
            p.start()
            y += processload  # jump y ahead
The code above gives me maximum speed in the loop. I'd like to keep that speed while protecting my output. I've been searching for examples, but I haven't yet found code where a lock or a queue is started inside the child processes. How would you recommend I proceed?
Many thanks.
Answer 0 (score: 0)
This sample code does what I believe you want your program to do:
import multiprocessing as mp
import time
import random

# Slicing a list into sublists, from SilentGhost
# https://stackoverflow.com/a/2231685/4834
def get_chunks(input_list, chunk_size):
    return [input_list[i:i+chunk_size] for i in range(0, len(input_list), chunk_size)]

def find_all(item):
    ''' Dummy generator to simulate fetching a page and returning interesting stuff '''
    secs = random.randint(1, 5)
    time.sleep(secs)
    # Just one yield here, but could yield each item found
    yield item

def output(q):
    ''' Dummy sink which prints instead of writing to a file '''
    while True:
        item = q.get()
        if item is None:
            return
        print(item)

def job(chunk, q):
    for item in chunk:
        for t in find_all(item):
            q.put(t)
    print('Job done:', chunk)

if __name__ == '__main__':
    all_urls = ['url1', 'url2', 'url3', 'url4', 'url5', 'url6']
    chunks = get_chunks(all_urls, 2)
    q = mp.Queue()

    # Create processes, each taking a chunk and the queue
    processes = [mp.Process(target=job, args=(chunk, q)) for chunk in chunks]

    # Start them all
    for p in processes:
        p.start()

    # Create and start the sink
    sink = mp.Process(target=output, args=(q,))
    sink.start()

    # Wait for all the jobs to finish
    for p in processes:
        p.join()

    # Signal the end with None
    q.put(None)
    sink.join()
Sample output:
url3
Job done: ['url3', 'url4']
url4
url5
url1
Job done: ['url5', 'url6']
url6
Job done: ['url1', 'url2']
url2
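The key design choice above is that only one process (the sink) ever touches the output, so workers can never interleave their writes, and the workers themselves never block on file I/O. If you would rather keep the writes inside the workers, the other option you asked about is a multiprocessing.Lock created in the parent and passed to each child. Here is a minimal sketch of that approach; the write_title helper and the use of Titles.txt are my assumptions, not part of the answer above:

import multiprocessing as mp

def write_title(lock, t):
    # Hypothetical helper: hold the shared lock for the whole write,
    # so lines from different workers cannot interleave in Titles.txt.
    with lock:
        with open('Titles.txt', 'a') as f:
            f.write(t + '\n')

def job(chunk, lock):
    for item in chunk:
        # ... fetch and parse the page as before, then:
        write_title(lock, item)

if __name__ == '__main__':
    all_urls = ['url1', 'url2', 'url3', 'url4']
    lock = mp.Lock()  # one lock, created in the parent and shared by all children
    processes = [mp.Process(target=job, args=(chunk, lock))
                 for chunk in [all_urls[:2], all_urls[2:]]]
    for p in processes:
        p.start()
    for p in processes:
        p.join()

The trade-off: with a lock, a worker that is writing briefly blocks every other worker that wants to write, while the queue-plus-sink version lets workers hand off results and move straight on to the next URL. For short title strings either should be fast; the slow part of your loop is the network fetch, not the queue.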