Hi everyone! I have written a small web-crawling function, but I am new to multithreading and I have not been able to optimize it. My code is:
import hashlib

alreadySeenURLs = dict()  # fingerprints of URLs already crawled
candidates = set()        # the set of URL candidates to crawl

def initializeCandidates(url):
    # gets page with urllib2
    page = getPage(url)
    # parses page with BeautifulSoup
    parsedPage = getParsedPage(page)
    # returns all links from the parsed page as a set
    initialURLsFromRoot = getLinksFromParsedPage(parsedPage)
    return initialURLsFromRoot

def updateCandidates(oldCandidates, newCandidates):
    return oldCandidates.union(newCandidates)

candidates = initializeCandidates(rootURL)

for url in candidates:
    print len(candidates)
    # fingerprint of the URL
    fp = hashlib.sha1(url).hexdigest()
    # skip URLs that have already been seen
    if fp in alreadySeenURLs:
        continue
    alreadySeenURLs[fp] = url
    # do some processing
    print url
    page = getPage(url)
    parsedPage = getParsedPage(page, fix=True)
    newCandidates = getLinksFromParsedPage(parsedPage)
    # note: rebinding `candidates` here creates a new set; the
    # for-loop keeps iterating over the original one, so newly
    # discovered URLs are never visited
    candidates = updateCandidates(candidates, newCandidates)
As you can see, it takes one URL from candidates at a time. I would like to make this script multithreaded so that it can take at least N URLs from candidates and process them concurrently. Can anyone guide me, or suggest any links or advice?
Answer 0 (score: 1)
You can start with these two links:

The basic reference for threading in Python: http://docs.python.org/library/threading.html

A tutorial on actually implementing a multithreaded URL crawler in Python: http://www.ibm.com/developerworks/aix/library/au-threadingpython/
Also, there is already a crawler framework for Python: http://scrapy.org/
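To make the idea from those links concrete, here is a minimal sketch of the worker-pool pattern they describe, written in Python 3: a thread-safe Queue holds the candidate URLs, several worker threads pull from it, and the fingerprint dictionary is guarded by a lock. The names `crawl_threaded` and `get_links` are illustrative, not from the question's code; in the real crawler, `get_links` would wrap getPage/getParsedPage/getLinksFromParsedPage.

```python
import hashlib
import threading
from queue import Queue

def crawl_threaded(root_urls, get_links, num_workers=4):
    """Crawl from root_urls using num_workers threads.

    get_links(url) must return an iterable of URLs found on that page
    (in the question's code this would be the getPage -> getParsedPage
    -> getLinksFromParsedPage pipeline).
    """
    seen = {}                      # fingerprint -> url, like alreadySeenURLs
    seen_lock = threading.Lock()   # protects `seen` across threads
    work = Queue()                 # thread-safe queue of candidate URLs
    for url in root_urls:
        work.put(url)

    def worker():
        while True:
            url = work.get()
            try:
                fp = hashlib.sha1(url.encode("utf-8")).hexdigest()
                with seen_lock:
                    if fp in seen:
                        continue   # already crawled; finally still runs
                    seen[fp] = url
                # fetch outside the lock so workers can run in parallel
                for link in get_links(url):
                    work.put(link)
            finally:
                work.task_done()   # lets work.join() know we are done

    for _ in range(num_workers):
        threading.Thread(target=worker, daemon=True).start()

    work.join()                    # block until every queued URL is handled
    return list(seen.values())
```

Because the fingerprint check happens under the lock, a URL queued twice by different workers is still crawled only once; the daemon threads simply exit with the main program after `work.join()` returns.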