So far, my main problem is tracing back a large number of "User timeout caused connection failure" errors from my getLinkHTML or getImgHeader errbacks. I have tried limiting the number of connections I make with a semaphore, and even made some of my code sleep, to no avail, thinking I was flooding the connections. I also thought the problem might come from reactor.connectTCP, since the timeout errors appear about 30 seconds after running my scraper and connectTCP has a 30-second timeout. However, I changed the connectTCP code in the twisted module to 60 seconds, and the timeout errors still occurred roughly 30 seconds after starting. Of course, scraping the same sites with my traditional threaded scraper worked just fine, and a lot faster.
So what am I doing wrong? Also, please feel free to criticize my code, since I'm self-taught and I have some random questions scattered throughout it as well. Any advice is greatly appreciated!
from twisted.internet import defer
from twisted.internet import reactor
from twisted.web import client
from lxml import html
from StringIO import StringIO
from os import path
import re
start_url = "http://www.thesupermodelsgallery.com/"
directory = "/home/z0e/Pictures/Pix/Twisted"
min_img_size = 100000
#maximum <a> links to get from main gallery
max_gallery_links = 500
#maximum <a> links to get from subsequent gallery/pages
max_picture_links = 35

def parsePage(info):

    def linkFilter(link):
        #filter unwanted <a> links
        if link is not None:
            trade_match = re.search(r'&trade=', link)
            href_split = link.split('=')
            for i in range(len(href_split)):
                if 'www' in href_split[i] and i > 0:
                    link = href_split[i]
            end_pattern = r'\.(com|com/|net|net/|pro|pro/)$'
            end_match = re.search(end_pattern, link)
            p_pattern = r'(.*)&p'
            p_match = re.search(p_pattern, link)
            if end_match or trade_match:
                return None
            elif p_match:
                link = p_match.group(1)
                return link
            else:
                return link
        else:
            return None

    # better to handle a link with 'None' value through TypeError
    # exception or through if else statements? Compare linkFilter
    # vs. imgFilter functions

    def imgFilter(link):
        #filter <img> links to retain only .jpg
        try:
            jpg_match = re.search(r'.jpg', link)
            if jpg_match is not None:
                return link
            else:
                return None
        except TypeError:
            return None
    link_num = 0
    gallery_flag = None
    info['level'] += 1
    if info['page'] == '':
        return None
    # use lxml to parse and get document root
    tree = html.parse(StringIO(info['page']))
    root = tree.getroot()
    root.make_links_absolute(info['url'])
    # info['level'] = 1 corresponds to first recursive layer (i.e. main gallery page)
    # info['level'] > 1 will be all other <a> links from main gallery page
    if info['level'] == 1:
        link_cap = max_gallery_links
        gallery_flag = True
    else:
        link_cap = max_picture_links
        gallery_flag = False
    if info['level'] > 4:
        return None
    else:
        # get <img> links if page is not main gallery ('gallery_flag = False')
        # put <img> links back into main event loop to extract header information
        # to judge pictures by picture size (i.e. content-length)
        if not gallery_flag:
            for elem in root.iter('img'):
                # create copy of info so that dictionary no longer points to
                # previous dictionary, but new dictionary for each link
                info = info.copy()
                info['url'] = imgFilter(elem.get('src'))
                if info['url'] is not None:
                    reactor.callFromThread(getImgHeader, info)
        # get <a> link and put work back into main event loop (i.e. w/
        # reactor.callFromThread...) to getPage and then parse, continuing the
        # cycle of linking
        for elem in root.iter('a'):
            if link_num > link_cap:
                break
            else:
                img = elem.find('img')
                if img is not None:
                    link_num += 1
                    info = info.copy()
                    info['url'] = linkFilter(elem.get('href'))
                    if info['url'] is not None:
                        reactor.callFromThread(getLinkHTML, info)

def getLinkHTML(info):
    # get html from <a> link and then send page to be parsed in a thread
    d = client.getPage(info['url'])
    d.addCallback(parseThread, info)
    d.addErrback(failure, "getLink Failure: " + info['url'])

def parseThread(page, info):
    print 'parsethread:', info['url']
    info['page'] = page
    reactor.callInThread(parsePage, info)

def getImgHeader(info):
    # get <img> header information to filter images by image size
    agent = client.Agent(reactor)
    d = agent.request('HEAD', info['url'], None, None)
    d.addCallback(getImg, info)
    d.addErrback(failure, "getImgHeader Failure: " + info['url'])

def getImg(img_header, info):
    # download image only if image is above a certain threshold size
    img_size = img_header.headers.getRawHeaders('Content-Length')
    if img_size is not None and int(img_size[0]) > min_img_size:
        img_name = ''.join(map(urlToName, info['url']))
        client.downloadPage(info['url'], path.join(directory, img_name))
    else:
        img_header, link = None, None #Does this help garbage collecting?

def urlToName(char):
    #convert all unwanted characters to '-' from url and use as file name
    if char in '/\?|<>"':
        return '-'
    else:
        return char

def failure(error, url):
    print error
    print url

def main():
    info = dict()
    info['url'] = start_url
    info['level'] = 0
    reactor.callWhenRunning(getLinkHTML, info)
    reactor.suggestThreadPoolSize(2)
    reactor.run()

if __name__ == "__main__":
    main()
Answer (score: 2)
First, consider not writing this code at all. Take a look at scrapy as a solution to your needs. People have already put effort into making it perform well, and if it does need improvement, everyone in the community benefits when you improve it.
Next, the indentation in your code listing unfortunately got mangled, which makes it hard to really see what the code is doing. Hopefully the following still makes sense, but you should try to correct the listing so that it accurately reflects what you're doing, and be sure to double-check code listings in future questions.
As for what your code is doing that keeps it from being fast, here are a few thoughts.
There is no limit on the number of outstanding HTTP requests in your program. Without knowing what HTML you're actually parsing, I don't know whether this is really a problem, but if you end up issuing more than 20 or 30 HTTP requests at a time, it's quite likely you'll overload your network. With TCP, that usually means the connection setup will not succeed (certain setup packets get lost, and there is a limit on how many times they will be retried). Since you mention a lot of connection timeout errors, I suspect this is what's happening.
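As a rough illustration (not part of the original answer), one way to check how many requests are in flight at once is to wrap getPage with a counter; countedGetPage and outstanding are made-up names for this sketch:

from twisted.web import client

outstanding = [0]

def countedGetPage(url):
    # count the request, fetch the page, and decrement the count when the
    # Deferred fires, whether it succeeds or fails
    outstanding[0] += 1
    print 'outstanding requests:', outstanding[0]
    d = client.getPage(url)
    def done(result):
        outstanding[0] -= 1
        return result
    d.addBoth(done)
    return d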
Consider how many HTTP requests the threaded version of your program issues at a time. Does the Twisted version potentially issue more? If so, try imposing a limit on this. Something like twisted.internet.defer.DeferredSemaphore may be an easy way to impose that limit (although it's far from the best way, so if it helps you may then want to start looking for better ways to impose the limit - but if the limit doesn't help, there's no point investing a lot of effort in a better limiting mechanism).
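For illustration (this sketch is mine, not from the answer), here is roughly how a DeferredSemaphore could cap concurrency; the limit of 10 and the limitedGetPage helper name are assumptions:

from twisted.internet import defer
from twisted.web import client

max_concurrent = 10  # assumed limit; tune it for your network
request_semaphore = defer.DeferredSemaphore(max_concurrent)

def limitedGetPage(url):
    # run() acquires a token, calls getPage, and releases the token when
    # the returned Deferred fires, whether it succeeds or fails
    return request_semaphore.run(client.getPage, url)

Callers would use limitedGetPage(info['url']) wherever the scraper currently calls client.getPage directly, so no more than max_concurrent requests are ever in flight.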
Next, by capping the reactor thread pool at a maximum of 2 threads, you are severely hampering your ability to resolve names. By default, name resolution (i.e. DNS) is done using the reactor thread pool. You have a couple of options here. I'm assuming there is a good reason you want to limit parsing to two concurrent threads.
First, you could leave the reactor thread pool alone and create your own thread pool for parsing; see twisted.python.threadpool.ThreadPool. You can set the maximum of that other pool to 2 to get the parsing behavior you want, and the reactor remains free to use as many threads as it wants for name resolution.
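A minimal sketch of that first option, assuming a dedicated pool of 2 threads to match the question:

from twisted.python.threadpool import ThreadPool
from twisted.internet import reactor

parse_pool = ThreadPool(minthreads=1, maxthreads=2, name='parser')
parse_pool.start()
# stop the pool when the reactor shuts down so the process can exit cleanly
reactor.addSystemEventTrigger('during', 'shutdown', parse_pool.stop)

# then, instead of reactor.callInThread(parsePage, info), the scraper
# would call parse_pool.callInThread(parsePage, info)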
Second, you can keep the reactor thread pool size lowered and also configure the reactor not to use threads for name resolution. twisted.names.client.createResolver will give you such a name resolver, and reactor.installResolver lets you tell the reactor to use it instead of the default.
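A sketch of that second option (assuming a Twisted version where these APIs are available):

from twisted.names import client as names_client
from twisted.internet import reactor

# install a resolver that does DNS inside the reactor rather than in the
# thread pool, so the small pool is left entirely to the parsing work
reactor.installResolver(names_client.createResolver())
reactor.suggestThreadPoolSize(2)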