I'm having a hard time getting Python's urllib2 to make asynchronous/threaded HTTPS requests.
Does anybody have a basic example implementing urllib2.Request, urllib2.build_opener, and a subclass of urllib2.HTTPSHandler?
Thanks!
Answer 0 (score: 10)
The code below performs 7 HTTP requests asynchronously at the same time. It does not use threads; instead, it uses asynchronous networking with the Twisted library.
from twisted.web import client
from twisted.internet import reactor, defer

urls = [
    'http://www.python.org',
    'http://stackoverflow.com',
    'http://www.twistedmatrix.com',
    'http://www.google.com',
    'http://launchpad.net',
    'http://github.com',
    'http://bitbucket.org',
]

def finish(results):
    # gatherResults delivers every page body at once, in list order
    for result in results:
        print 'GOT PAGE', len(result), 'bytes'
    reactor.stop()

waiting = [client.getPage(url) for url in urls]
defer.gatherResults(waiting).addCallback(finish)

reactor.run()
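If any single request fails, gatherResults fires its errback and finish never runs; a minimal sketch (assuming the same getPage API as above, with an illustrative on_error helper) that absorbs per-URL failures:

def on_error(failure, url):
    # substitute an empty body so gatherResults still fires its callback
    print 'FAILED', url, failure.getErrorMessage()
    return ''

waiting = [client.getPage(url).addErrback(on_error, url) for url in urls]
defer.gatherResults(waiting).addCallback(finish)
reactor.run()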
Answer 1 (score: 8)
Here's a very simple approach that involves a handler for urllib2; you can find it here: http://pythonquirks.blogspot.co.uk/2009/12/asynchronous-http-request.html
#!/usr/bin/env python

import urllib2
import threading

class MyHandler(urllib2.HTTPHandler):
    def http_response(self, req, response):
        # called for every HTTP response the opener receives
        print "url: %s" % (response.geturl(),)
        print "info: %s" % (response.info(),)
        for l in response:
            print l
        return response

# run the opener in a background thread so the main thread keeps going
o = urllib2.build_opener(MyHandler())
t = threading.Thread(target=o.open, args=('http://www.google.com/',))
t.start()
print "I'm asynchronous!"

t.join()
print "I've ended!"
Answer 2 (score: 5)
Here is an example using urllib2 (with HTTPS) and threads. Each thread loops through a list of URLs and retrieves the resource.
import itertools
import urllib2
from threading import Thread

THREADS = 2
URLS = (
    'https://foo/bar',
    'https://foo/baz',
)

def main():
    for _ in range(THREADS):
        t = Agent(URLS)
        t.start()

class Agent(Thread):
    def __init__(self, urls):
        Thread.__init__(self)
        self.urls = urls

    def run(self):
        # cycle endlessly over the URL list, fetching each in turn
        urls = itertools.cycle(self.urls)
        while True:
            data = urllib2.urlopen(urls.next()).read()

if __name__ == '__main__':
    main()
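The run loop above discards what it reads; a hedged variation (CollectingAgent and results are illustrative names) that hands each body back to the main thread on a Queue.Queue:

import Queue

results = Queue.Queue()

class CollectingAgent(Agent):
    def run(self):
        urls = itertools.cycle(self.urls)
        while True:
            # each fetched body goes on the queue for the main thread to consume
            results.put(urllib2.urlopen(urls.next()).read())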
Answer 3 (score: 1)
You can use asynchronous I/O to do this.
GRequests allows you to use Requests with Gevent to make asynchronous HTTP requests easily.
import grequests

urls = [
    'http://www.heroku.com',
    'http://tablib.org',
    'http://httpbin.org',
    'http://python-requests.org',
    'http://kennethreitz.com'
]

rs = (grequests.get(u) for u in urls)
grequests.map(rs)
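grequests.map returns the responses in request order (None for any request that failed); a short sketch, assuming the standard Requests response attributes, to inspect each result:

rs = (grequests.get(u) for u in urls)
for response in grequests.map(rs):
    if response is not None:
        print response.status_code, response.url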
Answer 4 (score: 0)
Here is the code from eventlet:

urls = ["http://www.google.com/intl/en_ALL/images/logo.gif",
        "https://wiki.secondlife.com/w/images/secondlife.jpg",
        "http://us.i1.yimg.com/us.yimg.com/i/ww/beta/y3.gif"]

import eventlet
from eventlet.green import urllib2  # green urllib2 yields to other coroutines on I/O

def fetch(url):
    return urllib2.urlopen(url).read()

pool = eventlet.GreenPool()

for body in pool.imap(fetch, urls):
    print "got body", len(body)