Multithreading while opening URLs

Date: 2014-03-08 09:47:31

Tags: python multithreading multiprocess

I wrote a script that checks whether a URL requires WWW basic authentication and prints the result for the user:

#!/usr/bin/python

# Importing libraries
from urllib2 import urlopen, HTTPError
import socket
import urllib2
import threading
import time

# Setting up variables
url = open("oo.txt",'r')
response = None
start = time.time()

# Executing commands
for line in url:
    try:
        response = urlopen(line, timeout=1)
    except HTTPError as exc:
        # A 401 unauthorized will raise an exception
        response = exc
    except socket.timeout:
        print ("{0} | Request timed out !!".format(line))
    except urllib2.URLError:
        print ("{0} | Access error !!".format(line))

    auth = response and response.info().getheader('WWW-Authenticate')
    if auth and auth.lower().startswith('basic'):
        print "requires basic authentication"
    elif socket.timeout or urllib2.URLError:
        print "Yay"
    else:
        print "does not require basic authentication"

print "Elapsed Time: %s" % (time.time() - start)

I need some help editing this script: I want it to check the URLs ten at a time and write the results for all of them to a text file in one go. I have read about multithreading and multiprocessing, but I could not find an example simple enough to adapt to my code.

When a timeout or URL error occurs, the output is wrong: the script splits the result across two lines:

http://www.test.test
 | Access error !!

I want it on a single line. Why is it being split like that?
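(The likely cause: each `line` yielded by iterating over a file keeps its trailing newline character, so the URL and the message end up on separate lines. A minimal illustration, using a made-up URL:)

```python
line = "http://www.test.test\n"  # what "for line in url" yields from the file

# The embedded newline pushes " | Access error !!" onto the next line:
print("{0} | Access error !!".format(line))

# Stripping the newline keeps everything on one line:
print("{0} | Access error !!".format(line.rstrip('\n')))
```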

Any help with this?

Thanks in advance.

1 Answer:

Answer 0 (score: 1)

The concurrent.futures module makes working with concurrency in Python very easy. Define a function check_url that should be called for each URL, then use the executor's map function to apply that function to each URL in parallel and iterate over the return values.

#! /usr/bin/env python3

import concurrent.futures
import urllib.error
import urllib.request
import socket

def load_urls(pathname):
    # Read one URL per line, stripping the trailing newline
    with open(pathname, 'r') as f:
        return [line.rstrip('\n') for line in f]

class BasicAuth(Exception): pass

class CheckBasicAuthHandler(urllib.request.BaseHandler):
    # Raise BasicAuth on a 401 whose challenge uses the Basic scheme
    def http_error_401(self, req, fp, code, msg, hdrs):
        if hdrs.get('WWW-Authenticate', '').lower().startswith('basic'):
            raise BasicAuth()
        return None

def check_url(url):
    # Open the URL through the custom handler; map failures to result strings
    try:
        opener = urllib.request.build_opener(CheckBasicAuthHandler())
        with opener.open(url, timeout=1) as u:
            return 'requires no authentication'
    except BasicAuth:
        return 'requires basic authentication'
    except socket.timeout:
        return 'request timed out'
    except urllib.error.URLError as e:
        return 'access error ({!r})'.format(e.reason)

if __name__ == '__main__':
    urls = load_urls('/tmp/urls.txt')
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
        for url, result in zip(urls, executor.map(check_url, urls)):
            print('{}: {}'.format(url, result))
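If you also want the results written to a text file in one go, as the question asks, the final print loop can be pointed at a file instead of stdout. A sketch, assuming a filename like results.txt is acceptable:

```python
def write_results(pairs, pathname='results.txt'):
    # Write every "url: result" pair to the file in a single pass
    with open(pathname, 'w') as out:
        out.writelines('{}: {}\n'.format(url, result) for url, result in pairs)

# With the executor from the answer, replace the print loop with:
# write_results(zip(urls, executor.map(check_url, urls)))
```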