How many network ports does Linux allow Python to use?

Date: 2015-05-08 17:59:51

Tags: python linux multithreading python-2.7

So I have been trying to multi-thread some internet connections in Python. I have been using the multiprocessing module so I can get around the Global Interpreter Lock. But it seems that the system only gives one open connection port to Python, or at least it only allows one connection to happen at once. Here is an example of what I am talking about.

*Note that this is running on a Linux server.

from multiprocessing import Process, Queue
import urllib
import random

# Generate 10,000 random urls to test and put them in the queue
queue = Queue()
for each in range(10000):
    rand_num = random.randint(1000,10000)
    url = ('http://www.' + str(rand_num) + '.com')
    queue.put(url)

# Main function for checking to see if generated url is active
def check(q):
    while True:
        try:
            url = q.get(False)
            try:
                request = urllib.urlopen(url)
                del request
                print url + ' is an active url!'
            except:
                print url + ' is not an active url!'
        except:
            if q.empty():
                break

# Then start all the processes (50)
for thread in range(50):
    task = Process(target=check, args=(queue,))
    task.start()

So if you run this, you will notice that it starts 50 instances of the function but only runs one at a time. You may think that the Global Interpreter Lock is doing this, but it isn't. Try changing the function to a math function instead of a network request and you will see all 50 threads run simultaneously.

So do I have to work with sockets? Or is there something I can do that will give Python access to more ports? Or is there something I am not seeing? Let me know what you think! Thanks!

*Edit

So I wrote this script to test things better, using the requests library. It seems as though I had not tested it like this before. (I had mainly been using urllib and urllib2.)

from multiprocessing import Process, Queue
from threading import Thread
from Queue import Queue as Q
import requests
import time

# A main timestamp
main_time = time.time()

# Generate 100 urls to test and put them in the queue
queue = Queue()
for each in range(100):
    url = ('http://www.' + str(each) + '.com')
    queue.put(url)

# Timer queue
time_queue = Queue()

# Main function for checking to see if generated url is active
def check(q, t_q): # args are queue and time_queue
    while True:
        try:
            url = q.get(False)
            # Make a timestamp
            t = time.time()
            try:
                request = requests.head(url, timeout=5)
                t = time.time() - t
                t_q.put(t)
                del request
            except:
                t = time.time() - t
                t_q.put(t)
        except:
            break

# Then start all the processes (20)
thread_list = []
for thread in range(20):
    task = Process(target=check, args=(queue, time_queue))
    task.start()
    thread_list.append(task)

# Join all the processes so the main process doesn't quit
for each in thread_list:
    each.join()
main_time_end = time.time()

# Put the timerQueue into a list to get the average
time_queue_list = []
while True:
    try:
        time_queue_list.append(time_queue.get(False))
    except:
        break

# Results of the time
average_response = sum(time_queue_list) / float(len(time_queue_list))
total_time = main_time_end - main_time
line =  "Multiprocessing: Average response time: %s sec. -- Total time: %s sec." % (average_response, total_time)
print line

# A main timestamp
main_time = time.time()

# Generate 100 urls to test and put them in the queue
queue = Q()
for each in range(100):
    url = ('http://www.' + str(each) + '.com')
    queue.put(url)

# Timer queue
time_queue = Queue()

# Main function for checking to see if generated url is active
def check(q, t_q): # args are queue and time_queue
    while True:
        try:
            url = q.get(False)
            # Make a timestamp
            t = time.time()
            try:
                request = requests.head(url, timeout=5)
                t = time.time() - t
                t_q.put(t)
                del request
            except:
                t = time.time() - t
                t_q.put(t)
        except:
            break

# Then start all the threads (20)
thread_list = []
for thread in range(20):
    task = Thread(target=check, args=(queue, time_queue))
    task.start()
    thread_list.append(task)

# Join all the threads so the main process doesn't quit
for each in thread_list:
    each.join()
main_time_end = time.time()

# Put the timerQueue into a list to get the average
time_queue_list = []
while True:
    try:
        time_queue_list.append(time_queue.get(False))
    except:
        break

# Results of the time
average_response = sum(time_queue_list) / float(len(time_queue_list))
total_time = main_time_end - main_time
line =  "Standard Threading: Average response time: %s sec. -- Total time: %s sec." % (average_response, total_time)
print line

# Do the same thing all over again but this time do each url at a time
# A main timestamp
main_time = time.time()

# Generate 100 urls and test them
timer_list = []
for each in range(100):
    url = ('http://www.' + str(each) + '.com')
    t = time.time()
    try:
        request = requests.head(url, timeout=5)
        timer_list.append(time.time() - t)
    except:
        timer_list.append(time.time() - t)
main_time_end = time.time()

# Results of the time
average_response = sum(timer_list) / float(len(timer_list))
total_time = main_time_end - main_time
line = "Not using threads: Average response time: %s sec. -- Total time: %s sec." % (average_response, total_time)
print line

As you can see, it is multithreading. Actually, most of my tests show that the threading module is actually faster than the multiprocessing module. (I don't understand why!) Here are some of my results.

Multiprocessing: Average response time: 2.40511314869 sec. -- Total time: 25.6876308918 sec.
Standard Threading: Average response time: 2.2179402256 sec. -- Total time: 24.2941861153 sec.
Not using threads: Average response time: 2.1740363431 sec. -- Total time: 217.404567957 sec.

This was done on my home network; the response times on my server are much faster. I think my question has been answered indirectly, since I was having the problem on a much more complex script. All of the suggestions helped me optimize it well. Thanks everyone!

3 Answers:

Answer 0 (score: 1)

  

"it starts 50 instances of the function but only runs one at a time"

You are misreading the results of htop. Only a few (if any) copies of python will be runnable at any particular instant; most of them will be blocked, waiting for network I/O.

The processes are, in fact, running in parallel.

  

"Try changing the function to a math function instead of a network request and you will see all 50 threads run simultaneously."

Changing the task to a math function merely illustrates the difference between CPU-bound (e.g. math) and IO-bound (e.g. urlopen) processes: the former are always runnable, the latter rarely are.
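To see the distinction concretely, here is a minimal sketch (not from the answer) that runs the same number of threads over a CPU-bound task and an IO-bound one. time.sleep() stands in for a blocked network read, since both release the GIL while waiting:

```python
import threading
import time

def cpu_task():
    # CPU-bound: holds the GIL while computing, so threads serialize.
    total = 0
    for i in range(10 ** 6):
        total += i

def io_task():
    # IO-bound: sleep(), like a blocked socket read, releases the GIL,
    # so all threads can wait concurrently.
    time.sleep(0.2)

def run_threads(target, n=4):
    start = time.time()
    threads = [threading.Thread(target=target) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

io_time = run_threads(io_task)
cpu_time = run_threads(cpu_task)
print("4 IO-bound threads:  %.2f s" % io_time)   # ~0.2 s, not 0.8 s
print("4 CPU-bound threads: %.2f s" % cpu_time)
```

The four sleeps overlap and finish in roughly 0.2 s total, while the CPU-bound loops take about as long as running them back to back, because only one thread can hold the GIL at a time.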

  

"It only prints one at a time. If it were actually running multiple processes, it would print many out at once."

It prints one at a time because you are writing lines to a terminal. Since the lines are indistinguishable, you cannot tell whether they were all written by one thread, or each written in turn by a separate thread.

Answer 1 (score: 0)

First of all, using multiprocessing to parallelize network I/O is overkill. Using the built-in threading module, or a lightweight greenlet library like gevent, is a much better option with less overhead. The GIL has nothing to do with blocking IO calls, so you don't have to worry about it here at all.

Secondly, if you are monitoring stdout, a simple way to see whether your subprocesses/threads/greenlets are running in parallel is to print something at the very beginning of the function, right after the subprocesses/threads/greenlets are spawned. For example, modify your check() function like this:

def check(q):
    print 'Start checking urls!'
    while True:
        ...

If your code is correct, you should see many Start checking urls! lines printed out before any of the url + ' is [not] an active url!' lines. It works on my machine, so it looks like your code is correct.
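A self-contained version of this diagnostic (a sketch using threading and time.sleep() in place of the real network call, since the URLs themselves don't matter) records the start/finish events in order and counts how many workers started before the first one finished:

```python
import threading
import time

events = []                 # ordered log of what each worker did
lock = threading.Lock()

def check(n):
    with lock:
        events.append('start %d' % n)   # the "Start checking urls!" print
    time.sleep(0.2)                     # stand-in for the blocking request
    with lock:
        events.append('done %d' % n)    # the "... is an active url!" print

threads = [threading.Thread(target=check, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Count the 'start' events that were logged before any 'done' event.
starts_before_first_done = 0
for e in events:
    if e.startswith('done'):
        break
    starts_before_first_done += 1
print('%d workers started before the first one finished' % starts_before_first_done)
```

Since starting ten threads takes milliseconds while each "request" takes 0.2 s, a parallel run logs (nearly) all ten starts before the first finish; a serial run would alternate start/done pairs.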

Answer 2 (score: 0)

Your problem seems to be related to the serial behavior of gethostbyname(3). This is discussed in this SO thread.
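If the resolver is the serialized bottleneck, one mitigation (a sketch of the idea, not from the answer) is to resolve each hostname only once and cache the result, so the serialized lookup happens once per host rather than once per request. The fake_resolver below is a hypothetical stand-in for socket.gethostbyname, used so the sketch runs without a network:

```python
import threading

class DNSCache(object):
    """Thread-safe memoizing wrapper around a resolver function."""

    def __init__(self, resolver):
        self._resolver = resolver
        self._cache = {}
        self._lock = threading.Lock()

    def lookup(self, host):
        with self._lock:
            if host in self._cache:
                return self._cache[host]
        # Only cache misses hit the (possibly serialized) resolver.
        # Two threads racing on the same first lookup may both miss;
        # that is harmless for a cache.
        addr = self._resolver(host)
        with self._lock:
            self._cache[host] = addr
        return addr

# Hypothetical stand-in resolver; in real code, pass socket.gethostbyname.
calls = []
def fake_resolver(host):
    calls.append(host)
    return '93.184.216.34'

cache = DNSCache(fake_resolver)
for _ in range(5):
    cache.lookup('www.example.com')
print('resolver called %d time(s) for 5 lookups' % len(calls))
```

Five lookups of the same host hit the underlying resolver only once; repeated requests to the same hosts no longer queue up behind gethostbyname.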

Try this code, which uses the Twisted asynchronous I/O library:

import random
import sys
from twisted.internet import reactor
from twisted.internet import defer
from twisted.internet.task import cooperate
from twisted.web import client

SIMULTANEOUS_CONNECTIONS = 25
# Generate 10,000 random urls to test and put them in the queue
pages = []
for each in range(10000):
    rand_num = random.randint(1000,10000)
    url = ('http://www.' + str(rand_num) + '.com')
    pages.append(url)

# Main function for checking to see if generated url is active
def check(page):
    def successback(data, page):
        print "{} is an active URL!".format(page)

    def errback(err, page):
        print "{} is not an active URL!; errmsg:{}".format(page, err.value)

    d = client.getPage(page, timeout=3) # timeout in seconds
    d.addCallback(successback, page)
    d.addErrback(errback, page)
    return d

def generate_checks(pages):
    for i in xrange(0, len(pages)):
        page = pages[i]
        #print "Page no. {}".format(i)
        yield check(page)

def work(pages):
    print "started work(): {}".format(len(pages))
    batch_size = len(pages) / SIMULTANEOUS_CONNECTIONS
    for i in xrange(0, len(pages), batch_size):
        task = cooperate(generate_checks(pages[i:i+batch_size]))

print "starting..."
reactor.callWhenRunning(work, pages)
reactor.run()