Starting a new process in __init__ (for a TCP listener / server)

Date: 2014-12-22 22:55:40

Tags: python tcp multiprocessing server

I am trying to run a new process for every new instance of the class Server. Each Server instance should listen on a specific port. So far I have this (simplified) code:

import multiprocessing
from socket import socket, AF_INET, SOCK_STREAM

class Server(object):

    def handle(self, connection, address):
        print("OK...connected...")
        try:
            while True:
                data = connection.recv(1024)
                if not data:
                    break
                connection.sendall(data)
        except Exception as e:
            print(e)
        finally:
            connection.close()

    def __init__(self, port, ip):
        self.port = port
        self.ip = ip
        self.socket = socket(AF_INET, SOCK_STREAM)
        self.socket.bind((self.ip, self.port))
        self.socket.listen(1)

        while True:
            print("Listening...")
            conn, address = self.socket.accept()
            process = multiprocessing.Process(target=self.handle, args=(conn, address))
            process.daemon = True
            process.start()

s1 = Server(9001,"127.0.0.1")
s2 = Server(9002,"127.0.0.1")

But when I run this script, only the first server, s1, runs and waits for connections. How can I get both servers to listen at the same time?

1 Answer:

Answer 0 (score: 1):

Your current server is effectively a SocketServer.ForkingTCPServer: it enters a tight loop in __init__, accepting new connections and forking a new child process for each incoming connection.

The problem is that __init__ never returns, so only one server is ever instantiated, only one socket gets bound, and only one port accepts new requests.
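
For comparison, roughly the same thing can be sketched with the standard library itself (assuming Python 3, where the module is named socketserver; ForkingTCPServer is POSIX-only). Each server gets its own accept loop by running serve_forever() in a thread:

import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    # handle() runs in a forked child for every accepted connection.
    def handle(self):
        while True:
            data = self.request.recv(1024)
            if not data:
                break
            self.request.sendall(data)

# One ForkingTCPServer per port; serve_forever() blocks, so each one
# runs in its own thread instead of inside __init__.
servers = [socketserver.ForkingTCPServer(("127.0.0.1", port), EchoHandler)
           for port in (9001, 9002)]
threads = [threading.Thread(target=s.serve_forever) for s in servers]
for t in threads:
    t.start()
for t in threads:
    t.join()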

The usual way to solve this kind of problem is to move the accept loop into a worker thread. That code would look something like this:

import multiprocessing
import threading
import socket

class Server(object):

    def handle(self, connection, address):
        print("OK...connected...")
        try:
            while True:
                data = connection.recv(1024)
                if data == "":
                    break
                connection.sendall(data)
        except Exception as e:
           print(e)
        finally:
            connection.close()
            print("Connection closed")

    def accept_forever(self):
        while True:
            # Accept a connection on the bound socket and fork a child process
            # to handle it.
            print("Waiting for connection...")
            conn, address = self.socket.accept()
            process = multiprocessing.Process(
                target=self.handle, args=(conn, address))
            process.daemon = True
            process.start()

            # Close the connection fd in the parent, since the child process
            # has its own reference.
            conn.close()

    def __init__(self, port, ip):
        self.port = port
        self.ip = ip
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.socket.bind((self.ip, self.port))
        self.socket.listen(1)

        # Spin up an acceptor thread
        self.worker = threading.Thread(target=self.accept_forever)
        self.worker.daemon = True
        self.worker.start()

    def join(self):
        # threading.Thread.join() is not interruptible, so tight loop
        # in a sleep-based join
        while self.worker.is_alive():
            self.worker.join(0.5)

# Create two servers that run in the background
s1 = Server(9001,"127.0.0.1")
s2 = Server(9002,"127.0.0.1")

# Wait for servers to shutdown
s1.join()
s2.join()

Note another change I snuck in here:

# Wait for servers to shutdown
s1.join()
s2.join()

Using the saved reference to the Server's acceptor worker, we call .join() from the main thread to keep it blocked while the servers run. Without this, your main program would exit almost immediately, because the worker's .daemon attribute is set.
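
To check that both servers really are accepting at the same time, a quick client sketch (assuming Python 3) might look like this:

import socket

for port in (9001, 9002):
    # Connect to each server, echo a short message, and print the reply.
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(b"hello")
        print(port, client.recv(1024))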

It is worth noting that this approach has a few quirks:

  1. Since the handler functions run in different processes, any data structures they jointly depend on have to be shared between them carefully, using Queue, Value, Pipe, and the other multiprocessing constructs (see the first sketch after this list).

  2. There is no rate limiting on active concurrent connections; creating a new process for every request can be expensive and gives your service an easy DoS vector (see the second sketch after this list).
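
For point 1, a minimal sketch of sharing data through a multiprocessing.Queue; the extra results parameter is hypothetical and would also have to be passed along when accept_forever spawns the process:

import multiprocessing

def handle(connection, address, results):
    # Echo as before, but also push each received chunk to the parent process.
    try:
        while True:
            data = connection.recv(1024)
            if not data:
                break
            connection.sendall(data)
            results.put((address, data))
    finally:
        connection.close()

results = multiprocessing.Queue()
# Spawned as:
#   multiprocessing.Process(target=handle, args=(conn, address, results))
# The parent can then drain the queue with results.get().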
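
For point 2, one possible sketch of capping the number of live handler processes; MAX_ACTIVE is a hypothetical limit and accept_bounded is a stand-in for accept_forever:

import time
import multiprocessing

MAX_ACTIVE = 50  # hypothetical cap; tune for your service

def accept_bounded(server_socket, handler):
    active = []
    while True:
        # Forget handlers that have already exited, and wait while at the cap.
        active = [p for p in active if p.is_alive()]
        while len(active) >= MAX_ACTIVE:
            time.sleep(0.1)
            active = [p for p in active if p.is_alive()]

        conn, address = server_socket.accept()
        process = multiprocessing.Process(target=handler, args=(conn, address))
        process.daemon = True
        process.start()
        conn.close()
        active.append(process)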