Uneven round-robin load distribution with Nginx in a container

Time: 2019-03-27 13:00:28

Tags: nginx containers load-balancing

Nginx load balancing (round robin) is uneven when running inside a container; when nginx runs directly on the VM, it is even.

I am trying out the gRPC streaming example application at https://github.com/grpc/grpc/tree/v1.19.0/examples/python/route_guide.

I have modified the server to start 4 gRPC server processes, and modified the client accordingly to spawn 4 separate processes that hit that server. From the client I only call RouteChat and have commented out the remaining calls. The client and the server run on different VMs on Google Cloud.

route_guide_client.py

...
def run():
    # NOTE(gRPC Python Team): .close() is possible on a channel and should be
    # used in circumstances in which the with statement does not fit the needs
    # of the code.
    with grpc.insecure_channel('10.128.15.199:9999') as channel:
        stub = route_guide_pb2_grpc.RouteGuideStub(channel)
        #print("-------------- GetFeature --------------")
        #guide_get_feature(stub)
        #print("-------------- ListFeatures --------------")
        #guide_list_features(stub)
        #print("-------------- RecordRoute --------------")
        #guide_record_route(stub)
        print("-------------- RouteChat --------------")
        guide_route_chat(stub)


if __name__ == '__main__':
    logging.basicConfig()
    procs = 4
    for i in range(procs):
        p = multiprocessing.Process(target=run)
        p.start()
....
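
One thing worth noting: the main block above only starts the four client processes. A minimal sketch that also waits for them before the parent exits (the join calls are the only addition to the stock example):

if __name__ == '__main__':
    logging.basicConfig()
    workers = [multiprocessing.Process(target=run) for _ in range(4)]
    for p in workers:
        p.start()           # each process opens its own channel inside run()
    for p in workers:
        p.join()            # wait until all four RouteChat streams finish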

route_guide_server.py

........
def serve(port, i):
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    route_guide_pb2_grpc.add_RouteGuideServicer_to_server(
        RouteGuideServicer(), server)
    procs = 4
    # Pin this server process to a single CPU core.
    os.system("taskset -p -c %d %d" % (i % procs, os.getpid()))
    server.add_insecure_port('[::]:%d' % port)
    print("Server starting in port " + str(port) + " with cpu " + str(i % procs))
    server.start()
    try:
        while True:
            time.sleep(_ONE_DAY_IN_SECONDS)
    except KeyboardInterrupt:
        server.stop(0)


if __name__ == '__main__':
    logging.basicConfig()
    ports = []
    for i in range(9000, 9004):
        ports.append(str(i))
    port_pool = cycle(ports)
    procs = 4
    for i in range(procs):
        p = multiprocessing.Process(target=serve, args=(int(next(port_pool)), i))
        p.start()
.........
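
The "Serving route chat request using <pid>" lines in the outputs below are printed when a RouteChat call arrives (the print itself is not part of the snippet above). A sketch of such a handler, assuming it keeps the stock example's streaming logic:

class RouteGuideServicer(route_guide_pb2_grpc.RouteGuideServicer):
    ...
    def RouteChat(self, request_iterator, context):
        # Log which server process (and therefore which backend port) got the call.
        print("Serving route chat request using %d" % os.getpid())
        prev_notes = []
        for new_note in request_iterator:
            for prev_note in prev_notes:
                if prev_note.location == new_note.location:
                    yield prev_note
            prev_notes.append(new_note)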

Nginx.conf

user www-data;
worker_processes 1;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent"';

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    upstream backend {
        server localhost:9000 weight=1;
        server localhost:9001 weight=1;
        server localhost:9002 weight=1;
        server localhost:9003 weight=1;
    }

    server {
        listen 6666 http2;   # <<< on the container this is set to 9999

        access_log /tmp/access.log main;
        error_log /tmp/error.log error;

        proxy_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header Host $http_host;

        location / {
            grpc_pass grpc://backend;
        }
    }
}
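
To rule out a dead backend, nginx can be bypassed and each port called directly from the server side; a quick diagnostic sketch, assuming the generated route_guide_pb2 / route_guide_pb2_grpc modules are importable:

import grpc
import route_guide_pb2
import route_guide_pb2_grpc

# Call every backend directly (no proxy) to confirm each process answers.
for port in range(9000, 9004):
    with grpc.insecure_channel('localhost:%d' % port) as channel:
        stub = route_guide_pb2_grpc.RouteGuideStub(channel)
        feature = stub.GetFeature(
            route_guide_pb2.Point(latitude=409146138, longitude=-746188906))
        print(port, '->', feature.name)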

Scenario 1: server running on a VM

$ python route_guide_server.py
pid 14287's current affinity list: 0-5
pid 14287's new affinity list: 2
pid 14288's current affinity list: 0-5
pid 14288's new affinity list: 3
pid 14286's current affinity list: 0-5
pid 14286's new affinity list: 1
pid 14285's current affinity list: 0-5
pid 14285's new affinity list: 0
Server starting in port 9003 with cpu 3
Server starting in port 9002 with cpu 2
Server starting in port 9001 with cpu 1
Server starting in port 9000 with cpu 0

Now I run the client on the other VM.

$ python3 route_guide_client.py
...........
.......

On the server we see that the requests are distributed evenly across all 4 server processes running on the different ports. For example, the server-side output for the client invocation above is:

Serving route chat request using 14285 << These are PIDs of processes that are bound to different server ports.
Serving route chat request using 14286
Serving route chat request using 14287
Serving route chat request using 14288

Scenario 2: server running in a container

I now spin up a container on the server VM, install and configure nginx inside the container in the same way, and use the same nginx configuration file, except for the port the nginx server listens on.

$ sudo docker run -p 9999:9999 --cpus=4 grpcnginx:latest
...............

root@b81bb72fcab2:/# python3 route_guide_server.py
pid 71's current affinity list: 0-5
pid 71's new affinity list: 0
Server starting in port 9000 with cpu 0
pid 74's current affinity list: 0-5
pid 74's new affinity list: 3
Server starting in port 9003 with cpu 3
pid 72's current affinity list: 0-5
pid 72's new affinity list: 1
pid 73's current affinity list: 0-5
pid 73's new affinity list: 2
Server starting in port 9001 with cpu 1
Server starting in port 9002 with cpu 2
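
Note that --cpus=4 limits CPU time through the CFS quota but does not restrict which cores the container can see, which is why the processes above still report a current affinity list of 0-5 and the taskset pinning behaves the same as on the VM. A quick check from inside the container (sketch):

import multiprocessing
import os

# Cores this process may be scheduled on (what taskset operates against).
print("affinity:", sorted(os.sched_getaffinity(0)))
# Total cores visible in the container; --cpus=4 does not shrink this.
print("cpu_count:", multiprocessing.cpu_count())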

On the client VM:

$ python3 route_guide_client.py
............
..............

Now on the server we see that the requests are served by only 2 of the ports/processes:

Serving route chat request using 71
Serving route chat request using 72
Serving route chat request using 71
Serving route chat request using 72
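
To quantify the skew beyond eyeballing the console, the per-PID lines can be tallied; a sketch, assuming the server's stdout from each scenario is captured to a file (here called server.log, which is not part of the setup above):

import collections
import re

counts = collections.Counter()
with open('server.log') as f:   # hypothetical capture of the server's stdout
    for line in f:
        m = re.search(r'Serving route chat request using (\d+)', line)
        if m:
            counts[m.group(1)] += 1
# Prints one entry per PID: four roughly equal counts in the VM run,
# only PIDs 71 and 72 in the container run above.
print(counts)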

Any help with fixing the load distribution inside the container would be appreciated.

0 Answers:

No answers yet