As far as I know, multiple Node.js apps (and by extension, I assume, Meteor apps) can run on one server using Nginx. I have Nginx installed and running just fine on an Ubuntu server, and I can even get it to respond to requests and proxy them to one of my applications. However, I hit a wall when I tried to get Nginx to proxy traffic to a second application.
Some background:
My Nginx config:
upstream mydomain.com {
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}

# the nginx server instance
server {
    listen 0.0.0.0:80 default_server;
    access_log /var/log/nginx/mydomain.log;

    location /app2 {
        rewrite /app2/(.*) /$1 break;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://127.0.0.1:8002;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://127.0.0.1:8001;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
Any insight into what happens to traffic when it hits /app2 would be greatly appreciated!
Answer 0 (score: 27)
proxy_pass http://127.0.0.1:8002;    <-- these should be
proxy_pass http://my_upstream_name;  <-- this
Then:
upstream my_upstream_name {
    # Nginx does a round-robin load balance, so some users will connect to / and others to /app2
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}
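In the server block, each location should then reference the upstream by name instead of an individual address. A minimal sketch of the corrected directive (using the illustrative my_upstream_name above, not the questioner's actual upstream name):

location / {
    # hand requests to the named upstream instead of a single backend
    proxy_pass http://my_upstream_name;
}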
Some tips for controlling the proxy:
Have a look at the nginx docs.
Here we go:
weight = NUMBER - sets the weight of the server; if not set, the weight equals 1. Unbalances the default round robin.
max_fails = NUMBER - the number of failed attempts to communicate with the server within the time period (set by the fail_timeout parameter) after which it is considered down. If not set, the number of attempts is 1. A value of 0 turns this check off. What counts as a failure is defined by proxy_next_upstream or fastcgi_next_upstream (except http_404 errors, which do not count towards max_fails).
fail_timeout = TIME - the time during which max_fails failed attempts to communicate with the server must happen for the server to be considered down, and also the time for which the server will be considered down (before another attempt is made). If not set, the time is 10 seconds. fail_timeout has nothing to do with the upstream response time; use proxy_connect_timeout and proxy_read_timeout to control that.
down - marks the server as permanently offline, to be used with the directive ip_hash.
backup - (0.6.7 or later) only uses this server if all the non-backup servers are down or busy (cannot be used with the directive ip_hash).
A generic example:
upstream my_upstream_name {
    server backend1.example.com weight=5;
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server unix:/tmp/backend3;
}
# proxy_pass http://my_upstream_name;
This is what you need:
If you just want to control the load between the vhosts of one app:
upstream my_upstream_name {
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8081 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8082 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8083 backup;
    # proxy_pass http://my_upstream_name;
    # amazingness no.1: the keyword "backup" means that this server should only be used when the rest are non-responsive
}
If you have 2 or more apps: 1 upstream per app, for example:
upstream my_upstream_name {
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8081 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8082 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8083 backup;
}

upstream my_upstream_name_app2 {
    server 127.0.0.1:8084 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8085 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8086 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8087 backup;
}

upstream my_upstream_name_app3 {
    server 127.0.0.1:8088 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8089 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8090 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8091 backup;
}
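Wiring these back into the server block, each location would then point at its own upstream by name. A minimal sketch, using the illustrative upstream names above and the /app2 rewrite from the question (the Upgrade/Connection headers are kept so Meteor's WebSocket traffic still passes through):

server {
    listen 80 default_server;

    # /app2 has its prefix stripped and is balanced across the app2 upstream
    location /app2 {
        rewrite /app2/(.*) /$1 break;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_pass http://my_upstream_name_app2;
    }

    # everything else is balanced across the first app's upstream
    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_pass http://my_upstream_name;
    }
}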
Hope it helps.
Answer 1 (score: 0)
For people looking for an alternative to Nginx: install the cluster package for each Meteor app, and the package will handle load balancing automatically. https://github.com/meteorhacks/cluster
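The package would presumably be added to each app with the standard Meteor package command (a sketch; the meteorhacks:cluster package name is inferred from the repository URL above):

# add the cluster package to each Meteor app (assumed package name)
meteor add meteorhacks:cluster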
How to set it up:
# You can use your existing MONGO_URL for this
export CLUSTER_DISCOVERY_URL=mongodb://host:port/db
# this is the direct URL to your server (it could be a private URL)
export CLUSTER_ENDPOINT_URL=http://ipaddress
# mark your server as a web service (you can set any name for this)
export CLUSTER_SERVICE=web
An example setup:
{
  "ip-1": {
    "endpointUrl": "http://ip-1",
    "balancerUrl": "https://one.bulletproofmeteor.com"
  },
  "ip-2": {
    "endpointUrl": "http://ip-2",
    "balancerUrl": "https://two.bulletproofmeteor.com"
  },
  "ip-3": {
    "endpointUrl": "http://ip-3",
    "balancerUrl": "https://three.bulletproofmeteor.com"
  },
  "ip-4": {
    "endpointUrl": "http://ip-4"
  }
}