Masking IP addresses with Nginx and node-http-proxy

Date: 2018-04-26 21:33:40

Tags: node.js docker nginx reverse-proxy node-http-proxy

First of all, I want to apologize for this long post!

I almost have everything figured out! What I want to do is use node-http-proxy to mask a series of dynamic IPs that I get from a MySQL database. I do this by redirecting a subdomain to node-http-proxy and resolving the target from there. I was able to do this locally without any problems.
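A simplified sketch of the routing idea (the real mapping comes from the MySQL lookup; the function name and the table here are illustrative only):

```javascript
// Map the numeric subdomain from the Host header to a backend IP.
// In the real app ipBySubdomain would be built from the MySQL query.
function resolveTarget(hostHeader, ipBySubdomain) {
  const subdomain = hostHeader.split('.')[0]; // e.g. '42' from '42.example.co'
  return ipBySubdomain[subdomain] || null;    // null for unknown subdomains
}

const table = { '42': '128.29.41.1' };
console.log(resolveTarget('42.example.co', table)); // '128.29.41.1'
```

The resolved value is then handed to `proxy.web` as the target.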

Remotely, it sits behind an Nginx web server with HTTPS enabled (I have a wildcard certificate issued through Let's Encrypt, plus a Comodo SSL certificate for the domain). I managed to configure Nginx so that it passes requests along to node-http-proxy without issue. The only problem is that the latter gives me:

 The error is { Error: connect ECONNREFUSED 127.0.0.1:80
     at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
   errno: 'ECONNREFUSED',
   code: 'ECONNREFUSED',
   syscall: 'connect',
   address: '127.0.0.1',
   port: 80 }

whenever I call:

proxy.web(req, res, { target, ws: true })

And I don't know whether the problem is the remote address (unlikely, since I can reach it from a secondary device), or that I have misconfigured Nginx (very likely). It is also possible that it conflicts with Nginx listening on port 80. But I don't understand why node-http-proxy would be connecting through port 80 at all.

Some other information: there is also a Ruby on Rails application running in parallel. node-http-proxy, Nginx, and Ruby on Rails each run in their own Docker container. I don't think this is a Docker problem, since I was able to test locally without any issues.

This is my current nginx.conf (I have replaced my domain with example.com for security reasons):

`server_name "~^\d+\.example\.co$";` is the server block I want to redirect to node-http-proxy, while example.com is where the Ruby on Rails application lives.

# https://codepany.com/blog/rails-5-and-docker-puma-nginx/
# This is the port the app is currently exposing.
# Please, check this: https://gist.github.com/bradmontgomery/6487319#gistcomment-1559180  

upstream puma_example_docker_app {
  server app:5000;
}


server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
    # Enable once you solve wildcard subdomain issue.
    return 301 https://$host$request_uri;
}

server {

  server_name "~^\d+\.example\.co$";

  # listen 80;
  listen 443 ssl http2;
  listen [::]:443 ssl http2;

  # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
  # Created by Certbot
  ssl_certificate /etc/letsencrypt/live/example.co/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.co/privkey.pem;
  # include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; 

    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
  # ssl_certificate /etc/ssl/certs/ssl-bundle.crt;
  # ssl_certificate_key /etc/ssl/private/example.co.key;
  ssl_session_timeout 1d;
  ssl_session_cache shared:SSL:50m;
  ssl_session_tickets off;

  # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
  # This is generated by ourselves. 
  # ssl_dhparam /etc/ssl/certs/dhparam.pem;

  # intermediate configuration. tweak to your needs.
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
  ssl_prefer_server_ciphers on;

  # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
  add_header Strict-Transport-Security max-age=15768000;

  # OCSP Stapling ---
  # fetch OCSP records from URL in ssl_certificate and cache them
  ssl_stapling on;
  ssl_stapling_verify on;

  ## verify chain of trust of OCSP response using Root CA and Intermediate certs
  ssl_trusted_certificate /etc/ssl/certs/trusted.crt;




  location / {
    # https://www.digitalocean.com/community/questions/error-too-many-redirect-on-nginx
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;

    proxy_pass http://ipmask_docker_app;
    # limit_req zone=one;
    access_log /var/www/example/log/nginx.access.log;
    error_log /var/www/example/log/nginx.error.log;
  }
}





# SSL configuration was obtained through Mozilla's 
# https://mozilla.github.io/server-side-tls/ssl-config-generator/
server {

server_name localhost example.co www.example.co; #puma_example_docker_app;

# listen 80;
  listen 443 ssl http2;
  listen [::]:443 ssl http2;

  # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
  # Created by Certbot
  # ssl_certificate /etc/letsencrypt/live/example.co/fullchain.pem;
  #ssl_certificate_key /etc/letsencrypt/live/example.co/privkey.pem;
  # include /etc/letsencrypt/options-ssl-nginx.conf;
  # ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; 

    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
  ssl_certificate /etc/ssl/certs/ssl-bundle.crt;
  ssl_certificate_key /etc/ssl/private/example.co.key;
  ssl_session_timeout 1d;
  ssl_session_cache shared:SSL:50m;
  ssl_session_tickets off;

  # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
  # This is generated by ourselves. 
  ssl_dhparam /etc/ssl/certs/dhparam.pem;

  # intermediate configuration. tweak to your needs.
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
  ssl_prefer_server_ciphers on;

  # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
  add_header Strict-Transport-Security max-age=15768000;

  # OCSP Stapling ---
  # fetch OCSP records from URL in ssl_certificate and cache them
  ssl_stapling on;
  ssl_stapling_verify on;

  ## verify chain of trust of OCSP response using Root CA and Intermediate certs
  ssl_trusted_certificate /etc/ssl/certs/trusted.crt;

  # resolver 127.0.0.1;
  # https://support.comodo.com/index.php?/Knowledgebase/Article/View/1091/37/certificate-installation--nginx

  # The above was generated through Mozilla's SSL Config Generator
  # https://mozilla.github.io/server-side-tls/ssl-config-generator/

  # This is important for Rails to accept the headers, otherwise it won't work:
  # AKA. => HTTP_AUTHORIZATION_HEADER Will not work!
  underscores_in_headers on; 

  client_max_body_size 4G;
  keepalive_timeout 10;

  error_page 500 502 504 /500.html;
  error_page 503 @503;


  root /var/www/example/public;
  try_files $uri/index.html $uri @puma_example_docker_app;

  # This is a new configuration and needs to be tested.
  # Final slashes are critical
  # https://stackoverflow.com/a/47658830/1057052
  location /kibana/ {
      auth_basic "Restricted";
      auth_basic_user_file /etc/nginx/.htpasswd;
      #rewrite ^/kibanalogs/(.*)$ /$1 break;
      proxy_set_header X-Forwarded-Proto https;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_redirect off;

      proxy_pass http://kibana:5601/;

  }


  location @puma_example_docker_app {
    # https://www.digitalocean.com/community/questions/error-too-many-redirect-on-nginx
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;

    proxy_pass http://puma_example_docker_app;
    # limit_req zone=one;
    access_log /var/www/example/log/nginx.access.log;
    error_log /var/www/example/log/nginx.error.log;
  }

  location ~ ^/(assets|images|javascripts|stylesheets)/   {    
      try_files $uri @rails;     
      access_log off;    
      gzip_static on; 

      # to serve pre-gzipped version     
      expires max;    
      add_header Cache-Control public;     

      add_header Last-Modified "";    
      add_header ETag "";    
      break;  
   } 

  location = /50x.html {
    root html;
  }

  location = /404.html {
    root html;
  }

  location @503 {
    error_page 405 = /system/maintenance.html;
    if (-f $document_root/system/maintenance.html) {
      rewrite ^(.*)$ /system/maintenance.html break;
    }
    rewrite ^(.*)$ /503.html break;
  }

  if ($request_method !~ ^(GET|HEAD|PUT|PATCH|POST|DELETE|OPTIONS)$ ){
    return 405;
  }

  if (-f $document_root/system/maintenance.html) {
    return 503;
  }

  location ~ \.(php|html)$ {
    return 405;
  }
}

My current docker-compose file:

# This is a docker compose file that will pull from the private
# repo and will use all the images. 
# This will be an equivalent for production.

version: '3.2'
services:
  # No need for the database in production, since it will be connecting to one
  # Use this while you solve Database problems
  app:
    image: myrepo/rails:latest
    restart: always
    environment:
      RAILS_ENV: production
      # What this is going to do is that all the logging is going to be printed into the console. 
      # Use this with caution as it can become very verbose and hard to read.
      # This can then be read by using docker-compose logs app.
      RAILS_LOG_TO_STDOUT: 'true'
      # RAILS_SERVE_STATIC_FILES: 'true'
    # The first command, the remove part, what it does is that it eliminates a file that 
    # tells rails and puma that an instance is running. This was causing issues, 
    # https://github.com/docker/compose/issues/1393
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -e production -p 5000 -b '0.0.0.0'"
    # volumes:
    #   - /var/www/cprint
    ports:
      - "5000:5000"
    expose:
      - "5000"
    networks:
      - elk
    links:
      - logstash
  # Uses Nginx as a web server (Access everything through http://localhost)
  # https://stackoverflow.com/questions/30652299/having-docker-access-external-files
  # 
  web:
    image: myrepo/nginx:latest
    depends_on:
      - elasticsearch
      - kibana
      - app
      - ipmask
    restart: always
    volumes:
      # https://stackoverflow.com/a/48800695/1057052
      # - "/etc/ssl/:/etc/ssl/"
      - type: bind
        source: /etc/ssl/certs
        target: /etc/ssl/certs
      - type: bind
        source: /etc/ssl/private/
        target: /etc/ssl/private
      - type: bind
        source: /etc/nginx/.htpasswd
        target: /etc/nginx/.htpasswd
      - type: bind
        source: /etc/letsencrypt/
        target: /etc/letsencrypt/
    ports:
      - "80:80"
      - "443:443"
    networks:
      - elk
      - nginx
    links:
      - elasticsearch
      - kibana
  # Defining the ELK Stack! 
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.3
    container_name: elasticsearch
    networks:
      - elk
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
      # - ./elk/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - 9200:9200
  logstash:
    image: docker.elastic.co/logstash/logstash:6.2.3
    container_name: logstash
    volumes:
      - ./elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      # This is the most important part of the configuration
      # This will allow Rails to connect to it. 
      # See application.rb for the configuration!
      - ./elk/logstash/pipeline/logstash.conf:/etc/logstash/conf.d/logstash.conf
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    ports:
      - "5228:5228"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.3
    volumes:
      - ./elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  ipmask:
    image: myrepo/proxy:latest
    command: "npm start"
    restart: always
    environment:
      - "NODE_ENV=production"
    expose:
      - "5050"
    ports:
      - "4430:80"
    links:
      - app
    networks:
      - nginx


# # Volumes are the recommended storage mechanism of Docker. 
volumes:
  elasticsearch:
    driver: local
  rails:
    driver: local

networks:
    elk:
      driver: bridge
    nginx:
      driver: bridge

Thank you very much!

1 Answer:

Answer 0 (score: 0)

Waaaaaaitttt. There was nothing wrong with the code!

The problem was that I was passing a plain IP address as the target instead of prepending http:// to it! After prepending http://, everything works!!

Example:

I was doing:

proxy.web(req, res, { target: '128.29.41.1', ws: true })

When in fact this is the answer:

proxy.web(req, res, { target: 'http://128.29.41.1', ws: true })
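To avoid hitting this again, a small guard can normalize targets before they reach `proxy.web` (`ensureScheme` is a hypothetical helper of mine, not part of node-http-proxy):

```javascript
// Hypothetical helper: prepend http:// when the target has no scheme,
// so a plain IP coming out of the database still parses as a host.
function ensureScheme(target) {
  return /^https?:\/\//i.test(target) ? target : 'http://' + target;
}

console.log(ensureScheme('128.29.41.1'));        // 'http://128.29.41.1'
console.log(ensureScheme('http://128.29.41.1')); // unchanged
```

With this in place the call becomes `proxy.web(req, res, { target: ensureScheme(ip), ws: true })`.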