I made a Dockerrun.aws.json file that defines the two Docker images I want to run with AWS Elastic Beanstalk. One is an nginx container with the static frontend website and the API routes; the other is the backend API. But for some reason the container links don't work, and the nginx container can't forward requests to the API.

Whenever nginx tries to proxy_pass a request, I get this error:

2018/01/02 22:03:01 [error] 5#5: *22 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.1.70, server: localhost, request: "POST /api/auth/login HTTP/1.1", upstream: "http://172.17.0.2:5000/api/auth/login", host: "my-app-dev.us-west-2.elasticbeanstalk.com", referrer: "http://my-app-dev.us-west-2.elasticbeanstalk.com/login"

From reading this aws multi-container documentation, it looks like the links are just created automatically?

Here is my nginx config:
server {
    listen 4000;
    server_name localhost;

    root /usr/share/nginx/html;
    index index.html index.htm;

    # Matches any request with a file extension. Adds cache headers.
    location ~ .*\..* {
        try_files $uri =404;
        expires 1y;
        access_log off;
        add_header Cache-Control "public";
    }

    # Matches api requests and proxies them to the api server's port.
    location /api {
        proxy_pass http://api:5000;
    }

    # Any route that doesn't have a file extension. This helps react router route page urls using index.html.
    location / {
        try_files $uri $uri/ /index.html;
    }
}
My Dockerrun.aws.json file:

{
  "AWSEBDockerrunVersion": "2",
  "containerDefinitions": [
    {
      "name": "proxy",
      "image": "my-app/proxy:1",
      "essential": true,
      "memory": 64,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 4000
        }
      ],
      "links": [
        "api"
      ]
    },
    {
      "name": "api",
      "image": "my-app/api:1",
      "essential": true,
      "memory": 128
    }
  ]
}
The same setup works locally, but something about how I'm trying to do it on AWS is clearly wrong.
Answer 0 (score: 1)
From the nginx error log above, nginx is able to resolve the upstream server endpoint to 172.17.0.2:5000, which means the nginx docker container can resolve the ip of the api docker container.

The error being thrown is Connection Refused, which can be caused by:

1. the api container not actually listening on port 5000 (or not exposing that port)
2. the port (5000) being wrong
3. Docker resolving the ip (172.17.0.2) incorrectly

For the first case, you can SSH into the EC2 instance, inspect the api docker container, and check whether port 5000 is exposed; if it is not, modify the Dockerfile to expose the port.

The second and third cases should not happen, because AWS takes care of that. In any case, to verify them you can SSH into the EC2 instance running the docker containers, attach to the nginx container, and run telnet 172.17.0.2 5000; if that does not succeed, it is clearly a network problem.
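If telnet isn't installed inside the container, a small Python sketch can do the same reachability check. The host and port below are the upstream values from the nginx error log; everything else is generic:

```python
import socket

# Upstream endpoint taken from the nginx error log in the question.
API_HOST, API_PORT = "172.17.0.2", 5000

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers Connection Refused, timeouts, and unreachable hosts alike.
        return False

if __name__ == "__main__":
    print("open" if can_connect(API_HOST, API_PORT) else "refused or unreachable")
```

"refused or unreachable" here corresponds to the `connect() failed (111: Connection refused)` line nginx is logging.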
Answer 1 (score: 0)
It turns out my java application was silently becoming unresponsive because it ran out of memory. I upgraded my aws instance and changed "memory": 128 to "memory": 512, and it works fine now. There was no network problem after all.

An even better solution is to change "memory" to "memoryReservation". That way I define the minimum memory the container needs, while it can still use as much of the EC2 instance's spare memory as is available.
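As a sketch, the api container definition from the question would then look like this (the 128 is illustrative; "memoryReservation" is the soft limit, so the container can use more when the instance has memory to spare):

```json
{
  "name": "api",
  "image": "my-app/api:1",
  "essential": true,
  "memoryReservation": 128
}
```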