I need to hit Scrapy's background listener (ScrapyRT) from the "web" service of a Docker application, as follows:
The task:

@celery.task(queue='scraping')
def scrape():
    params = {
        'spider_name': 'spider',
        'start_requests': True
    }
    response = requests.get('http://localhost:9080/crawl.json', params)
    return {'Status': 'Scraping completed!',
            'features': response}
My application runs behind an nginx reverse proxy, and the services are configured as follows:
docker-compose.yml:

services:
  web:
    build:
      context: ./services/web
      dockerfile: Dockerfile-dev
    volumes:
      - './services/web:/usr/src/app'
    ports:
      - 5001:5000
    depends_on:
      - web-db
      - redis
  nginx:
    build:
      context: ./services/nginx
      dockerfile: Dockerfile-dev
    restart: always
    ports:
      - 80:80
    depends_on:
      - web
      - client
      - redis
  scrapyrt:
    image: vimagick/scrapyd:py3
    command: scrapyrt -i 0.0.0.0 -p 9080
    restart: always
    ports:
      - '9080:9080'
    volumes:
      - ./services/web:/usr/src/app
    working_dir: /usr/src/app/project/api
    depends_on:
      - web
A route in the "web" service sends the request through the scrape function as an async task:
@task_bp.route('/blogs/<user_id>', methods=['GET'])
def get_blogs(user_id):  # view function name not shown in the original
    task = scrape.apply_async([user_id])
    response_object = {
        'status': 'success',
        'data': {
            'task_id': task.id,
            'results': task.get(),
        }
    }
    return jsonify(response_object), 202
cURL:
curl -X GET http://localhost:5001/blogs/1 -H "Content-Type: application/json"
The Twisted server appears to be working:
scrapyrt_1 | 2019-05-14 02:12:18+0000 [-] Log opened.
scrapyrt_1 | 2019-05-14 02:12:18+0000 [-] Site starting on 9080
scrapyrt_1 | 2019-05-14 02:12:18+0000 [-] Starting factory <twisted.web.server.Site object at 0x7fcfdc977b70>
But the Celery log throws the following error (full traceback):
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/urllib3/connection.py", line 159, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw)
File "/usr/lib/python3.6/site-packages/urllib3/util/connection.py", line 80, in create_connection
raise err
File "/usr/lib/python3.6/site-packages/urllib3/util/connection.py", line 70, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/usr/lib/python3.6/site-packages/urllib3/connectionpool.py", line 354, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib/python3.6/http/client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.6/http/client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.6/http/client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.6/http/client.py", line 1026, in _send_output
self.send(msg)
File "/usr/lib/python3.6/http/client.py", line 964, in send
self.connect()
File "/usr/lib/python3.6/site-packages/urllib3/connection.py", line 181, in connect
conn = self._new_conn()
File "/usr/lib/python3.6/site-packages/urllib3/connection.py", line 168, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7ff217792a90>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/lib/python3.6/site-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/lib/python3.6/site-packages/urllib3/util/retry.py", line 399, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=9080): Max retries exceeded with url: /crawl.json?spider_name=allmusic_smooth_tracks&start_requests=True (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ff217792a90>: Failed to establish a new connection: [Errno 111] Connection refused',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/celery/app/trace.py", line 382, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/celery/app/trace.py", line 641, in __protected_call__
return self.run(*args, **kwargs)
File "/usr/src/app/brandio/api/routes/background.py", line 904, in scrape_allmusic
response = requests.get('http://localhost:9080/crawl.json', params)
File "/usr/lib/python3.6/site-packages/requests/api.py", line 75, in get
return request('get', url, params=params, **kwargs)
File "/usr/lib/python3.6/site-packages/requests/api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python3.6/site-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3.6/site-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3.6/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9080): Max retries exceeded with url: /crawl.json?spider_name=allmusic_smooth_tracks&start_requests=True (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ff217792a90>: Failed to establish a new connection: [Errno 111] Connection refused',))
What am I missing?
Answer 0 (score: 0):

Replace localhost with the name of the target service: scrapyrt. Inside the Compose network, containers reach each other by service name; localhost inside the Celery worker's container refers to that container itself, where nothing is listening on port 9080, hence the connection refused error.
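A minimal sketch of the fixed task body, assuming the scrapyrt service name from the compose file above (the helper names crawl_url and run_crawl are hypothetical, introduced here for illustration):

```python
from urllib.parse import urlencode

# Inside the Compose network the ScrapyRT container is reachable by its
# service name; localhost would point at the Celery worker's own container.
SCRAPYRT_HOST = 'scrapyrt'
SCRAPYRT_URL = 'http://%s:9080/crawl.json' % SCRAPYRT_HOST

def crawl_url(spider_name='spider'):
    # Build the same query string the original task sends to ScrapyRT.
    params = {'spider_name': spider_name, 'start_requests': True}
    return SCRAPYRT_URL + '?' + urlencode(params)

def run_crawl(spider_name='spider'):
    import requests  # same HTTP client the original task uses
    response = requests.get(crawl_url(spider_name))
    response.raise_for_status()
    return {'Status': 'Scraping completed!', 'features': response.json()}
```

Wrapped in @celery.task(queue='scraping'), run_crawl drops in for the original scrape task. You can sanity-check the hostname from inside the web container with something like docker-compose exec web curl http://scrapyrt:9080/crawl.json.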