Celery does not work in Docker

Time: 2016-02-24 06:26:52

Tags: python docker celery

I'm having a problem using Celery in Docker.

I set up two Docker containers, web_server and celery_worker. The celery_worker container includes rabbitmq-server. The web_server calls tasks on the Celery worker.

I set up the same configuration in a virtual machine and it works there. In Docker, however, it fails with the error message below.

 Traceback (most recent call last):
  File "/web_server/test/test_v1_data_description.py", line 58, in test_create_description
    headers=self.get_basic_header()

  .........
  .........

  File "../task_runner/__init__.py", line 31, in run_describe_task
    kwargs={})
  File "/usr/local/lib/python3.4/dist-packages/celery/app/base.py", line 349, in send_task
    self.backend.on_task_call(P, task_id)
  File "/usr/local/lib/python3.4/dist-packages/celery/backends/rpc.py", line 32, in on_task_call
    maybe_declare(self.binding(producer.channel), retry=True)
  File "/usr/local/lib/python3.4/dist-packages/kombu/messaging.py", line 194, in _get_channel
    channel = self._channel = channel()
  File "/usr/local/lib/python3.4/dist-packages/kombu/utils/__init__.py", line 425, in __call__
    value = self.__value__ = self.__contract__()
  File "/usr/local/lib/python3.4/dist-packages/kombu/messaging.py", line 209, in <lambda>
    channel = ChannelPromise(lambda: connection.default_channel)
  File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 756, in default_channel
    self.connection
  File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 741, in connection
    self._connection = self._establish_connection()
  File "/usr/local/lib/python3.4/dist-packages/kombu/connection.py", line 696, in _establish_connection
    conn = self.transport.establish_connection()
  File "/usr/local/lib/python3.4/dist-packages/kombu/transport/pyamqp.py", line 116, in establish_connection
    conn = self.Connection(**opts)
  File "/usr/local/lib/python3.4/dist-packages/amqp/connection.py", line 165, in __init__
    self.transport = self.Transport(host, connect_timeout, ssl)
  File "/usr/local/lib/python3.4/dist-packages/amqp/connection.py", line 186, in Transport
    return create_transport(host, connect_timeout, ssl)
  File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 299, in create_transport
    return TCPTransport(host, connect_timeout)
  File "/usr/local/lib/python3.4/dist-packages/amqp/transport.py", line 95, in __init__
    raise socket.error(last_err)
nose.proxy.OSError: [Errno 111] Connection refused

These are the Dockerfiles for the two containers.

Dockerfile for web_server:

 FROM ubuntu:14.04
 MAINTAINER Jinho Yoo 

 # Update packages.
 RUN apt-get clean
 RUN apt-get update

 # Create work folder.
 RUN mkdir /web_server
 WORKDIR /web_server

 # Setup web server and celery.
 ADD ./install_web_server_conf.sh ./install_web_server_conf.sh
 RUN chmod +x ./install_web_server_conf.sh
 RUN ./install_web_server_conf.sh

 #Reduce docker size.
 RUN rm -rf /var/lib/apt/lists/*

 # Run web server.
 CMD ["python3","web_server.py"]

 # Expose port.
 EXPOSE 5000

Dockerfile for celery_worker:

FROM ubuntu:14.04
MAINTAINER Jinho Yoo 

# Update packages.
RUN apt-get clean
RUN apt-get update
RUN apt-get install -y wget build-essential ca-certificates-java

# Setup python environment.
ADD ./bootstrap/install_python_env.sh ./install_python_env.sh
RUN chmod +x ./install_python_env.sh
RUN ./install_python_env.sh

# Install Python libraries including celery.
RUN pip3 install -r ./core/requirements.txt

# Add mlcore user for Celery worker
RUN useradd --uid 1234 -M mlcore
RUN usermod -L mlcore

# Celery configuration for supervisor
ADD celeryd.conf /etc/supervisor/conf.d/celeryd.conf
RUN mkdir -p /var/log/celery

# Reduce docker size.
RUN rm -rf /var/lib/apt/lists/*

# Run celery server by supervisor.
CMD ["supervisord", "-c", "/ml_core/supervisord.conf"]

# Expose port.
EXPOSE 8080
EXPOSE 8081
EXPOSE 4040
EXPOSE 7070
EXPOSE 5672
EXPOSE 5671
EXPOSE 15672
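
Note that EXPOSE by itself does not publish any of these ports to the host; that still requires -p (or -P) when the container is started. A minimal sketch, assuming the image is tagged celery_worker (the tag is an assumption, not from the files above):

 # EXPOSE only documents the ports; publishing them happens at run time:
 docker build -t celery_worker .
 docker run -d --name celery_worker -p 5672:5672 -p 15672:15672 celery_worker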

2 Answers:

Answer 0 (score: 0)

Docker containers can't talk to each other out of the box. My guess is that your connection string is something like localhost:<port>.
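
For illustration only (the question does not show the Celery configuration, and the container name is assumed), the broker URL in web_server presumably looks something like amqp://guest:guest@localhost:5672//, and the refusal is easy to reproduce from inside that container:

 # "localhost" inside the web_server container is the container itself,
 # where nothing listens on 5672, hence errno 111:
 docker exec web_server \
     python3 -c "import socket; socket.create_connection(('localhost', 5672), timeout=2)"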

There are a few ways to let your containers communicate.

1: Linking: http://rominirani.com/2015/07/31/docker-tutorial-series-part-8-linking-containers/

Basically, at run time Docker adds an entry to the container's hosts file that points to the internal IP address of the other Docker container on the same private Docker network stack. (A sketch of this and of option 2 follows after option 3.)

2: docker run --net=host: This binds the container to the host's network stack, so all containers appear to run on localhost and can be reached that way. If you run multiple containers that bind to the same external port you can run into port conflicts, so be aware of that.

3: External HAProxy: You can bind a DNS entry to an HAProxy and configure the proxy to redirect traffic whose host header matches that DNS entry to the host:port your container is running on. Any call from another container will then leave the private Docker network stack, hit the DNS server, and come back through the HAProxy, which routes it to the correct container.
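
For concreteness, here is a minimal sketch of options 1 and 2. The image names, container names, and the CELERY_BROKER_URL environment variable are assumptions for illustration; the question does not show how the broker URL is actually configured:

 # Option 1: linking. Inside web_server, "celery_worker" resolves to the other
 # container's private IP, so the broker URL can point at it instead of localhost
 # (assumes the app reads CELERY_BROKER_URL from the environment).
 docker run -d --name celery_worker celery_worker
 docker run -d --name web_server -p 5000:5000 \
     --link celery_worker:celery_worker \
     -e CELERY_BROKER_URL="amqp://guest:guest@celery_worker:5672//" \
     web_server

 # Option 2: host networking. Both containers share the host's network stack,
 # so localhost:5672 works again, but watch out for port collisions.
 docker run -d --name celery_worker --net=host celery_worker
 docker run -d --name web_server --net=host web_server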

Answer 1 (score: 0)

I found the cause: the celery_worker Docker container wasn't running rabbitmq-server. So I added the following two lines to the celery_worker Dockerfile.

# Run rabbitmq server and celery.
ENTRYPOINT service rabbitmq-server start && supervisord -c /ml_core/supervisord.conf
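
A quick way to check that this change actually brings the broker up, assuming the running containers are named celery_worker and web_server (the names are assumptions):

 # RabbitMQ should now be running inside the worker container:
 docker exec celery_worker service rabbitmq-server status
 docker exec celery_worker rabbitmqctl status

 # The web_server container must still be able to reach port 5672 on the worker,
 # e.g. via one of the networking options from the other answer:
 docker exec web_server \
     python3 -c "import socket; socket.create_connection(('celery_worker', 5672), timeout=2)"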