Cron job can't find environment variables set in docker-compose

Asked: 2018-05-18 12:20:05

Tags: docker cron docker-compose

I have some environment variables set in docker-compose that are used by a Python application run by a cron job.

docker-compose.yaml:

version: '2.1'
services:
  zookeeper:
    container_name: zookeeper
    image: zookeeper:3.3.6
    restart: always
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
  kafka:
    container_name: kafka
    image: wurstmeister/kafka:1.1.0
    hostname: kafka
    links:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_CREATE_TOPICS: "topic:1:1"
      KAFKA_LOG_MESSAGE_TIMESTAMP_TYPE: LogAppendTime
      KAFKA_MESSAGE_TIMESTAMP_TYPE: LogAppendTime
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  data-collector:
    container_name: data-collector
    #image: mystreams:0.1
    build:
      context: /home/junaid/eMumba/CASB/Docker/data_collector/
      dockerfile: Dockerfile
    links:
      - kafka
    environment:
      - KAFKA_HOST=kafka
      - OFFICE_365_APP_ID=98aff1c5-7a69-46b7-899c-186851054b43
      - OFFICE_365_APP_SECRET=zVyS/V694ffWe99QpCvYqE1sqeqLo36uuvTL8gmZV0A=
      - OFFICE_365_APP_TENANT=2f6cb1a6-ecb8-4578-b680-bf84ded07ff4
      - KAFKA_CONTENT_URL_TOPIC=o365_activity_contenturl
      - KAFKA_STORAGE_DATA_TOPIC=o365_storage
      - KAFKA_PORT=9092
      - POSTGRES_DB_NAME=casb
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=pakistan
      - POSTGRES_HOST=postgres_database
    depends_on:
      postgres_database:
        condition: service_healthy
  postgres_database:
    container_name: postgres_database
    build: 
      context: /home/junaid/eMumba/CASB/Docker/data_collector/
      dockerfile: postgres.dockerfile
    #image: ayeshaemumba/casb-postgres:v3
    #volumes:
    #  - ./postgres/data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: pakistan
      POSTGRES_DB: casb
    expose:
      - "5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 30s
      timeout: 30s
      retries: 3

When I exec into the data-collector container and echo any of the environment variables, I can see that it is set:

# docker exec -it data-collector sh
# echo $KAFKA_HOST
kafka

But my cron job logs show KeyError: 'KAFKA_HOST', which means the cron job cannot find the environment variable.
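The traceback points at a plain `os.environ` lookup. A minimal sketch of what presumably happens inside storage_data_collector.py (the function name and the dicts below are illustrative, not the actual code):

```python
import os

def read_kafka_host(env):
    # Mirrors os.environ["KAFKA_HOST"]: a plain [] lookup raises
    # KeyError when the variable is missing from the environment.
    return env["KAFKA_HOST"]

# An interactive shell (docker exec) inherits the compose environment:
print(read_kafka_host({"KAFKA_HOST": "kafka"}))  # kafka
# Cron starts jobs with a minimal environment (SHELL, PATH, HOME, ...),
# so the same lookup under cron raises KeyError: 'KAFKA_HOST'.
```

Using `os.environ.get("KAFKA_HOST")` with an explicit default would fail more gracefully, but the underlying problem is that cron simply does not see the compose variables.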

Now I have two questions:

1) Why are the environment variables not set for the cron job?

2) I know I could pass the environment variables via a shell script and run it while building the image, but is there a way to pass them from docker-compose?

Update

The cron job is defined in the Dockerfile for the Python application.

Dockerfile:

FROM python:3.5-slim

# Creating Application Source Code Directory
RUN mkdir -p /usr/src/app

# Setting Home Directory for containers
WORKDIR /usr/src/app

# Installing python dependencies
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt

# Copying src code to Container
COPY . /usr/src/app

# Add storage crontab file in the cron directory
ADD crontab-storage /etc/cron.d/storage-cron

# Give execution rights on the storage cron job
RUN chmod 0644 /etc/cron.d/storage-cron

RUN chmod 0644 /usr/src/app/cron_storage_data.sh

# Create the log file to be able to run tail
RUN touch /var/log/cron.log

# Install cron (single layer so the apt cache is never stale)
RUN apt-get update && apt-get -y install cron

# Run the command on container startup
CMD cron && tail -f /var/log/cron.log

crontab-storage:

*/1 * * * * sh /usr/src/app/cron_storage_data.sh
# Don't remove the empty line at the end of this file. It is required to run the cron job

cron_storage_data.sh:

#!/bin/bash
cd /usr/src/app
/usr/local/bin/python3.5 storage_data_collector.py

1 Answer:

Answer 0 (score: 1):

By default, cron does not inherit the docker-compose environment variables. A possible workaround for this case is:

1. Write the environment variables from docker-compose to a local .env file

touch /usr/src/app/.env
echo "export KAFKA_HOST=$KAFKA_HOST" > /usr/src/app/.env

2. Source the .env file in the crontab entry before the task executes

* * * * * <username> . /usr/src/app/.env && sh /usr/src/app/cron_storage_data.sh
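Note that step 1 has to run inside the container after docker-compose has injected the variables, i.e. at startup rather than at build time. One place to do that is the container's startup command. A minimal sketch (hypothetical entrypoint replacing the Dockerfile's CMD; the grep pattern is an assumption based on the variable names in the compose file):

```shell
#!/bin/sh
# Snapshot the injected variables as export statements, then start
# cron in the foreground as before.
printenv | grep -E '^(KAFKA|OFFICE_365|POSTGRES)' \
    | sed 's/^/export /' > /usr/src/app/.env
cron && tail -f /var/log/cron.log
```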

This is how your updated cron job environment will look:

Before:

{'SHELL': '/bin/sh', 'PWD': '/root', 'LOGNAME': 'root', 'PATH': '/usr/bin:/bin', 'HOME': '/root'}

After:

{'SHELL': '/bin/sh', 'PWD': '/root', 'KAFKA_HOST': 'kafka', 'LOGNAME': 'root', 'PATH': '/usr/bin:/bin', 'HOME': '/root'}
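An equivalent variant is to source the snapshot inside the script itself, so the crontab line from the question can stay unchanged (a sketch, assuming the .env path from step 1):

```shell
#!/bin/bash
# cron_storage_data.sh, reworked to load the snapshot itself before
# invoking the Python application.
. /usr/src/app/.env
cd /usr/src/app
/usr/local/bin/python3.5 storage_data_collector.py
```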