Problem with docker-compose console output

Time: 2019-03-22 23:30:02

Tags: node.js docker express docker-compose mocha

Question

I run docker-compose up while developing, so I only need a quick glance at the terminal (the VS Code integrated terminal) to see whether my unit tests, lint, and everything else are running fine.

Likewise, whenever I console.log something in the API, it just pops up in the terminal and I can debug from there.

But since this afternoon, instead of getting logs from all containers, I only get logs from the transpiler, kibana, and apm-server containers.

What I'm trying to fix

I used to hit Ctrl+S to trigger the linter and mocha containers (both use nodemon, so modifying a file makes them produce output), have the TypeScript files built to JS (the transpiler runs in watch mode), and have all of it printed to the terminal.

Now I get no output from mocha, the linter, or even console.log statements I put into the api code...

I haven't done any major updates; I only switched machines (both Ubuntu Linux with docker installed), and I don't know how to fix this.
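As a first diagnostic, it can help to bypass the aggregated output of docker-compose up and ask Compose for the silent services' logs directly (service names taken from the compose file below); if logs show up here but not under docker-compose up, the containers themselves are fine and the attach/stream side is the suspect:

```shell
# Follow only the services whose output went missing.
docker-compose logs -f --tail=50 api mocha linter

# Check whether the containers are actually up or restart-looping;
# restart: always can quietly hide a crash loop.
docker-compose ps
```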

The docker-compose.yml file

version: "3.3"
services:

  api:
    container_name: api
    build: .
    env_file:
      - .env
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - 9000:9000
    restart: always
    depends_on:
      - mongo
      - elasticsearch
    command: sh -c "mkdir -p dist && touch ./dist/app.js && yarn run start"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/api/v1/ping"]
      interval: 1m30s
      timeout: 10s
      retries: 3

  transpiler:
    container_name: transpiler
    build: .
    restart: always
    volumes:
      - .:/app
      - /app/node_modules
    command: yarn run transpile -w

  linter:
    container_name: linter
    build: .
    restart: always
    volumes:
      - .:/app
      - /app/node_modules
    # https://github.com/yarnpkg/yarn/issues/5457 --silent not working
    command: nodemon --delay 500ms --exec yarn run lint

  mongo:
    container_name: mongo
    image: mongo:4.0
    restart: always
    ports:
      - 27017:27017
    command: mongod
    volumes:
      - ./db/mongodb:/data/db

  mongo_express:
    container_name: mongo_express
    restart: always
    image: mongo-express
    ports:
      - 8081:8081
    depends_on:
      - mongo
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8081"]
      interval: 2m30s
      timeout: 10s
      retries: 3

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    container_name: elasticsearch
    restart: always
    volumes:
      - ./db/elasticsearch:/usr/share/elasticsearch/data
    environment:
      - bootstrap.memory_lock=true
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - discovery.type=single-node
    ports:
      - 9300:9300
      - 9200:9200
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9200"]
      interval: 1m30s
      timeout: 10s
      retries: 3

  kibana:
    container_name: kibana
    restart: always
    image: docker.elastic.co/kibana/kibana:6.6.0
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5601"]
      interval: 1m30s
      timeout: 10s
      retries: 3

  logstash:
    container_name: logstash
    restart: always
    image: docker.elastic.co/logstash/logstash:6.6.0
    ports:
      - 9600:9600
    environment:
      - KILL_ON_STOP_TIMEOUT=1
    volumes:
      - ./logstash/settings/:/usr/share/logstash/config/
      - ./logstash/pipeline/:/usr/share/logstash/pipeline/
    depends_on:
      - elasticsearch
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9600"]
      interval: 1m30s
      timeout: 10s
      retries: 3

  apm-server:
    container_name: apm_server
    restart: always
    image: docker.elastic.co/apm/apm-server:6.6.0
    ports:
      - 8200:8200
    volumes:
      - ./apm_settings/apm-server.yml:/usr/share/apm-server/apm-server.yml
    depends_on:
      - elasticsearch
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8200"]
      interval: 1m30s
      timeout: 10s
      retries: 3

  mocha:
    container_name: mocha
    restart: always
    build: .
    volumes:
      - .:/app
      - /app/node_modules
    command: nodemon --delay 500ms --exec yarn run test-coverage
    env_file:
      - .env
    environment:
      NODE_ENV: 'test'

volumes:
  esdata:
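One thing worth checking when logs vanish for only some containers after switching machines is the logging driver: docker-compose up can only stream a container's output when its log driver supports reading logs back (json-file or journald). If the new machine's /etc/docker/daemon.json sets a different default (syslog, for example), Compose prints nothing for affected containers. A hedged sketch of pinning the driver per service, shown for the api service (the same keys apply to mocha and linter; the rotation options are optional):

```yaml
  api:
    # ...existing settings as above...
    logging:
      driver: json-file     # force the readable default driver
      options:
        max-size: "10m"     # rotate logs at 10 MB
        max-file: "3"       # keep at most 3 rotated files
```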

Dockerfile

FROM mhart/alpine-node:10
ADD . /app
WORKDIR /app

RUN apk add --no-cache --virtual .gyp g++ libtool make python curl &&\
    yarn &&\
    yarn global add nodemon &&\
    apk del .gyp

Sample output

When I run docker-compose up, all the output is fine:

mongo            | 2019-03-22T23:11:26.048+0000 I NETWORK  [conn6] end connection 172.22.0.8:52266 (3 connections now open)
apm_server       | 2019-03-22T23:11:26.048Z     INFO    [request]       beater/v2_handler.go:96 error handling request  {"request_id": "77b88109-c7c0-41a2-a28c-2343a82862bd", "method": "POST", "URL": "/intake/v2/events", "content_length": -1, "remote_address": "172.22.0.8", "user-agent": "elastic-apm-node/2.6.0 elastic-apm-http-client/7.1.1", "error": "unexpected EOF"}
api              | [nodemon] app crashed
api              | error Command failed with exit code 1.
api              | info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
mocha            | 
mocha            | 
mocha            | Express server listening on port 9000, in test mode
mocha            |   GET PING ressource
mocha            |     GET /api/v1/ ping/
mongo            | 2019-03-22T23:11:27.951+0000 I NETWORK  [listener] connection accepted from 172.22.0.2:39956 #8 (4 connections now open)
mongo            | 2019-03-22T23:11:27.961+0000 I NETWORK  [conn8] received client metadata from 172.22.0.2:39956 conn8: { driver: { name: "nodejs", version: "3.1.13" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.20.7-042007-generic" }, platform: "Node.js v10.15.3, LE, mongodb-core: 3.1.11" }
mongo            | 2019-03-22T23:11:28.051+0000 I NETWORK  [listener] connection accepted from 172.22.0.2:39958 #9 (5 connections now open)
mongo            | 2019-03-22T23:11:28.197+0000 I NETWORK  [listener] connection accepted from 172.22.0.2:39962 #10 (6 connections now open)
mocha            |       ✓ ping api (154ms)

Yes, I do know those logs show some errors, but my main concern is getting them printed to the terminal at all.

But hitting Ctrl+S only shows the following (and this is my real problem):

transpiler       | [10:59:15 PM] File change detected. Starting incremental compilation...
transpiler       | 
transpiler       | [10:59:15 PM] Found 0 errors. Watching for file changes.
transpiler       | 
apm_server       | 2019-03-22T22:59:40.309Z     INFO    [request]       beater/common_handlers.go:272   handled request {"request_id": "5948c9ee-c6fd-42ad-bd1e-acc259e1634c", "method": "POST", "URL": "/intake/v2/events", "content_length": -1, "remote_address": "172.22.0.11", "user-agent": "elastic-apm-node/2.6.0 elastic-apm-http-client/7.1.1", "response_code": 202}
kibana           | {"type":"response","@timestamp":"2019-03-22T22:59:44Z","tags":[],"pid":1,"method":"get","statusCode":302,"req":{"url":"/","method":"get","headers":{"user-agent":"curl/7.29.0","host":"localhost:5601","accept":"*/*"},"remoteAddress":"127.0.0.1","userAgent":"127.0.0.1"},"res":{"statusCode":302,"responseTime":7,"contentLength":9},"message":"GET / 302 7ms - 9.0B"}

What I've tried (without success)

  • Removing all containers
  • Removing all containers and their volumes
  • Removing all containers, their volumes, and all images
  • Rebooting after removing everything
  • Rebuilding (docker-compose build)
  • Running the docker-compose up command from a plain terminal, to make sure it has nothing to do with the VS Code integrated terminal
  • Restarting the docker service (sudo systemctl restart docker)
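Two more checks that cost nothing and would distinguish a container problem from a log-streaming problem (the service name mocha below is just one of the silent containers as an example):

```shell
# Default logging driver of the Docker daemon on this machine.
# Anything other than json-file (or journald) means
# `docker-compose up` cannot stream those containers' logs.
docker info --format '{{.LoggingDriver}}'

# Effective log driver of one of the silent containers.
docker inspect --format '{{.HostConfig.LogConfig.Type}}' mocha
```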

1 answer:

Answer 0 (score: 0)

When you rebuilt everything, most likely something changed in an npm package somewhere (possibly in a dependency you're not aware of).

You also said you switched machines; does it still work correctly on the previous machine and OS?
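If changed dependencies are the suspect, one way to rule that out (assuming the project is in git and uses yarn's lockfile) is to verify that the install on the new machine matches the lockfile exactly:

```shell
# Any drift in resolved package versions shows up here.
git diff HEAD -- yarn.lock

# Fail instead of silently updating the lockfile when
# package.json and yarn.lock disagree (a yarn 1.x flag).
yarn install --frozen-lockfile
```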