Running Docker inside Docker

Date: 2018-03-04 14:51:42

Tags: docker continuous-integration concourse

I have the following Dockerfile:

FROM golang:1.9.4

RUN apt-get update &&\
    apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common &&\
    curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg | apt-key add - &&\
    apt-key fingerprint 0EBFCD88 &&\
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable" &&\
    apt-get update &&\
    apt-get -y install docker-ce &&\
    curl -L https://github.com/docker/compose/releases/download/1.20.0-rc1/docker-compose-Linux-x86_64 -o /usr/local/bin/docker-compose &&\
    chmod +x /usr/local/bin/docker-compose

I can build this image and run it locally (in privileged mode), and dockerd starts successfully. However, when this Docker image is used in a Concourse (v3.8.0) build, I cannot get dockerd to run.
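
For reference, the local check that works is roughly the following (the image tag here is just a placeholder):

docker build -t golang-docker-tools .
docker run --rm -it --privileged golang-docker-tools dockerd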

Here is the relevant part of my Concourse pipeline:

jobs:
- name: build_thing
  public: true
  plan:
  - get: pr-git-repo-thing
    trigger: true
    version: every
  - get: golang-tools-docker-image
  - task: "test_build_thing"
    privileged: true
    image: golang-tools-docker-image
    config:
      platform: linux
      inputs:
      - name: pr-git-repo-thing
      outputs:
      - name: build-workspace
      run:
         ...............

Concourse output:

INFO[2018-03-04T11:38:57.333893024Z] libcontainerd: started new docker-containerd process  pid=39
INFO[0000] starting containerd                           module=containerd revision=9b55aab90508bd389d7654c4baf173a981477d55 version=v1.0.1
INFO[0000] loading plugin "io.containerd.content.v1.content"...  module=containerd type=io.containerd.content.v1
INFO[0000] loading plugin "io.containerd.snapshotter.v1.btrfs"...  module=containerd type=io.containerd.snapshotter.v1
WARN[0000] failed to load plugin io.containerd.snapshotter.v1.btrfs  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" module=containerd
INFO[0000] loading plugin "io.containerd.snapshotter.v1.overlayfs"...  module=containerd type=io.containerd.snapshotter.v1
INFO[0000] loading plugin "io.containerd.metadata.v1.bolt"...  module=containerd type=io.containerd.metadata.v1
WARN[0000] could not use snapshotter btrfs in metadata plugin  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" module="containerd/io.containerd.metadata.v1.bolt"
INFO[0000] loading plugin "io.containerd.differ.v1.walking"...  module=containerd type=io.containerd.differ.v1
INFO[0000] loading plugin "io.containerd.gc.v1.scheduler"...  module=containerd type=io.containerd.gc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.containers"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.content"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.diff"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.events"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.healthcheck"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.images"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.leases"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.namespaces"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.snapshots"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.monitor.v1.cgroups"...  module=containerd type=io.containerd.monitor.v1
INFO[0000] loading plugin "io.containerd.runtime.v1.linux"...  module=containerd type=io.containerd.runtime.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.tasks"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.version"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.introspection"...  module=containerd type=io.containerd.grpc.v1
INFO[0000] serving...                                    address="/var/run/docker/containerd/docker-containerd-debug.sock" module="containerd/debug"
INFO[0000] serving...                                    address="/var/run/docker/containerd/docker-containerd.sock" module="containerd/grpc"
INFO[0000] containerd successfully booted in 0.013967s   module=containerd
ERRO[2018-03-04T11:38:58.753846612Z] 'overlay2' is not supported over overlayfs   
INFO[2018-03-04T11:38:58.790892207Z] Graph migration to content-addressability took 0.00 seconds 
WARN[2018-03-04T11:38:58.791241530Z] Your kernel does not support cgroup memory limit 
WARN[2018-03-04T11:38:58.791258775Z] Unable to find cpu cgroup in mounts          
WARN[2018-03-04T11:38:58.791301618Z] Unable to find blkio cgroup in mounts        
WARN[2018-03-04T11:38:58.791337877Z] Unable to find cpuset cgroup in mounts       
WARN[2018-03-04T11:38:58.791487202Z] mountpoint for pids not found                
Error starting daemon: Devices cgroup isn't mounted
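
For what it's worth, the daemon is checking the cgroup hierarchies visible inside the task container; they can be inspected from within the task with something like:

cat /proc/cgroups
mount | grep cgroup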

3 Answers:

Answer 0 (score: 1):

Running this script before starting dockerd fixes the problem:

#!/usr/bin/env bash

mkdir -p /sys/fs/cgroup
mountpoint -q /sys/fs/cgroup || \
  mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup

mount -o remount,rw /sys/fs/cgroup

sed -e 1d /proc/cgroups | while read sys hierarchy num enabled; do
  if [ "$enabled" != "1" ]; then
    # subsystem disabled; skip
    continue
  fi

  grouping="$(cat /proc/self/cgroup | cut -d: -f2 | grep "\\<$sys\\>")"
  if [ -z "$grouping" ]; then
    # subsystem not mounted anywhere; mount it on its own
    grouping="$sys"
  fi

  mountpoint="/sys/fs/cgroup/$grouping"

  mkdir -p "$mountpoint"

  # clear out existing mount to make sure new one is read-write
  if mountpoint -q "$mountpoint"; then
    umount "$mountpoint"
  fi

  mount -n -t cgroup -o "$grouping" cgroup "$mountpoint"

  if [ "$grouping" != "$sys" ]; then
    if [ -L "/sys/fs/cgroup/$sys" ]; then
      rm "/sys/fs/cgroup/$sys"
    fi

    ln -s "$mountpoint" "/sys/fs/cgroup/$sys"
  fi
done
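
To wire this in, the task should run the script before starting the daemon. A minimal sketch, assuming the script above is baked into the image as /usr/local/bin/setup-cgroups.sh:

setup-cgroups.sh
dockerd >/tmp/dockerd.log 2>&1 &

# wait for the daemon to come up before using the client
until docker info >/dev/null 2>&1; do sleep 1; done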

Answer 1 (score: 0):

I use dcind (Docker-Compose-in-Docker). All of the examples are written up here: https://github.com/meAmidos/dcind

- name: integration
  plan:
  - aggregate:
    - get: code
      params: {depth: 1}
      passed: [unit-tests]
      trigger: true
    - get: redis
      params: {save: true}
    - get: busybox
      params: {save: true}
  - task: Run integration tests
    privileged: true
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: amidos/dcind
      inputs:
        - name: code
        - name: redis
        - name: busybox
      run:
        path: sh
        args:
          - -exc
          - |
            source /docker-lib.sh
            start_docker

            # Strictly speaking, preloading of Docker images is not required.
            # However, you might want to do this for a couple of reasons:
            # - If the image comes from a private repository, it is much easier to let Concourse pull it,
            #   and then pass it through to the task.
            # - When the image is passed to the task, Concourse can often get the image from its cache.
            docker load -i redis/image
            docker tag "$(cat redis/image-id)" "$(cat redis/repository):$(cat redis/tag)"

            docker load -i busybox/image
            docker tag "$(cat busybox/image-id)" "$(cat busybox/repository):$(cat busybox/tag)"

            # This is just to visually check in the log that images have been loaded successfully
            docker images

            # Run the container with tests and its dependencies.
            docker-compose -f code/example/integration.yml run tests

            # Cleanup.
            # Not sure if this is required.
            # It's quite possible that Concourse is smart enough to clean up the Docker mess itself.
            docker-compose -f code/example/integration.yml down
            docker volume rm $(docker volume ls -q)
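
One note on the cleanup step: docker volume rm fails when there are no volumes left to remove, so a slightly more defensive variant (just a sketch) is:

volumes="$(docker volume ls -q)"
if [ -n "$volumes" ]; then
  docker volume rm $volumes
fi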

Answer 2 (score: -1):

If you run into the cgroup error, these are the steps:

  1. sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
  2. {{1}}
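
The error in the question is specifically about the devices hierarchy, so the same kind of mount can be applied to it (a sketch, assuming /sys/fs/cgroup itself is already mounted and writable):

  mkdir -p /sys/fs/cgroup/devices
  mount -t cgroup -o devices cgroup /sys/fs/cgroup/devices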