Building Kubernetes on Docker

Date: 2018-05-22 00:04:17

Tags: docker kubernetes

OS: CentOS 7, Docker version 1.13.1

I am trying to build Kubernetes on CentOS so I can run it on-premises. I am using the Docker-based build because building directly with Go did not work for me; the documentation is very sparse on dependencies and details.

I followed the instructions on the Kubernetes GitHub page: https://github.com/kubernetes/kubernetes

[kubernetes]$ git clone https://github.com/kubernetes/kubernetes
[kubernetes]$ cd kubernetes
[kubernetes]$ make quick-release
+++ [0521 22:31:10] Verifying Prerequisites....
+++ [0521 22:31:17] Building Docker image kube-build:build-e7afc7a916-5-v1.10.2-1
+++ [0521 22:33:45] Creating data container kube-build-data-e7afc7a916-5-v1.10.2-1
+++ [0521 22:34:57] Syncing sources to container
+++ [0521 22:35:15] Running build command...
+++ [0521 22:36:02] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/deepcopy-gen
+++ [0521 22:36:14] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/defaulter-gen
+++ [0521 22:36:21] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/conversion-gen
+++ [0521 22:36:31] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/openapi-gen
+++ [0521 22:36:40] Building go targets for linux/amd64:
    ./vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ [0521 22:36:42] Building go targets for linux/amd64:
    cmd/kube-proxy
    cmd/kube-apiserver
    cmd/kube-controller-manager
    cmd/cloud-controller-manager
    cmd/kubelet
    cmd/kubeadm
    cmd/hyperkube
    cmd/kube-scheduler
    vendor/k8s.io/kube-aggregator
    vendor/k8s.io/apiextensions-apiserver
    cluster/gce/gci/mounter
+++ [0521 22:40:24] Building go targets for linux/amd64:
    cmd/kube-proxy
    cmd/kubeadm
    cmd/kubelet
+++ [0521 22:41:08] Building go targets for linux/amd64:
    cmd/kubectl
+++ [0521 22:41:31] Building go targets for linux/amd64:
    cmd/gendocs
    cmd/genkubedocs
    cmd/genman
    cmd/genyaml
    cmd/genswaggertypedocs
    cmd/linkcheck
    vendor/github.com/onsi/ginkgo/ginkgo
    test/e2e/e2e.test
+++ [0521 22:44:24] Building go targets for linux/amd64:
    cmd/kubemark
    vendor/github.com/onsi/ginkgo/ginkgo
    test/e2e_node/e2e_node.test
+++ [0521 22:45:24] Syncing out of container
+++ [0521 22:46:39] Building tarball: src
+++ [0521 22:46:39] Building tarball: manifests
+++ [0521 22:46:39] Starting tarball: client darwin-386
+++ [0521 22:46:39] Starting tarball: client darwin-amd64
+++ [0521 22:46:39] Starting tarball: client linux-386
+++ [0521 22:46:39] Starting tarball: client linux-amd64
+++ [0521 22:46:39] Starting tarball: client linux-arm
+++ [0521 22:46:39] Starting tarball: client linux-arm64
+++ [0521 22:46:39] Starting tarball: client linux-ppc64le
+++ [0521 22:46:39] Starting tarball: client linux-s390x
+++ [0521 22:46:39] Starting tarball: client windows-386
+++ [0521 22:46:39] Starting tarball: client windows-amd64
+++ [0521 22:46:39] Waiting on tarballs
+++ [0521 22:47:19] Building tarball: server linux-amd64
+++ [0521 22:47:19] Building tarball: node linux-amd64
+++ [0521 22:47:47] Starting docker build for image: cloud-controller-manager-amd64
+++ [0521 22:47:47] Starting docker build for image: kube-apiserver-amd64
+++ [0521 22:47:47] Starting docker build for image: kube-controller-manager-amd64
+++ [0521 22:47:47] Starting docker build for image: kube-scheduler-amd64
+++ [0521 22:47:47] Starting docker build for image: kube-aggregator-amd64
+++ [0521 22:47:47] Starting docker build for image: kube-proxy-amd64
+++ [0521 22:47:47] Building hyperkube image for arch: amd64
+++ [0521 22:48:31] Deleting docker image k8s.gcr.io/kube-scheduler:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:48:31] Deleting docker image k8s.gcr.io/kube-aggregator:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:48:41] Deleting docker image k8s.gcr.io/kube-controller-manager:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:48:43] Deleting docker image k8s.gcr.io/cloud-controller-manager:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:48:46] Deleting docker image k8s.gcr.io/kube-apiserver:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:48:58] Deleting docker image k8s.gcr.io/kube-proxy:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:49:36] Deleting hyperkube image k8s.gcr.io/hyperkube-amd64:v1.12.0-alpha.0.143_080739a12a25bc
+++ [0521 22:49:36] Docker builds done
+++ [0521 22:50:54] Building tarball: final
+++ [0521 22:50:54] Building tarball: test
  1. My first question: why does the build delete the kube-apiserver, kube-proxy, etc. images at the end? These are the tools I expected to use.

  2. Second question: why do I now have only a single 'kube-build' image? How do I interact with it? Besides kube-build I expected to see kubeadm and kubectl images. The documentation does not say what to do next: how to create pods, deploy containers, and manage them. I expected to do this with docker attach on kubectl/kubeadm images, but there are none.

    $ docker images
    REPOSITORY                               TAG                            IMAGE ID            CREATED             SIZE
    kube-build                               build-e7afc7a916-5-v1.10.2-1   8d27a8ba87fd        About an hour ago   2.58 GB
    docker.io/node                           latest                         f697cb5f31f8        12 days ago         675 MB
    docker.io/redis                          latest                         bfcb1f6df2db        2 weeks ago         107 MB
    docker.io/mongo                          latest                         14c497d5c758        3 weeks ago         366 MB
    docker.io/nginx                          latest                         ae513a47849c        3 weeks ago         109 MB
    
  3. So what exactly is one supposed to do with the 'kube-build' image? Any help would be great. Thanks!

    Also, I tried to tag this question 'kube-build', since that is the exact image name, but I don't have enough reputation to create a new tag.

1 Answer:

Answer 0 (score: 1)

First of all, the results of the build are located in the _output folder:

[@_output]# ls
dockerized  images  release-images  release-stage  release-tars

In the folder release-images/$your_architecture you can find the images as tarballs:

[@release-images]# cd amd64/
[@amd64]# ls
cloud-controller-manager.tar  hyperkube-amd64.tar  kube-aggregator.tar  kube-apiserver.tar  kube-controller-manager.tar  kube-proxy.tar  kube-scheduler.tar

You can import them into your local Docker repo with the following command:

cat kube-apiserver.tar | docker import - kube-api:new

The result will show up in your local Docker image list:

[@amd64]# docker images
REPOSITORY                                 TAG                            IMAGE ID            CREATED             SIZE
kube-api                                   new                            4bd734072676        7 minutes ago       183MB
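
These image tarballs appear to be written with docker save during the release build, so you can alternatively bring them in with docker load, which preserves the original repository name, tag, and image metadata (docker import only imports the filesystem contents). A minimal sketch, assuming you are still in the release-images/amd64 folder:

docker load -i kube-apiserver.tar
docker images | grep kube-apiserver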

You can also find tarballs containing the binaries in the release-tars folder.
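
For example, the server tarball bundles kubectl and the server binaries; a quick way to sanity-check the build (exact file names may differ between versions) is:

tar -xzf _output/release-tars/kubernetes-server-linux-amd64.tar.gz -C /tmp
/tmp/kubernetes/server/bin/kubectl version --client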

Typically, Kubernetes is built on one server and then used on another; that is why the build collects its results in the _output folder, so you can copy them to where they will actually run.
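
A rough sketch of that workflow, assuming a hypothetical target host named node1 reachable over SSH:

scp _output/release-images/amd64/kube-apiserver.tar node1:/tmp/
ssh node1 'docker load -i /tmp/kube-apiserver.tar'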