Unable to deploy Kubernetes with DNS on a local Ubuntu cluster (hence a single node)

Asked: 2016-06-12 09:48:08

Tags: ubuntu, dns, kubernetes

I am unable to deploy Kubernetes with DNS on a local Ubuntu cluster (hence a single node). I think it may be related to flannel, but I am not sure, and more importantly I am not sure why it points to coreos when I am deploying on Ubuntu. I had to change a few things in config-default.sh under cluster/ubuntu just to get this far, but I cannot get past the error below and ultimately cannot bring up Kubernetes with DNS.

Below is my error trace. I am not sure whether the following lines from it are the reason kube-up.sh fails to deploy:
Error: 100: Key not found (/coreos.com) [1]
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}}
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}}
ERROR TRACE

$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh   # ran this command in the terminal
... Starting cluster using provider: ubuntu
... calling verify-prereqs
... calling kube-up
~/kubernetes/cluster/ubuntu ~/kubernetes/cluster
Prepare flannel 0.5.0 release ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 608 0 608 0 0 102 0 --:--:-- 0:00:05 --:--:-- 138
100 2757k 100 2757k 0 0 194k 0 0:00:14 0:00:14 --:--:-- 739k
Prepare etcd 2.2.0 release ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 606 0 606 0 0 101 0 --:--:-- 0:00:05 --:--:-- 175
100 7183k 100 7183k 0 0 468k 0 0:00:15 0:00:15 --:--:-- 1871k
Prepare kubernetes 1.2.4 release ...
Done! All your binaries locate in kubernetes/cluster/ubuntu/binaries directory
~/kubernetes/cluster

Deploying master and node on machine 192.168.245.244
make-ca-cert.sh 100% 4028 3.9KB/s 00:00
easy-rsa.tar.gz 100% 42KB 42.4KB/s 00:00
config-default.sh 100% 5419 5.3KB/s 00:00
util.sh 100% 29KB 28.6KB/s 00:00
kubelet.conf 100% 644 0.6KB/s 00:00
kube-proxy.conf 100% 684 0.7KB/s 00:00
kubelet 100% 2158 2.1KB/s 00:00
kube-proxy 100% 2233 2.2KB/s 00:00
kube-scheduler.conf 100% 674 0.7KB/s 00:00
etcd.conf 100% 709 0.7KB/s 00:00
kube-controller-manager.conf 100% 744 0.7KB/s 00:00
kube-apiserver.conf 100% 674 0.7KB/s 00:00
kube-apiserver 100% 2358 2.3KB/s 00:00
kube-scheduler 100% 2360 2.3KB/s 00:00
kube-controller-manager 100% 2672 2.6KB/s 00:00
etcd 100% 2073 2.0KB/s 00:00
reconfDocker.sh 100% 2094 2.0KB/s 00:00
kube-apiserver 100% 58MB 58.2MB/s 00:00
kube-scheduler 100% 42MB 42.0MB/s 00:00
kube-controller-manager 100% 52MB 51.8MB/s 00:00
etcdctl 100% 12MB 12.3MB/s 00:00
etcd 100% 14MB 13.8MB/s 00:00
flanneld 100% 11MB 10.8MB/s 00:00
kubelet 100% 60MB 60.3MB/s 00:01
kube-proxy 100% 35MB 34.8MB/s 00:00
flanneld 100% 11MB 10.8MB/s 00:00
flanneld.conf 100% 577 0.6KB/s 00:00
flanneld 100% 2121 2.1KB/s 00:00
flanneld.conf 100% 568 0.6KB/s 00:00
flanneld 100% 2131 2.1KB/s 00:00
[sudo] password to start master:   # I entered my password manually
etcd start/running, process 100639
Error: 100: Key not found (/coreos.com) [1]
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}}
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}}
docker stop/waiting
docker start/running, process 101035
Connection to 192.168.245.244 closed.
Validating master
Validating kant@192.168.245.244
Using master 192.168.245.244
cluster "ubuntu" set.
user "ubuntu" set.
context "ubuntu" set.
switched to context "ubuntu".
Wrote config for ubuntu to /home/kant/.kube/config
... calling validate-cluster
Waiting for 1 ready nodes. 0 ready nodes, 0 registered. Retrying.
Waiting for 1 ready nodes. 0 ready nodes, 0 registered. Retrying.
Waiting for 1 ready nodes. 0 ready nodes, 0 registered. Retrying.
... (this line keeps repeating)
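
For completeness, the debug switch I flipped lives in cluster/ubuntu/config-default.sh; the exact default wording may differ, but the line I changed looks roughly like this:

DEBUG=${DEBUG:-"true"}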

Here is the error trace with the debug flag in config-default.sh set to true:
$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
... Starting cluster using provider: ubuntu
... calling verify-prereqs
... calling kube-up
~/kubernetes/cluster/ubuntu ~/kubernetes/cluster
Prepare flannel 0.5.5 release ...
Prepare etcd 2.3.1 release ...
Prepare kubernetes 1.2.4 release ...
Done! All your binaries locate in kubernetes/cluster/ubuntu/binaries directory
~/kubernetes/cluster

Deploying master and node on machine 192.168.245.237
make-ca-cert.sh                                                                                 100% 4028     3.9KB/s   00:00    
easy-rsa.tar.gz                                                                                 100%   42KB  42.4KB/s   00:00    
config-default.sh                                                                               100% 5474     5.4KB/s   00:00    
util.sh                                                                                         100%   29KB  28.6KB/s   00:00    
kubelet.conf                                                                                    100%  644     0.6KB/s   00:00    
kube-proxy.conf                                                                                 100%  684     0.7KB/s   00:00    
kubelet                                                                                         100% 2158     2.1KB/s   00:00    
kube-proxy                                                                                      100% 2233     2.2KB/s   00:00    
kube-scheduler.conf                                                                             100%  674     0.7KB/s   00:00    
etcd.conf                                                                                       100%  709     0.7KB/s   00:00    
kube-controller-manager.conf                                                                    100%  744     0.7KB/s   00:00    
kube-apiserver.conf                                                                             100%  674     0.7KB/s   00:00    
kube-apiserver                                                                                  100% 2358     2.3KB/s   00:00    
kube-scheduler                                                                                  100% 2360     2.3KB/s   00:00    
kube-controller-manager                                                                         100% 2672     2.6KB/s   00:00    
etcd                                                                                            100% 2073     2.0KB/s   00:00    
reconfDocker.sh                                                                                 100% 2094     2.0KB/s   00:00    
kube-apiserver                                                                                  100%   58MB  58.2MB/s   00:01    
kube-scheduler                                                                                  100%   42MB  42.0MB/s   00:00    
kube-controller-manager                                                                         100%   52MB  51.8MB/s   00:00    
etcdctl                                                                                         100%   14MB  13.7MB/s   00:00    
etcd                                                                                            100%   16MB  15.9MB/s   00:00    
flanneld                                                                                        100%   16MB  15.8MB/s   00:00    
kubelet                                                                                         100%   60MB  60.3MB/s   00:01    
kube-proxy                                                                                      100%   35MB  34.8MB/s   00:00    
flanneld                                                                                        100%   16MB  15.8MB/s   00:00    
flanneld.conf                                                                                   100%  577     0.6KB/s   00:00    
flanneld                                                                                        100% 2121     2.1KB/s   00:00    
flanneld.conf                                                                                   100%  568     0.6KB/s   00:00    
flanneld                                                                                        100% 2131     2.1KB/s   00:00    
+ source /home/kant/kube/util.sh
++ set -e
++ SSH_OPTS='-oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oLogLevel=ERROR'
++ MASTER=
++ MASTER_IP=
++ NODE_IPS=
+ setClusterInfo
+ NODE_IPS=
+ local ii=0
+ create-etcd-opts 192.168.245.237
+ cat
+ create-kube-apiserver-opts 192.168.3.0/24 NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota 30000-32767 192.168.245.237
+ cat
+ create-kube-controller-manager-opts 192.168.245.237
+ cat
+ create-kube-scheduler-opts
+ cat
+ create-kubelet-opts 192.168.245.237 192.168.245.237 192.168.3.10 cluster.local '' ''
+ '[' -n '' ']'
+ cni_opts=
+ cat
+ create-kube-proxy-opts 192.168.245.237 192.168.245.237 ''
+ cat
+ create-flanneld-opts 127.0.0.1 192.168.245.237
+ cat
+ FLANNEL_OTHER_NET_CONFIG=
+ sudo -E -p '[sudo] password to start master: ' -- /bin/bash -ce ' 
      set -x
      cp ~/kube/default/* /etc/default/
      cp ~/kube/init_conf/* /etc/init/
      cp ~/kube/init_scripts/* /etc/init.d/

      groupadd -f -r kube-cert
       DEBUG=true ~/kube/make-ca-cert.sh "192.168.245.237" "IP:192.168.245.237,IP:192.168.3.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local"
      mkdir -p /opt/bin/
      cp ~/kube/master/* /opt/bin/
      cp ~/kube/minion/* /opt/bin/

      service etcd start
      if true; then FLANNEL_NET="172.16.0.0/16" KUBE_CONFIG_FILE="./../cluster/../cluster/ubuntu/config-default.sh" DOCKER_OPTS="" ~/kube/reconfDocker.sh ai; fi
      '
[sudo] password to start master: 
+ cp /home/kant/kube/default/etcd /home/kant/kube/default/flanneld /home/kant/kube/default/kube-apiserver /home/kant/kube/default/kube-controller-manager /home/kant/kube/default/kubelet /home/kant/kube/default/kube-proxy /home/kant/kube/default/kube-scheduler /etc/default/
+ cp /home/kant/kube/init_conf/etcd.conf /home/kant/kube/init_conf/flanneld.conf /home/kant/kube/init_conf/kube-apiserver.conf /home/kant/kube/init_conf/kube-controller-manager.conf /home/kant/kube/init_conf/kubelet.conf /home/kant/kube/init_conf/kube-proxy.conf /home/kant/kube/init_conf/kube-scheduler.conf /etc/init/
+ cp /home/kant/kube/init_scripts/etcd /home/kant/kube/init_scripts/flanneld /home/kant/kube/init_scripts/kube-apiserver /home/kant/kube/init_scripts/kube-controller-manager /home/kant/kube/init_scripts/kubelet /home/kant/kube/init_scripts/kube-proxy /home/kant/kube/init_scripts/kube-scheduler /etc/init.d/
+ groupadd -f -r kube-cert
+ DEBUG=true
+ /home/kant/kube/make-ca-cert.sh 192.168.245.237 IP:192.168.245.237,IP:192.168.3.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local
+ cert_ip=192.168.245.237
+ extra_sans=IP:192.168.245.237,IP:192.168.3.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local
+ cert_dir=/srv/kubernetes
+ cert_group=kube-cert
+ mkdir -p /srv/kubernetes
+ use_cn=false
+ '[' 192.168.245.237 == _use_gce_external_ip_ ']'
+ '[' 192.168.245.237 == _use_aws_external_ip_ ']'
+ sans=IP:192.168.245.237
+ [[ -n IP:192.168.245.237,IP:192.168.3.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local ]]
+ sans=IP:192.168.245.237,IP:192.168.245.237,IP:192.168.3.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local
++ mktemp -d -t kubernetes_cacert.XXXXXX
+ tmpdir=/tmp/kubernetes_cacert.YAN8Jg
+ trap 'rm -rf "${tmpdir}"' EXIT
+ cd /tmp/kubernetes_cacert.YAN8Jg
+ '[' -f /home/kant/kube/easy-rsa.tar.gz ']'
+ ln -s /home/kant/kube/easy-rsa.tar.gz .
+ tar xzf easy-rsa.tar.gz
+ cd easy-rsa-master/easyrsa3
+ ./easyrsa init-pki
++ date +%s
+ ./easyrsa --batch --req-cn=192.168.245.237@1465788589 build-ca nopass
+ '[' false = true ']'
+ ./easyrsa --subject-alt-name=IP:192.168.245.237,IP:192.168.245.237,IP:192.168.3.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local build-server-full kubernetes-master nopass
+ cp -p pki/issued/kubernetes-master.crt /srv/kubernetes/server.cert
+ cp -p pki/private/kubernetes-master.key /srv/kubernetes/server.key
+ ./easyrsa build-client-full kubecfg nopass
+ cp -p pki/ca.crt /srv/kubernetes/ca.crt
+ cp -p pki/issued/kubecfg.crt /srv/kubernetes/kubecfg.crt
+ cp -p pki/private/kubecfg.key /srv/kubernetes/kubecfg.key
+ chgrp kube-cert /srv/kubernetes/server.key /srv/kubernetes/server.cert /srv/kubernetes/ca.crt
+ chmod 660 /srv/kubernetes/server.key /srv/kubernetes/server.cert /srv/kubernetes/ca.crt
+ rm -rf /tmp/kubernetes_cacert.YAN8Jg
+ mkdir -p /opt/bin/
+ cp /home/kant/kube/master/etcd /home/kant/kube/master/etcdctl /home/kant/kube/master/flanneld /home/kant/kube/master/kube-apiserver /home/kant/kube/master/kube-controller-manager /home/kant/kube/master/kube-scheduler /opt/bin/
+ cp /home/kant/kube/minion/flanneld /home/kant/kube/minion/kubelet /home/kant/kube/minion/kube-proxy /opt/bin/
+ service etcd start
etcd start/running, process 74611
+ true
+ FLANNEL_NET=172.16.0.0/16
+ KUBE_CONFIG_FILE=./../cluster/../cluster/ubuntu/config-default.sh
+ DOCKER_OPTS=
+ /home/kant/kube/reconfDocker.sh ai
Error:  100: Key not found (/coreos.com) [1]
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}}
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}}
docker stop/waiting
docker start/running, process 75022
Connection to 192.168.245.237 closed.
Validating master
Validating kant@192.168.245.237
Using master 192.168.245.237
cluster "ubuntu" set.
user "ubuntu" set.
context "ubuntu" set.
switched to context "ubuntu".
Wrote config for ubuntu to /home/kant/.kube/config
... calling validate-cluster
Waiting for 1 ready nodes. 0 ready nodes, 0 registered. Retrying.
Waiting for 1 ready nodes. 0 ready nodes, 0 registered. Retrying.
Waiting for 1 ready nodes. 0 ready nodes, 0 registered. Retrying.
... (this line keeps repeating)

1 Answer:

Answer 0 (score: 0)

It looks like you have an incorrect configuration in config-default.sh. If you want to deploy a local cluster on one node (acting as both master and worker), you can configure config-default.sh as follows:

roles=${roles:-"ai"}

export NUM_NODES=${NUM_NODES:-1}

The value of NUM_NODES is the number of nodes whose role includes i.
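
For a fuller picture (taking the user and IP from your trace as an example, and assuming the stock variable names in cluster/ubuntu/config-default.sh), a single-node setup would look roughly like this:

export nodes=${nodes:-"kant@192.168.245.237"}
roles=${roles:-"ai"}
export NUM_NODES=${NUM_NODES:-1}
export SERVICE_CLUSTER_IP_RANGE=${SERVICE_CLUSTER_IP_RANGE:-192.168.3.0/24}
export FLANNEL_NET=${FLANNEL_NET:-172.16.0.0/16}

Here "a" marks a master, "i" marks a worker node, and "ai" makes the single machine play both roles, so NUM_NODES stays at 1 because there is exactly one entry whose role contains i.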