kubelet and kube-proxy crash when enabling dual-stack in Kubernetes

Date: 2020-04-09 09:46:52

Tags: kubernetes

I am trying to enable dual-stack in Kubernetes, and I am running into the following problems on different Kubernetes versions.

1. Kubernetes version 1.16.1

I initialized the Kubernetes cluster without dual-stack enabled; later I needed to enable dual-stack on the existing cluster. Following the Kubernetes documentation, I enabled dual-stack in the kubelet config file (/var/lib/kubelet/config.yml) with the option below (a sketch of where it sits in the file follows the snippet):

featureGates:
    IPv6DualStack: true
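
For context, the gate sits at the top level of the KubeletConfiguration. A minimal sketch, assuming the kubeadm-style header that 1.16 writes by default (the apiVersion/kind lines are my assumption; the rest of my file is unchanged):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  IPv6DualStack: true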

Then I restarted the kubelet service, and it started crashing. The logs are pasted below:

Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr  8 05:33:40 ip-172-31-26-126 systemd[1]: Started Kubernetes systemd probe.
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.095968    1760 server.go:410] Version: v1.16.1
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.096412    1760 plugins.go:100] No cloud provider specified.
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.096613    1760 server.go:773] Client rotation is on, will bootstrap in background
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.099775    1760 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.164019    1760 server.go:644] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.164343    1760 container_manager_linux.go:265] container manager verified user specified cgroup-root exists: []
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.164356    1760 container_manager_linux.go:270] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.164447    1760 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.164452    1760 container_manager_linux.go:305] Creating device plugin manager: true
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.164470    1760 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{kubelet.sock /var/lib/kubelet/device-plugins/ map[] {0 0} <nil> {{} [0 0 0]} 0x1b6b160 0x799f338 0x1b6bb60 map[] map[] map[] map[] map[] 0xc00060f980 [0] 0x799f338}
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.164493    1760 state_mem.go:36] [cpumanager] initializing new in-memory state store
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.164574    1760 state_mem.go:84] [cpumanager] updated default cpuset: ""
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.166243    1760 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.166263    1760 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider:  &{{0 0} 0x799f338 10000000000 0xc0005ab140 <nil> <nil> <nil> <nil> map[memory:{{104857600 0} {<nil>}  BinarySI}]}
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.166726    1760 kubelet.go:287] Adding pod path: /etc/kubernetes/manifests
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.166780    1760 kubelet.go:312] Watching apiserver
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.189840    1760 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.190090    1760 client.go:104] Start docker client with request timeout=2m0s
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: W0408 05:33:40.200047    1760 docker_service.go:563] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.200065    1760 docker_service.go:240] Hairpin mode set to "hairpin-veth"
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.234874    1760 docker_service.go:255] Docker cri networking managed by cni
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.243874    1760 docker_service.go:260] Docker Info: &{ID:WR2U:QQ5H:XSCY:IBHE:WLGL:6PCN:NCHC:MONB:B366:F4D5:FRTM:2SSN Containers:21 ContainersRunning:18 ContainersPaused:0 ContainersStopped:3 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem <unknown>] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:116 OomKillDisable:true NGoroutines:108 SystemTime:2020-04-08T05:33:40.23615938Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.4.0-1101-aws OperatingSystem:Ubuntu 16.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0007ce770 NCPU:2 MemTotal:4080873472 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-26-126 Labels:[] ExperimentalBuild:false ServerVersion:19.03.8 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support]}
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.244886    1760 docker_service.go:273] Setting cgroupDriver to cgroupfs
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.259357    1760 remote_runtime.go:59] parsed scheme: ""
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.259375    1760 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.259408    1760 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0  <nil>}] <nil>}
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.259417    1760 clientconn.go:577] ClientConn switching balancer to "pick_first"
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.259442    1760 remote_image.go:50] parsed scheme: ""
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.259450    1760 remote_image.go:50] scheme "" not registered, fallback to default scheme
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.259462    1760 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0  <nil>}] <nil>}
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.259469    1760 clientconn.go:577] ClientConn switching balancer to "pick_first"
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: E0408 05:33:40.262374    1760 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: #011For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.273421    1760 kuberuntime_manager.go:207] Container runtime docker initialized, version: 19.03.8, apiVersion: 1.40.0
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.280326    1760 server.go:1065] Started kubelet
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.280659    1760 server.go:145] Starting to listen on 0.0.0.0:10250
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.281577    1760 server.go:354] Adding debug handlers to kubelet server.
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: E0408 05:33:40.282845    1760 kubelet.go:1302] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.291220    1760 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.291521    1760 status_manager.go:156] Starting to sync pod status with apiserver
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.291767    1760 kubelet.go:1822] Starting kubelet main sync loop.
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.291992    1760 kubelet.go:1839] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.296915    1760 volume_manager.go:249] Starting Kubelet Volume Manager
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.297422    1760 desired_state_of_world_populator.go:131] Desired state populator starts to run
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.323140    1760 clientconn.go:104] parsed scheme: "unix"
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.323518    1760 clientconn.go:104] scheme "unix" not registered, fallback to default scheme
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.323834    1760 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: I0408 05:33:40.324091    1760 clientconn.go:577] ClientConn switching balancer to "pick_first"
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: panic: runtime error: index out of range
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: goroutine 471 [running]:
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: k8s.io/kubernetes/pkg/kubelet/dockershim/network/cni.(*cniNetworkPlugin).GetPodNetworkStatus(0xc000844a50, 0xc000f37ca2, 0xb, 0xc000f37c89, 0x18, 0x42ee0bb, 0x6, 0xc000f96700, 0x40, 0xc000d9f7d0, ...)
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: #011/workspace/anago-v1.16.1-beta.0.37+d647ddbd755faf/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/dockershim/network/cni/cni_others.go:78 +0x420
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: k8s.io/kubernetes/pkg/kubelet/dockershim/network.(*PluginManager).GetPodNetworkStatus(0xc0005823e0, 0xc000f37ca2, 0xb, 0xc000f37c89, 0x18, 0x42ee0bb, 0x6, 0xc000f96700, 0x40, 0x0, ...)
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: #011/workspace/anago-v1.16.1-beta.0.37+d647ddbd755faf/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/dockershim/network/plugins.go:391 +0x1f9
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: k8s.io/kubernetes/pkg/kubelet/dockershim.(*dockerService).getIPsFromPlugin(0xc00044f130, 0xc00099ca80, 0x40, 0x78c0000, 0x7981100, 0x0, 0x0)
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: #011/workspace/anago-v1.16.1-beta.0.37+d647ddbd755faf/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/dockershim/docker_sandbox.go:335 +0x1c3
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: k8s.io/kubernetes/pkg/kubelet/dockershim.(*dockerService).getIPs(0xc00044f130, 0xc000f966c0, 0x40, 0xc00099ca80, 0x15333aae, 0xed61f5371, 0x0)
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: #011/workspace/anago-v1.16.1-beta.0.37+d647ddbd755faf/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/dockershim/docker_sandbox.go:373 +0xe3
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: k8s.io/kubernetes/pkg/kubelet/dockershim.(*dockerService).PodSandboxStatus(0xc00044f130, 0x4ad8a20, 0xc00099ca50, 0xc000ff81a0, 0xc00044f130, 0xc00099ca50, 0xc000ad0bd0)
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: #011/workspace/anago-v1.16.1-beta.0.37+d647ddbd755faf/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/dockershim/docker_sandbox.go:439 +0x133
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: k8s.io/kubernetes/vendor/k8s.io/cri-api/pkg/apis/runtime/v1alpha2._RuntimeService_PodSandboxStatus_Handler(0x42c4ec0, 0xc00044f130, 0x4ad8a20, 0xc00099ca50, 0xc0000c8f60, 0x0, 0x4ad8a20, 0xc00099ca50, 0xc000f9e960, 0x42)
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: #011/workspace/anago-v1.16.1-beta.0.37+d647ddbd755faf/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/cri-api/pkg/apis/runtime/v1alpha2/api.pb.go:7663 +0x23e
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: k8s.io/kubernetes/vendor/google.golang.org/grpc.(*Server).processUnaryRPC(0xc00002d760, 0x4b45120, 0xc000650d80, 0xc000f31a00, 0xc000585680, 0x78c87c0, 0x0, 0x0, 0x0)
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: #011/workspace/anago-v1.16.1-beta.0.37+d647ddbd755faf/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/server.go:995 +0x466
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: k8s.io/kubernetes/vendor/google.golang.org/grpc.(*Server).handleStream(0xc00002d760, 0x4b45120, 0xc000650d80, 0xc000f31a00, 0x0)
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: #011/workspace/anago-v1.16.1-beta.0.37+d647ddbd755faf/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/server.go:1275 +0xda6
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: k8s.io/kubernetes/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc00058fbb0, 0xc00002d760, 0x4b45120, 0xc000650d80, 0xc000f31a00)
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: #011/workspace/anago-v1.16.1-beta.0.37+d647ddbd755faf/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/server.go:710 +0x9f
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
Apr  8 05:33:40 ip-172-31-26-126 kubelet[1760]: #011/workspace/anago-v1.16.1-beta.0.37+d647ddbd755faf/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/server.go:708 +0xa1
Apr  8 05:33:40 ip-172-31-26-126 systemd[1]: kubelet.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Apr  8 05:33:40 ip-172-31-26-126 systemd[1]: kubelet.service: Unit entered failed state.
Apr  8 05:33:40 ip-172-31-26-126 systemd[1]: kubelet.service: Failed with result 'exit-code'.
2. Kubernetes version 1.18.0

I initialized Kubernetes again in the same way as the case above.
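For reference, a rough sketch of the kubeadm init configuration I used (this is my reconstruction, assuming a kubeadm v1beta2 ClusterConfiguration, not a copy of the actual file; the pod CIDRs mirror the clusterCIDR that kubeadm wrote into the kube-proxy ConfigMap below):

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
featureGates:
  IPv6DualStack: true
networking:
  podSubnet: 10.112.0.0/12,fc00::/24

This time the kubelet configuration worked fine, but kube-proxy crashed. Here is the kube-proxy ConfigMap: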
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.112.0.0/12,fc00::/24
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocalMode: ""
    enableProfiling: false
    featureGates:
      IPv6DualStack: true
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: ""
    nodePortAddresses: null
    oomScoreAdj: null
    proxyMode: "ipvs"
    portRange: ""
    showHiddenMetricsForVersion: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      networkName: ""
      sourceVip: ""
  kubeconfig.conf: |-
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://172.31.17.235:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
  creationTimestamp: "2020-04-08T07:23:23Z"
  labels:
    app: kube-proxy
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:config.conf: {}
        f:kubeconfig.conf: {}
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
    manager: kubeadm
    operation: Update
    time: "2020-04-08T07:23:23Z"
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "209"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kube-proxy

It started crashing with the following logs:

I0408 07:47:04.322813       1 node.go:136] Successfully retrieved node IP: 172.31.17.235
I0408 07:47:04.322844       1 server_others.go:186] Using iptables Proxier.
I0408 07:47:04.322858       1 server_others.go:193] creating dualStackProxier for iptables.
I0408 07:47:04.324233       1 server.go:583] Version: v1.18.0
I0408 07:47:04.324665       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0408 07:47:04.325256       1 config.go:315] Starting service config controller
I0408 07:47:04.325270       1 shared_informer.go:223] Waiting for caches to sync for service config
I0408 07:47:04.325287       1 config.go:133] Starting endpoints config controller
I0408 07:47:04.325296       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
E0408 07:47:04.335792       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)

Can anyone help me solve this problem?

Thanks

0 Answers