Why doesn't the Istio "Authentication Policy" example page work?

Date: 2018-08-15 12:08:48

Tags: kubernetes istio

The article in question: https://istio.io/docs/tasks/security/authn-policy/. Specifically, when I follow the instructions in the Setup section, I cannot connect to httpbin in either the foo or the bar namespace, while requests to the legacy namespace work fine. I suspect something is wrong with the sidecar proxy that is being injected.

Here is the YAML for the httpbin pod (the output after injection with the istioctl kube-inject --includeIPRanges "10.32.0.0/16" command). I use --includeIPRanges so the pod can still reach external IPs (for debugging, e.g. to install packages such as dnsutils). A sketch of the injection step follows, and then the resulting pod YAML.
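
Roughly the full injection step, for context. The manifest path and the target namespace here are assumptions (based on the httpbin sample that ships with the Istio release), not something confirmed above, so adjust them to the actual setup:

> istioctl kube-inject --includeIPRanges "10.32.0.0/16" -f samples/httpbin/httpbin.yaml | kubectl apply -n foo -f -

The --includeIPRanges value limits Envoy's outbound interception to the in-cluster CIDR, which is why traffic to external IPs (package mirrors, DNS tools) bypasses the sidecar.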

apiVersion: v1
kind: Pod
metadata:
  annotations:
    sidecar.istio.io/inject: "true"
    sidecar.istio.io/status: '{"version":"4120ea817406fd7ed43b7ecf3f2e22abe453c44d3919389dcaff79b210c4cd86","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
  creationTimestamp: 2018-08-15T11:40:59Z
  generateName: httpbin-8b9cf99f5-
  labels:
    app: httpbin
    pod-template-hash: "465795591"
    version: v1
  name: httpbin-8b9cf99f5-9c47z
  namespace: foo
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: httpbin-8b9cf99f5
    uid: 1450d75d-a080-11e8-aece-42010a940168
  resourceVersion: "65722138"
  selfLink: /api/v1/namespaces/foo/pods/httpbin-8b9cf99f5-9c47z
  uid: 1454b68d-a080-11e8-aece-42010a940168
spec:
  containers:
  - image: docker.io/citizenstig/httpbin
    imagePullPolicy: IfNotPresent
    name: httpbin
    ports:
    - containerPort: 8000
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-pkpvf
      readOnly: true
  - args:
    - proxy
    - sidecar
    - --configPath
    - /etc/istio/proxy
    - --binaryPath
    - /usr/local/bin/envoy
    - --serviceCluster
    - httpbin
    - --drainDuration
    - 45s
    - --parentShutdownDuration
    - 1m0s
    - --discoveryAddress
    - istio-pilot.istio-system:15007
    - --discoveryRefreshDelay
    - 1s
    - --zipkinAddress
    - zipkin.istio-system:9411
    - --connectTimeout
    - 10s
    - --statsdUdpAddress
    - istio-statsd-prom-bridge.istio-system.istio-system:9125
    - --proxyAdminPort
    - "15000"
    - --controlPlaneAuthPolicy
    - NONE
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: INSTANCE_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    - name: ISTIO_META_POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: ISTIO_META_INTERCEPTION_MODE
      value: REDIRECT
    image: docker.io/istio/proxyv2:1.0.0
    imagePullPolicy: IfNotPresent
    name: istio-proxy
    resources:
      requests:
        cpu: 10m
    securityContext:
      privileged: false
      readOnlyRootFilesystem: true
      runAsUser: 1337
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/istio/proxy
      name: istio-envoy
    - mountPath: /etc/certs/
      name: istio-certs
      readOnly: true
  dnsPolicy: ClusterFirst
  initContainers:
  - args:
    - -p
    - "15001"
    - -u
    - "1337"
    - -m
    - REDIRECT
    - -i
    - 10.32.0.0/16
    - -x
    - ""
    - -b
    - 8000,
    - -d
    - ""
    image: docker.io/istio/proxy_init:1.0.0
    imagePullPolicy: IfNotPresent
    name: istio-init
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  nodeName: gke-tvlk-data-dev-default-medium-pool-46397778-q2sb
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-pkpvf
    secret:
      defaultMode: 420
      secretName: default-token-pkpvf
  - emptyDir:
      medium: Memory
    name: istio-envoy
  - name: istio-certs
    secret:
      defaultMode: 420
      optional: true
      secretName: istio.default
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-08-15T11:41:01Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-08-15T11:44:28Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-08-15T11:40:59Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://758e130a4c31a15c1b8bc1e1f72bd7739d5fa1103132861eea9ae1a6ae1f080e
    image: citizenstig/httpbin:latest
    imageID: docker-pullable://citizenstig/httpbin@sha256:b81c818ccb8668575eb3771de2f72f8a5530b515365842ad374db76ad8bcf875
    lastState: {}
    name: httpbin
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-08-15T11:41:01Z
  - containerID: docker://9c78eac46a99457f628493975f5b0c5bbffa1dac96dab5521d2efe4143219575
    image: istio/proxyv2:1.0.0
    imageID: docker-pullable://istio/proxyv2@sha256:77915a0b8c88cce11f04caf88c9ee30300d5ba1fe13146ad5ece9abf8826204c
    lastState:
      terminated:
        containerID: docker://52299a80a0fa8949578397357861a9066ab0148ac8771058b83e4c59e422a029
        exitCode: 255
        finishedAt: 2018-08-15T11:44:27Z
        reason: Error
        startedAt: 2018-08-15T11:41:02Z
    name: istio-proxy
    ready: true
    restartCount: 1
    state:
      running:
        startedAt: 2018-08-15T11:44:28Z
  hostIP: 10.32.96.27
  initContainerStatuses:
  - containerID: docker://f267bb44b70d2d383ce3f9943ab4e917bb0a42ecfe17fe0ed294bde4d8284c58
    image: istio/proxy_init:1.0.0
    imageID: docker-pullable://istio/proxy_init@sha256:345c40053b53b7cc70d12fb94379e5aa0befd979a99db80833cde671bd1f9fad
    lastState: {}
    name: istio-init
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: docker://f267bb44b70d2d383ce3f9943ab4e917bb0a42ecfe17fe0ed294bde4d8284c58
        exitCode: 0
        finishedAt: 2018-08-15T11:41:00Z
        reason: Completed
        startedAt: 2018-08-15T11:41:00Z
  phase: Running
  podIP: 10.32.19.61
  qosClass: Burstable
  startTime: 2018-08-15T11:40:59Z

Here is the example command for the failing case (sleep.legacy -> httpbin.foo):

> kubectl exec $(kubectl get pod -l app=sleep -n legacy -o jsonpath={.items..metadata.name}) -c sleep -n legacy -- curl http://httpbin.foo:8000/ip -s -o /dev/null -w "%{http_code}\n"

000
command terminated with exit code 7
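
(For reference: curl exit code 7 means it could not connect to the host at all, which is also why %{http_code} prints 000 instead of an HTTP status.)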

**Here is the example command for the successful case (sleep.legacy -> httpbin.legacy):**

> kubectl exec $(kubectl get pod -l app=sleep -n legacy -o jsonpath={.items..metadata.name}) -c sleep -n legacy -- curl http://httpbin.legacy:8000/ip -s -o /dev/null -w "%{http_code}\n"

200

I have already followed the instructions to confirm that no mTLS policies (or similar) are defined:

> kubectl get policies.authentication.istio.io --all-namespaces
No resources found.
> kubectl get meshpolicies.authentication.istio.io
No resources found.
> kubectl get destinationrules.networking.istio.io --all-namespaces -o yaml | grep "host:"
host: istio-policy.istio-system.svc.cluster.local
host: istio-telemetry.istio-system.svc.cluster.local
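
An additional check, beyond what the task page asks for (so treat it as an assumption about what is worth inspecting): grep the same destination rules for a TLS mode, to confirm none of them force mutual TLS:

> kubectl get destinationrules.networking.istio.io --all-namespaces -o yaml | grep -B1 "mode:"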

1 Answer:

Answer 0 (score: 0)

NVM, I think I found the cause: part of my configuration was messed up. If you look at the statsd address in the proxy arguments above, it is defined with an unresolvable hostname, istio-statsd-prom-bridge.istio-system.istio-system:9125 (the .istio-system suffix appears twice). I noticed this after looking at the proxy container, which had restarted/crashed several times.
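
A hedged sketch of how one could confirm and fix this; the configmap names are the Istio 1.0 defaults and are assumptions about this particular install:

> kubectl logs $(kubectl get pod -l app=httpbin -n foo -o jsonpath={.items..metadata.name}) -c istio-proxy -n foo --previous

> kubectl get configmap istio -n istio-system -o yaml | grep statsd

> kubectl get configmap istio-sidecar-injector -n istio-system -o yaml | grep statsd

The first command shows why the previous sidecar instance exited; the grep commands locate where the bad value comes from. After correcting it to istio-statsd-prom-bridge.istio-system:9125, re-run istioctl kube-inject and redeploy the affected pods.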