I have set up a number of containers on k8s. Each pod runs a single container. There is a reverse-proxy container that calls a service in the runtime containers. I have deployed two runtime pods, v1 and v2. My goal is to use Istio to route all traffic from the reverse-proxy pod to runtime pod v1.
I have configured Istio; the screenshot below should give you an idea of the environment. [![enter image description here][1]][1]
My k8s YAML looks like this:
#Assumes create-docker-store-secret.sh used to create dockerlogin secret
#Assumes create-secrets.sh used to create key file, sam admin, and cfgsvc secrets
apiVersion: storage.k8s.io/v1beta1
# Create StorageClass with gidallocate=true to allow non-root user access to mount
# This is used by PostgreSQL container
kind: StorageClass
metadata:
name: ibmc-file-bronze-gid
labels:
kubernetes.io/cluster-service: "true"
provisioner: ibm.io/ibmc-file
parameters:
type: "Endurance"
iopsPerGB: "2"
sizeRange: "[1-12000]Gi"
mountOptions: nfsvers=4.1,hard
billingType: "hourly"
reclaimPolicy: "Delete"
classVersion: "2"
gidAllocate: "true"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ldaplib
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ldapslapd
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ldapsecauthority
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgresqldata
spec:
storageClassName: ibmc-file-bronze-gid
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: isamconfig
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 50M
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: openldap
labels:
app: openldap
spec:
selector:
matchLabels:
app: openldap
replicas: 1
template:
metadata:
labels:
app: openldap
spec:
volumes:
- name: ldaplib
persistentVolumeClaim:
claimName: ldaplib
- name: ldapslapd
persistentVolumeClaim:
claimName: ldapslapd
- name: ldapsecauthority
persistentVolumeClaim:
claimName: ldapsecauthority
- name: openldap-keys
secret:
secretName: openldap-keys
containers:
- name: openldap
image: ibmcom/isam-openldap:9.0.7.0
ports:
- containerPort: 636
env:
- name: LDAP_DOMAIN
value: ibm.com
- name: LDAP_ADMIN_PASSWORD
value: Passw0rd
- name: LDAP_CONFIG_PASSWORD
value: Passw0rd
volumeMounts:
- mountPath: /var/lib/ldap
name: ldaplib
- mountPath: /etc/ldap/slapd.d
name: ldapslapd
- mountPath: /var/lib/ldap.secAuthority
name: ldapsecauthority
- mountPath: /container/service/slapd/assets/certs
name: openldap-keys
# This line is needed when running on Kubernetes 1.9.4 or above
args: [ "--copy-service"]
# useful for debugging startup issues - can run bash, then exec to the container and poke around
# command: [ "/bin/bash"]
# args: [ "-c", "while /bin/true ; do sleep 5; done" ]
# Just this line to get debug output from openldap startup
# args: [ "--loglevel" , "trace","--copy-service"]
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: openldap
labels:
app: openldap
spec:
ports:
- port: 636
name: ldaps
protocol: TCP
selector:
app: openldap
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgresql
labels:
app: postgresql
spec:
selector:
matchLabels:
app: postgresql
replicas: 1
template:
metadata:
labels:
app: postgresql
spec:
securityContext:
runAsNonRoot: true
runAsUser: 70
fsGroup: 0
volumes:
- name: postgresqldata
persistentVolumeClaim:
claimName: postgresqldata
- name: postgresql-keys
secret:
secretName: postgresql-keys
containers:
- name: postgresql
image: ibmcom/isam-postgresql:9.0.7.0
ports:
- containerPort: 5432
env:
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: Passw0rd
- name: POSTGRES_DB
value: isam
- name: POSTGRES_SSL_KEYDB
value: /var/local/server.pem
- name: PGDATA
value: /var/lib/postgresql/data/db-files/
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgresqldata
- mountPath: /var/local
name: postgresql-keys
# useful for debugging startup issues - can run bash, then exec to the container and poke around
# command: [ "/bin/bash"]
# args: [ "-c", "while /bin/true ; do sleep 5; done" ]
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: postgresql
spec:
ports:
- port: 5432
name: postgresql
protocol: TCP
selector:
app: postgresql
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamconfig
labels:
app: isamconfig
spec:
selector:
matchLabels:
app: isamconfig
replicas: 1
template:
metadata:
labels:
app: isamconfig
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
persistentVolumeClaim:
claimName: isamconfig
- name: isamconfig-logs
emptyDir: {}
containers:
- name: isamconfig
image: ibmcom/isam:9.0.7.1_IF4
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamconfig-logs
env:
- name: SERVICE
value: config
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: ADMIN_PWD
valueFrom:
secretKeyRef:
name: samadmin
key: adminpw
readinessProbe:
tcpSocket:
port: 9443
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
tcpSocket:
port: 9443
initialDelaySeconds: 120
periodSeconds: 20
# command: [ "/sbin/bootstrap.sh" ]
imagePullSecrets:
- name: dockerlogin
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: isamconfig
spec:
# To make the LMI internet facing, make it a NodePort
type: NodePort
ports:
- port: 9443
name: isamconfig
protocol: TCP
# make this one statically allocated
nodePort: 30442
selector:
app: isamconfig
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamwrprp1
labels:
app: isamwrprp1
spec:
selector:
matchLabels:
app: isamwrprp1
replicas: 1
template:
metadata:
labels:
app: isamwrprp1
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamwrprp1-logs
emptyDir: {}
containers:
- name: isamwrprp1
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamwrprp1-logs
env:
- name: SERVICE
value: webseal
- name: INSTANCE
value: rp1
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: isamwrprp1
spec:
type: NodePort
sessionAffinity: ClientIP
ports:
- port: 443
name: isamwrprp1
protocol: TCP
nodePort: 30443
selector:
app: isamwrprp1
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamwrpmobile
labels:
app: isamwrpmobile
spec:
selector:
matchLabels:
app: isamwrpmobile
replicas: 1
template:
metadata:
labels:
app: isamwrpmobile
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamwrpmobile-logs
emptyDir: {}
containers:
- name: isamwrpmobile
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamwrpmobile-logs
env:
- name: SERVICE
value: webseal
- name: INSTANCE
value: mobile
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
# for external service access, see https://console.bluemix.net/docs/containers/cs_apps.html#cs_apps_public_nodeport
apiVersion: v1
kind: Service
metadata:
name: isamwrpmobile
spec:
type: NodePort
sessionAffinity: ClientIP
ports:
- port: 443
name: isamwrpmobile
protocol: TCP
nodePort: 30444
selector:
app: isamwrpmobile
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamruntime-v1
labels:
app: isamruntime
spec:
selector:
matchLabels:
app: isamruntime
version: v1
replicas: 1
template:
metadata:
labels:
app: isamruntime
version: v1
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamruntime-logs
emptyDir: {}
containers:
- name: isamruntime
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamruntime-logs
env:
- name: SERVICE
value: runtime
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: isamruntime-v2
labels:
app: isamruntime
spec:
selector:
matchLabels:
app: isamruntime
version: v2
replicas: 1
template:
metadata:
labels:
app: isamruntime
version: v2
spec:
securityContext:
runAsNonRoot: true
runAsUser: 6000
volumes:
- name: isamconfig
emptyDir: {}
- name: isamruntime-logs
emptyDir: {}
containers:
- name: isamruntime
image: ibmcom/isam:9.0.7.1_IF4
ports:
- containerPort: 443
volumeMounts:
- mountPath: /var/shared
name: isamconfig
- mountPath: /var/application.logs
name: isamruntime-logs
env:
- name: SERVICE
value: runtime
- name: CONTAINER_TIMEZONE
value: Europe/London
- name: AUTO_RELOAD_FREQUENCY
value: "5"
- name: CONFIG_SERVICE_URL
value: https://isamconfig:9443/shared_volume
- name: CONFIG_SERVICE_USER_NAME
value: cfgsvc
- name: CONFIG_SERVICE_USER_PWD
valueFrom:
secretKeyRef:
name: configreader
key: cfgsvcpw
livenessProbe:
exec:
command:
- /sbin/health_check.sh
- livenessProbe
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
readinessProbe:
exec:
command:
- /sbin/health_check.sh
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 2
imagePullSecrets:
- name: dockerlogin
---
apiVersion: v1
kind: Service
metadata:
name: isamruntime
spec:
ports:
- port: 443
name: isamruntime
protocol: TCP
selector:
app: isamruntime
---
I have a gateway YAML file as shown below:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: isamruntime-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
tls:
mode: SIMPLE
serverCertificate: /tmp/tls.crt
privateKey: /tmp/tls.key
---
My routing YAML file looks like this:
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: isamruntime
spec:
hosts:
- isamruntime
gateways:
- isamruntime-gateway
http:
- route:
- destination:
host: isamruntime
subset: v1
port:
number: 443
weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: isamruntime
spec:
host: isamruntime
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
The flow is: Postman tool -> ingress IP address -> pod running the reverse proxy -> runtime pod. My goal is to ensure that only the runtime v1 pod receives traffic. However, traffic is routed to both v1 and v2.

What am I doing wrong? Can someone help me?

Regards, Pranam

I tried the following, but it did not work; traffic was still routed to both v1 and v2.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: isamruntime
spec:
hosts:
- isamruntime
gateways:
- isamruntime-gateway
http:
- route:
- destination:
host: isamruntime
subset: v1
port:
number: 443
weight: 100
- destination:
host: isamruntime
subset: v2
port:
number: 443
weight: 0
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: isamruntime-v1
spec:
host: isamruntime
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
---
I tried changing the VirtualService to:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: isamruntime
spec:
hosts:
- isamruntime.com
gateways:
- isamruntime-gateway
http:
- route:
- destination:
host: isamruntime
subset: v1
port:
number: 443
weight: 100
- destination:
host: isamruntime
subset: v2
port:
number: 443
weight: 0
---
I then used curl as follows:
pranam@UNKNOWN kubernetes % curl -k -v -H "host: isamruntime.com" https://169.50.228.2:30443
* Trying 169.50.228.2...
* TCP_NODELAY set
* Connected to 169.50.228.2 (169.50.228.2) port 30443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: C=US; O=Policy Director; CN=isamconfig
* start date: Feb 18 15:33:30 2018 GMT
* expire date: Feb 14 15:33:30 2038 GMT
* issuer: C=US; O=Policy Director; CN=isamconfig
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET / HTTP/1.1
> Host: isamruntime.com
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 200 OK
< content-length: 13104
< content-type: text/html
< date: Fri, 10 Jul 2020 13:45:28 GMT
< p3p: CP="NON CUR OTPi OUR NOR UNI"
< server: WebSEAL/9.0.7.1
< x-frame-options: DENY
< x-content-type-options: nosniff
< cache-control: no-store
< x-xss-protection: 1
< content-security-policy: frame-ancestors 'none'
< strict-transport-security: max-age=31536000; includeSubDomains
< pragma: no-cache
< Set-Cookie: PD-S-SESSION-ID=1_2_0_cGgEZiwrYKP0QtvDtZDa4l7-iPb6M3ZsW4I+aeUhn9HuAfAd; Path=/; Secure; HttpOnly
<
<!DOCTYPE html>
<!-- Copyright (C) 2015 IBM Corporation -->
<!-- Copyright (C) 2000 Tivoli Systems, Inc. -->
<!-- Copyright (C) 1999 IBM Corporation -->
<!-- Copyright (C) 1998 Dascom, Inc. -->
<!-- All Rights Reserved. -->
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>LoginPage</title>
<style>
The curl command returns the reverse proxy's login page, as expected. My runtime service sits behind the reverse proxy, and the reverse proxy calls it. I saw somewhere in the docs that `mesh` can be used as a gateway; that did not help either.

I ran another curl command that actually triggers a call to the reverse proxy, which in turn calls the runtime:
curl -k -v -H "host: isamruntime.com" https://169.50.228.2:30443/mga/sps/oauth/oauth20/token
* Trying 169.50.228.2...
* TCP_NODELAY set
* Connected to 169.50.228.2 (169.50.228.2) port 30443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: C=US; O=Policy Director; CN=isamconfig
* start date: Feb 18 15:33:30 2018 GMT
* expire date: Feb 14 15:33:30 2038 GMT
* issuer: C=US; O=Policy Director; CN=isamconfig
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET /mga/sps/oauth/oauth20/token HTTP/1.1
> Host: isamruntime.com
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 400 Bad Request
< content-language: en-US
< content-type: application/json;charset=UTF-8
< date: Fri, 10 Jul 2020 13:56:32 GMT
< p3p: CP="NON CUR OTPi OUR NOR UNI"
< transfer-encoding: chunked
< x-frame-options: SAMEORIGIN
< cache-control: no-store, no-cache=set-cookie
< expires: Thu, 01 Dec 1994 16:00:00 GMT
< strict-transport-security: max-age=31536000; includeSubDomains
< pragma: no-cache
< Set-Cookie: AMWEBJCT!%2Fmga!JSESSIONID=00004EKuX3PlcIBBhcwGnKf50ac:9e48435e-a71f-4b8a-8fb6-ef95c5f36c51; Path=/; Secure; HttpOnly
< Set-Cookie: PD_STATEFUL_c728ed2e-159a-11e8-b9c9-0242ac120004=%2Fmga; Path=/
< Set-Cookie: PD-S-SESSION-ID=1_2_0_6kSM-YBjsgCZnwNGOCOvjA+C9KBhYXlKkyuWUKpZ7RnCKVcy; Path=/; Secure; HttpOnly
<
* Connection #0 to host 169.50.228.2 left intact
{"error_description":"FBTOAU232E The client MUST use the HTTP POST method when making access token requests.","error":"invalid_request"}* Closing connection 0
The error is expected, because this endpoint only allows HTTP POST.

  [1]: https://i.stack.imgur.com/dOMnD.png
Answer 0 (score: 1)
> Traffic is routed to both v1 and v2

This most likely means that Istio is not handling the traffic at all, and the plain K8s Service is doing simple round-robin load balancing instead.
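A quick first check (a sketch, assuming the workloads run in the `default` namespace) is whether the Envoy sidecar was injected at all; without a sidecar, VirtualService and DestinationRule rules never apply. Automatic injection is controlled by a namespace label:

```yaml
# Hypothetical check, assuming the pods run in the default namespace:
# automatic sidecar injection requires this label, and pods must be
# recreated after it is added (each pod should then show 2/2 containers
# in `kubectl get pods`).
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled
```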
I believe you are seeing exactly the situation described in the Debugging Istio: How to Fix a Broken Service Mesh (Cloud Next '19) session. It is a very useful session for learning what `istioctl` can do and for debugging unexpected behavior, but long story short: you need to adjust your Service definition.
---
apiVersion: v1
kind: Service
metadata:
name: isamruntime
spec:
ports:
- port: 443
name: http-isamruntime # Add prefix of http
protocol: TCP
selector:
app: isamruntime
Reference: https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService

Note: the `http-` prefix above assumes you want to terminate TLS before hitting the service. Depending on your use case, you may also need to adjust the VirtualService.
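If TLS is instead passed through to the runtime pods unterminated (the manifests above suggest the runtime serves HTTPS itself), a TLS route matched on SNI is another option. This is an illustrative sketch only, with names taken from the manifests above:

```yaml
# Sketch: SNI-based TLS passthrough routing. Assumes the Service port is
# renamed with a tls- prefix and that clients send SNI "isamruntime".
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: isamruntime
spec:
  hosts:
  - isamruntime
  tls:
  - match:
    - port: 443
      sniHosts:
      - isamruntime
    route:
    - destination:
        host: isamruntime
        subset: v1
        port:
          number: 443
```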
Answer 1 (score: 1)

I got my flow working. I did not need a Gateway, because my traffic goes from the reverse proxy to the runtime; both sit inside the k8s cluster, so this is east-west traffic. My Service needed a `tcp-` port-name prefix, and my VirtualService needed a `tcp` match. The YAML files are given below. Thank you all for pointing me in the right direction.

My Service YAML:
---
apiVersion: v1
kind: Service
metadata:
name: isamruntime
spec:
ports:
- port: 443
name: tcp-isamruntime # Add prefix of tcp to match traffic type
protocol: TCP
selector:
app: isamruntime
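For context, Istio decides how to treat a Service port from its name prefix (`<protocol>[-<suffix>]`); the `tcp-` prefix above marks the port as opaque TCP so that the VirtualService's `tcp` rules apply. An illustrative fragment:

```yaml
# Illustrative only: recognized protocol prefixes include http-, http2-,
# grpc-, https-, tls-, tcp-, mongo-, and redis-. A name without a
# recognized prefix is treated as plain TCP (or protocol-sniffed on
# newer Istio versions).
ports:
- port: 443
  name: tcp-isamruntime   # opaque TCP; matched by VirtualService `tcp` routes
```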
My VirtualService and DestinationRule YAML:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: isamruntime
spec:
hosts:
- isamruntime
tcp:
- match:
- port: 443
route:
- destination:
host: isamruntime.default.svc.cluster.local
port:
number: 443
subset: v1
weight: 0
- destination:
host: isamruntime.default.svc.cluster.local
port:
number: 443
subset: v2
weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: isamruntime
spec:
host: isamruntime.default.svc.cluster.local
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
---
Thanks

Answer 2 (score: 0)

@jt97, thank you for looking at the question. I tried your suggestion like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: isamruntime
spec:
hosts:
- isamruntime
gateways:
- isamruntime-gateway
http:
- route:
- destination:
host: isamruntime
subset: v1
port:
number: 443
weight: 100
- destination:
host: isamruntime
subset: v2
port:
number: 443
weight: 0
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: isamruntime-v1
spec:
host: isamruntime
subsets:
- name: v1
labels:
version: v1
# - name: v2
# labels:
# version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: isamruntime-v2
spec:
host: isamruntime
subsets:
- name: v2
labels:
version: v2
# - name: v2
# labels:
# version: v2
However, it did not work.

Could it be something to do with the host name? Does it need the namespace, e.g. isamruntime.default.svc.cluster.local, or should my pods be running in a non-default namespace?

Regards, Pranam