Image name: ikubernetes/demoapp
Basic introduction
Listens on 0.0.0.0:80 by default; the listening port can be changed with the "-p|--port" option.
Available URLs
GET /
GET /hostname
GET /user-agent
GET /configs
GET and POST /livez; a POST request can take a livez parameter
GET and POST /readyz; a POST request can take a readyz parameter
Environment variables
DEPLOYENV and RELEASE.
HOST: the listening address, defaults to 0.0.0.0
PORT: the listening port, defaults to 80
Three versions are currently available: v1.0, v1.1, v1.2
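As a quick illustration of the variables and probe endpoints listed above, here is a minimal Pod sketch; the Pod name, the port value 8080 and the probe wiring are illustrative assumptions, not taken from the article:

apiVersion: v1
kind: Pod
metadata:
  name: demoapp-test            # hypothetical name
spec:
  containers:
  - name: demoapp
    image: ikubernetes/demoapp:v1.0
    env:
    - name: HOST                # listening address, defaults to 0.0.0.0
      value: "0.0.0.0"
    - name: PORT                # listening port, defaults to 80
      value: "8080"
    ports:
    - containerPort: 8080
    livenessProbe:              # the image serves GET /livez
      httpGet:
        path: /livez
        port: 8080
    readinessProbe:             # the image serves GET /readyz
      httpGet:
        path: /readyz
        port: 8080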
Image name: ikubernetes/admin-box
A busybox-like test box, mainly used for network testing.
Installed commands include bash, curl, bind-tools, iproute2, iptables, ipvsadm, tcpdump, nmap, netcat-openbsd, openssl, etc.
Three versions are currently available: v1.0, v1.1, v1.2
Run it with: kubectl run -it client --image=ikubernetes/admin-box --restart=Never --rm --command -- /bin/sh
Image name: ikubernetes/proxy
A mock frontend image; the backend address is passed in via an env variable, and it listens on port 8080.
- env:
- name: PROXYURL
value: http://demoapp3:8080
Currently available versions: v0.1.0 and v0.1.1
# Start a web server
docker run -p 8080:8080 -p 8079:8079 fortio/fortio server &
# Run a load test directly
docker run fortio/fortio load -c 5 -n 20 -qps 0 http://www.baidu.com
fortio load -c 5 -n 20 -qps 0 http://www.baidu.com
-c: number of concurrent connections
-n: total number of requests
-qps: queries per second; 0 means unlimited
An in-cluster variant using a Kubernetes Job is sketched after the reference link below.
# https://www.shangmayuan.com/a/a31de356bca244008921893b.html
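To run the same kind of load test from inside a Kubernetes cluster, here is a hedged sketch of a one-off Job using the fortio image; the Job name and the target URL http://demoapp:8080/ are assumptions (the demoapp Service is only created later in this article):

apiVersion: batch/v1
kind: Job
metadata:
  name: fortio-load             # hypothetical name
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: fortio
        image: fortio/fortio
        # the image's entrypoint is fortio itself, so only the subcommand and flags are passed
        args: ["load", "-c", "5", "-n", "200", "-qps", "0", "http://demoapp:8080/"]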
# cat deploy-demoapp-v10.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: demoappv10
version: v1.0
name: demoappv10
spec:
progressDeadlineSeconds: 600
replicas: 3
selector:
matchLabels:
app: demoapp
version: v1.0
template:
metadata:
labels:
app: demoapp
version: v1.0
spec:
containers:
- image: ikubernetes/demoapp:v1.0
imagePullPolicy: IfNotPresent
name: demoapp
env:
- name: "PORT"
value: "8080"
ports:
- containerPort: 8080
name: web
protocol: TCP
resources:
limits:
cpu: 50m
--- # the Service is optional
apiVersion: v1
kind: Service
metadata:
name: demoappv10
spec:
ports:
- name: http
port: 8080
protocol: TCP
targetPort: 8080
selector:
app: demoapp
version: v1.0
type: ClusterIP
kubectl run -it client2 --image=ikubernetes/admin-box --restart=Never --rm --command -- /bin/sh
Connectivity test:
# cat deploy-demoapp-v11.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: demoappv11
version: v1.1
name: demoappv11
spec:
progressDeadlineSeconds: 600
replicas: 2
selector:
matchLabels:
app: demoapp
version: v1.1
template:
metadata:
labels:
app: demoapp
version: v1.1
spec:
containers:
- image: ikubernetes/demoapp:v1.1
imagePullPolicy: IfNotPresent
name: demoapp
env:
- name: "PORT"
value: "8080"
ports:
- containerPort: 8080
name: web
protocol: TCP
resources:
limits:
cpu: 50m
--- # the Service is optional
apiVersion: v1
kind: Service
metadata:
name: demoappv11
spec:
ports:
- name: http-8080
port: 8080
protocol: TCP
targetPort: 8080
selector:
app: demoapp
version: v1.1
type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
name: demoapp3
spec:
ports:
- name: http
port: 8080
protocol: TCP
targetPort: 8080
selector:
app: demoapp1
type: ClusterIP
# cat service-demoapp.yaml
## This Service will match both the v10 and v11 versions.
---
apiVersion: v1
kind: Service
metadata:
name: demoapp
spec:
ports:
- name: http
port: 8080
protocol: TCP
targetPort: 8080
selector:
app: demoapp
type: ClusterIP
At this point requests are scheduled round-robin between v10 and v11.
# while true; do curl proxy;sleep 0.$RANDOM;done
# cat deploy-proxy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: proxy
spec:
progressDeadlineSeconds: 600
replicas: 1
selector:
matchLabels:
app: proxy
template:
metadata:
labels:
app: proxy
spec:
containers:
- env:
- name: PROXYURL
value: http://demoapp:8080 # the Service address; simulates the frontend reaching the backend via svc+port
image: ikubernetes/proxy:v0.1.1
imagePullPolicy: IfNotPresent
name: proxy
ports:
- containerPort: 8080
name: web
protocol: TCP
resources:
limits:
cpu: 50m
--- # Service for the proxy
apiVersion: v1
kind: Service
metadata:
name: proxy
spec:
ports:
- name: http-80
port: 80
protocol: TCP
targetPort: 8080
selector:
app: proxy
---
Architecture diagram at this point:
Note: for Istio, the Service configured here only serves to create a routing rule; in the end traffic is not forwarded through the Service itself.
# cat virutalservice-demoapp.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: demoapp
spec:
hosts:
- demoapp # this name must map to a corresponding cluster so that traffic can be proxied to it
http:
- name: canary
match:
- uri:
prefix: /camera360 # URIs with the /camera360 prefix are routed to the cluster whose host is demoappv11.default.svc.cluster.local
rewrite:
uri: /
route:
- destination:
host: demoappv11.default.svc.cluster.local
- name: default # everything else is proxied to the default cluster, the v1.0 version
route:
- destination:
host: demoappv10.default.svc.cluster.local
Test result
A VirtualService has been configured here; it can be viewed with kubectl get vs. VirtualService is an Istio custom resource that can be configured to implement advanced routing.
# kubectl get vs
NAME GATEWAYS HOSTS AGE
demoapp ["demoapp"] 11m
Next, a DestinationRule needs to be configured to divide the backend servers into subsets.
# cat destinationrule-demoapp.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: demoapp
spec:
host: demoapp
subsets:
- name: v10
labels:
version: v1.0 # pods labeled version=v1.0 go into subset v10
- name: v11
labels:
version: v1.1 # pods labeled version=v1.1 go into subset v11
The command istioctl pc cluster demoappv11-7984f579f5-cnlns shows the cluster information visible to the specified pod.
Once the subsets are in place, the v10 and v11 Services can be deleted; Istio will automatically detect this and remove the corresponding configuration.
# cat virutalservice-demoapp.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: demoapp
spec:
hosts:
- demoapp
http:
- name: canary
match:
- uri:
prefix: /camera360
rewrite:
uri: /
route:
- destination:
host: demoapp
subset: v10 # select the subset
- name: default
route:
- destination:
host: demoapp
subset: v11 # select the subset
## To make the difference visible, the v10/v11 order here is the reverse of the previous experiment.
Request result:
Kiali result
# cat gateway-proxy.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: proxy-gateway
namespace: istio-system # must be the namespace where the ingress gateway pod runs
spec:
selector:
app: istio-ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "istio-demo.camera360.com"
# kubectl get gw -n istio-system
NAME AGE
kiali-gateway 46h
proxy-gateway 28h
# cat virtualservice-proxy.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: proxy
spec:
hosts:
- "istio-demo.camera360.com" # 对应于gateways/proxy-gateway,需要和gateway的hosts对应
gateways: # 关联域名
- istio-system/proxy-gateway # 相关定义仅应用于Ingress Gateway上,需要指定gateway的名称,对应一致
#- mesh # 如果需要网格内部也可以访问,需要开启
http:
- name: default
route:
- destination:
host: proxy
# kubectl get vs
NAME GATEWAYS HOSTS AGE
demoapp ["demoapp"] 3h12m
proxy ["istio-system/proxy-gateway"] ["istio-demo.camera360.com"] 4m37s
Now the domain only needs to be resolved to the IP of istio-ingressgateway and it is ready to use.
Access result
Clear the previous configuration. To make the service reachable from outside, we need to add a gateway (kind: Gateway).
# kubectl get vs
No resources found in default namespace.
# kubectl get gw -n istio-system
NAME AGE
kiali-gateway 2d1h # kiali's gateway, not relevant here
# cat gateway-proxy.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: proxy-gateway
namespace: istio-system # must be the namespace where the ingress gateway pod runs
spec:
selector:
app: istio-ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "istio-demo.camera360.com"
# cat deploy-backend.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: backend
version: v3.6
name: backendv36
spec:
progressDeadlineSeconds: 600
replicas: 2
selector:
matchLabels:
app: backend
version: v3.6
template:
metadata:
creationTimestamp: null
labels:
app: backend
version: v3.6
spec:
containers:
- image: ikubernetes/gowebserver:v0.1.0
imagePullPolicy: IfNotPresent
name: gowebserver
env:
- name: "SERVICE_NAME"
value: "backend"
- name: "SERVICE_PORT"
value: "8082"
- name: "SERVICE_VERSION"
value: "v3.6"
ports:
- containerPort: 8082
name: web
protocol: TCP
resources:
limits:
cpu: 50m
---
apiVersion: v1
kind: Service
metadata:
name: backend
spec:
ports:
- name: http-web
port: 8082
protocol: TCP
targetPort: 8082
selector:
app: backend
version: v3.6
# cat virtualservice-demoapp.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: demoapp
spec:
hosts:
- demoapp
- "istio-demo.camera360.com"
gateways:
- istio-system/proxy-gateway # these definitions apply only on the Ingress Gateway
http:
- name: rewrite
match:
- uri:
prefix: /camera360
rewrite:
uri: /
route:
- destination:
host: demoapp
subset: v11
- name: redirect
match:
- uri:
prefix: "/backend"
redirect:
uri: /
authority: backend
port: 8082
- name: default
route:
- destination:
host: demoapp
subset: v10
# Expected results:
## curl -L istio-demo.camera360.com/backend => redirected to the separate backend service
## curl istio-demo.camera360.com/camera360 => proxied to the v11 subset
## curl istio-demo.camera360.com => default requests go to the v10 subset
##
State after the configuration has been applied successfully
# kubectl get vs
NAME GATEWAYS HOSTS AGE
demoapp ["istio-system/proxy-gateway"] ["demoapp","istio-demo.camera360.com"] 24s
# kubectl get gw -n istio-system
NAME AGE
kiali-gateway 2d1h
proxy-gateway 22s
Test
Kiali result
while true; do curl -L istio-demo.camera360.com/backend;sleep 0.$RANDOM;done
while true; do curl istio-demo.camera360.com/camera360;sleep 0.$RANDOM;done
while true; do curl istio-demo.camera360.com ;sleep 0.$RANDOM;done
A VirtualService defines a set of traffic rules for a particular destination service. As the name suggests, a VirtualService formally represents a virtual service: traffic that matches its conditions is forwarded to the corresponding backend, which can be a regular service or a subset of a service defined in a DestinationRule.
In short, a vs is a set of rules that forward incoming traffic to a backend, and the backend can be either a service or a subset.
Question 1: how is traffic attached, i.e. how is the rule associated with a service?
Question 2: how is a domain name brought in? (See the sketch below.)
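A minimal sketch that answers both questions, reusing names from this article (the demoapp host and the istio-system/proxy-gateway defined in a later section); it is illustrative rather than one of the configurations used in the experiments:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp
spec:
  hosts:
  - demoapp                        # question 1: in-mesh traffic addressed to the demoapp Service is attached to these rules
  - "istio-demo.camera360.com"     # question 2: an external domain name is simply listed as an additional host
  gateways:
  - istio-system/proxy-gateway     # traffic entering through this ingress gateway is also covered
  - mesh                           # keep the rules applied to the sidecars inside the mesh as well
  http:
  - route:
    - destination:
        host: demoapp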
# cat virtualservice-demoapp.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: demoapp
spec:
hosts:
- demoapp
http:
- name: weight-based-routing
route:
- destination:
host: demoapp
subset: v10
weight: 99
- destination:
host: demoapp
subset: v11
weight: 1
Kiali result
# cat virtualservice-demoapp.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: demoapp
spec:
hosts:
- demoapp
http:
- name: canary
match:
- headers:
x-canary:
exact: "true"
route:
- destination:
host: demoapp
subset: v11
headers:
request:
set:
User-Agent: Chrome
response:
add:
x-canary: "true"
- name: default
headers:
response:
add:
X-Envoy: test
route:
- destination:
host: demoapp
subset: v10
bash# curl -H "x-canary: true" demoapp:8080/user-agent # 查看agent
# curl -H "x-canary: true" demoapp:8080/ #查看请求结果
# cat virtualservice-demoapp.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: demoapp
spec:
hosts:
- demoapp
http:
- name: canary
match:
- uri:
prefix: /canary
rewrite:
uri: /
route:
- destination:
host: demoapp
subset: v11
fault:
abort: # inject aborts
percentage: # proportion of requests
value: 20 # 20%
httpStatus: 555 # respond with HTTP 555
- name: default
route:
- destination:
host: demoapp
subset: v10
fault:
delay: # inject delays
percentage: # proportion of requests
value: 20 # 20%
fixedDelay: 3s # delay of 3 seconds
Request
# while true; do curl -L proxy/camera360;sleep 0.$RANDOM;done
# while true; do curl proxy ;sleep 0.$RANDOM;done
Kiali result
To be completed
# cat virtualservice-demoapp.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: demoapp
spec:
hosts:
- demoapp
http:
- name: traffic-mirror
route:
- destination:
host: demoapp # the original cluster
subset: v10
mirror:
host: demoapp # mirror the traffic to this cluster as well
subset: v11
while true; do curl -L proxy;sleep 0.$RANDOM;done
Kiali result
The traffic at the four marked positions should be identical.
DestinationRule has the following important attributes (a skeleton example follows the table):
Attribute | Description | Required |
---|---|---|
host | The target the rule applies to | Required |
trafficPolicy | The policy content, including load balancing, connection pool and outlier detection | Optional |
subsets | Defines subsets of the service | Optional |
exportTo | Controls the cross-namespace visibility of the DestinationRule; if unset, it is visible to all namespaces by default. | Optional |
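Putting the four attributes together, a minimal skeleton might look as follows; the values are placeholders:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: demoapp
spec:
  host: demoapp                 # host: the target the rule applies to
  trafficPolicy:                # trafficPolicy: load balancing, connection pool, outlier detection, ...
    loadBalancer:
      simple: ROUND_ROBIN
  subsets:                      # subsets: subsets of the service, selected by pod labels
  - name: v10
    labels:
      version: v1.0
  exportTo:                     # exportTo: visibility; "*" (the default) means visible to all namespaces
  - "*"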
Important settings supported by trafficPolicy (a sketch follows the table):
Attribute | Description |
---|---|
loadBalancer | LoadBalancerSettings; describes the service's load-balancing algorithm. |
connectionPool | ConnectionPoolSettings; describes the service's connection-pool configuration. |
outlierDetection | OutlierDetection; describes the service's outlier detection. |
tls | Describes the service's TLS connection settings. |
portLevelSettings | A list of PortTrafficPolicy; allows traffic policies to be set per port. |
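A hedged sketch of where each of these settings sits; the fragment goes under spec of a DestinationRule (as in the skeleton above) and the values are placeholders, not recommendations:

trafficPolicy:
  loadBalancer:
    simple: LEAST_CONN          # load-balancing algorithm
  connectionPool:
    tcp:
      maxConnections: 100       # connection-pool limit
  outlierDetection:
    consecutive5xxErrors: 5     # eject a host after 5 consecutive 5xx responses
    interval: 1m
    baseEjectionTime: 3m
  tls:
    mode: ISTIO_MUTUAL          # use Istio mutual TLS towards the upstream
  portLevelSettings:            # per-port overrides
  - port:
      number: 8080
    loadBalancer:
      simple: ROUND_ROBIN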
The simple field (standard load-balancing algorithms that require no tuning) supports the following values:
Name | Description |
---|---|
ROUND_ROBIN | Round-robin policy. The default. |
LEAST_CONN | The least-request load balancer uses an O(1) algorithm: it picks two random healthy hosts and chooses the one with fewer active requests. |
RANDOM | The random load balancer picks a random healthy host. When no health-check policy is configured, random generally performs better than round-robin. |
PASSTHROUGH | Forwards the connection directly to the destination address requested by the client, i.e. no load balancing is done. |
LEAST_CONN
# cat loadBalancer-lc.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: demoapp-lb
spec:
host: demoappv11
trafficPolicy:
loadBalancer:
simple: LEAST_CONN
# No effective way to verify the behaviour has been found yet
Per-port policies (PortTrafficPolicy entries under portLevelSettings) support the following parameters:
Parameter | Type | Description | Required |
---|---|---|---|
port | PortSelector | The port number on the destination service this policy applies to. | Optional |
loadBalancer | LoadBalancerSettings | Settings controlling the load-balancer algorithm. | Optional |
connectionPool | ConnectionPoolSettings | Settings controlling the volume of connections to the upstream service. | Optional |
outlierDetection | OutlierDetection | Settings controlling the eviction of unhealthy hosts from the load-balancing pool. | Optional |
tls | TLSSettings | TLS-related settings for connections to the upstream service. | Optional |
# cat loadBalancer-port.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: demoapp-lb-port
spec:
host: demoappv12
trafficPolicy: # this policy applies to all ports
portLevelSettings:
- port:
number: 80
loadBalancer:
simple: LEAST_CONN
- port:
number: 9080
loadBalancer:
simple: ROUND_ROBIN
Consistent hashing is only effective for HTTP; the hash is computed from the value of an HTTP header or cookie (or the source IP).
The load balancer forwards requests with the same hash to the same backend instance, thereby providing session affinity. (A sketch follows the table.)
Name | Description |
---|---|
httpHeaderName | Hash based on an HTTP header |
httpCookie | Hash based on a cookie |
useSourceIp | Hash based on the source IP |
minimumRingSize | The minimum number of virtual nodes on the hash ring; more nodes make load balancing finer-grained. If there are fewer backend instances than virtual nodes, each backend instance gets one virtual node. |
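A small sketch of source-IP affinity, again placed under a DestinationRule's trafficPolicy; the ring size is an illustrative value:

loadBalancer:
  consistentHash:
    useSourceIp: true           # hash on the client source IP
    minimumRingSize: 1024       # minimum number of virtual nodes on the hash ring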
Header-based consistent-hash binding
# cat destinationrule-demoapp.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: demoapp
spec:
host: demoapp
trafficPolicy:
loadBalancer:
simple: LEAST_CONN
connectionPool:
tcp:
maxConnections: 100
connectTimeout: 30ms
tcpKeepalive:
time: 7200s
interval: 75s
http:
http2MaxRequests: 1000
maxRequestsPerConnection: 10
subsets:
- name: v10
labels:
version: v1.0
trafficPolicy:
loadBalancer:
consistentHash: # consistent hashing
httpHeaderName: X-User # hash on this specific header
- name: v11
labels:
version: v1.1
# Requests carrying the special header stick to the same pod
while true; do curl -H "X-User: test" demoapp:8080/hostname;sleep 0.$RANDOM;done
# Requests without the special header are scheduled randomly
while true; do curl -H "X" demoapp:8080/hostname;sleep 0.$RANDOM;done
The connection pool is used to set thresholds that prevent excessive traffic to one service from affecting the whole cluster.
The configuration below sets up a TCP connection pool with a maximum of 80 connections and a connect timeout of 25 ms, and configures a TCP keepalive probing policy:
trafficPolicy:
connectionPool:
tcp:
maxConnections: 80
connectTimeout: 25ms
tcpKeepalive:
probes: 5 # number of unanswered probes after which the connection is considered dead
time: 3600s # how long the connection may stay idle before probes are sent
interval: 60s # interval between probes
The HTTP connection pool works at layer 7, so its application-level parameters allow finer-grained control.
HTTP pool settings are usually combined with the corresponding TCP settings. The configuration below adds HTTP connection-pool control on top of the TCP pool above: the service gets at most 80 connections, at most 800 concurrent requests, no more than 10 requests per connection, and a connect timeout of 25 ms:
trafficPolicy:
connectionPool:
tcp:
maxConnections: 80
connectTimeout: 25ms
http:
http2MaxRequests: 800
maxRequestsPerConnection: 10
Rate limiting
# cat destinationrule-demoapp.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: demoapp
spec:
host: demoapp
trafficPolicy:
loadBalancer:
consistentHash:
httpCookie:
name: user
ttl: 60s
#simple: LEAST_CONN
connectionPool:
tcp:
maxConnections: 10
connectTimeout: 30ms # connect timeout
tcpKeepalive:
time: 7000s
interval: 30s
http:
http1MaxPendingRequests: 1
http2MaxRequests: 1 # at most 1 request handled at a time
maxRequestsPerConnection: 1
subsets:
- name: v10
labels:
version: v1.0
- name: v11
labels:
version: v1.1
The effect only shows when two clients request the pods' Service at the same time: when two requests arrive simultaneously, one of them stalls.
while true; do curl demoapp:8080 ;sleep 0.0$RANDOM;done
When a backend service instance is detected as unavailable or unhealthy, it is marked as an outlier and ejected, receiving no traffic for a period of time. After a while the ejected instance is readmitted and allowed to handle requests again; if it is still unhealthy it is ejected again, for a longer period.
Relevant configuration parameters:
Example: check each service instance for access errors over a 4-minute window; an instance with 5 consecutive errors is ejected for 10 minutes, and no more than 30% of the instances may be ejected. When the first ejection period expires, the abnormal instance receives traffic again; if it still does not work properly it is re-ejected, the second time for 20 minutes, and so on.
trafficPolicy:
connectionPool:
tcp:
maxConnections: 80
connectTimeout: 25ms
http:
http2MaxRequests: 800
maxRequestsPerConnection: 10
outlierDetection:
consecutiveErrors: 5
interval: 4m
baseEjectionTime: 10m
maxEjectionPercent: 30
## Suppose the demoapp service has 10 instances. The effect of the configuration below is: the demoapp service is allowed at most 80 connections and at most 800 requests, with no more than 10 requests per connection and a connect timeout of 25 ms. In addition, if within 4 minutes some demoapp instance produces 5 consecutive access errors, for example 5xx responses, that instance is ejected for 10 minutes, with at most 3 instances ejected in total. When the first ejection period expires, the abnormal instance receives traffic again; if it still does not work properly it is re-ejected, the second time for 20 minutes, and so on.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: demoapp
spec:
host: demoapp
trafficPolicy:
connectionPool:
tcp:
maxConnections: 80
connectTimeout: 25ms
http:
http2MaxRequests: 800
maxRequestsPerConnection: 10
outlierDetection:
consecutiveErrors: 5
interval: 4m
baseEjectionTime: 10m
maxEjectionPercent: 30
First create the certificate, the same way it is normally done in Kubernetes.
# Generate the SSL certificate
openssl req -out kiali.camera360.com.csr -newkey rsa:2048 -nodes -keyout kiali.camera360.com.key -subj "/CN=kiali.camera360.com/O=kiali organization"
openssl x509 -req -days 365 -signkey kiali.camera360.com.key -in kiali.camera360.com.csr -out kiali.camera360.com.crt  # self-sign the CSR with its own key
# Create the secret
kubectl create secret tls camera360.com --key=kiali.camera360.com.key --cert=kiali.camera360.com.crt -n istio-system
Configure the gateway
# cat kiali-gateway.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: kiali-gateway
namespace: istio-system
spec:
selector:
app: istio-ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "kiali.camera360.com"
tls:
httpsRedirect: true
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: camera360.com
hosts:
- "kiali.camera360.com"
Kiali provides the following features:
What has been shown so far is the automatically discovered service topology graph.
For a fancier view, these display options can be ticked.
Address: tracing.camera360.com
Backup of Istio's ConfigMap
apiVersion: v1
data:
mesh: |-
accessLogFile: /dev/stdout
defaultConfig:
discoveryAddress: istiod.istio-system.svc:15012
proxyMetadata: {}
tracing:
zipkin:
address: zipkin.istio-system:9411
enablePrometheusMerge: true
rootNamespace: istio-system
trustDomain: cluster.local
meshNetworks: 'networks: {}'
kind: ConfigMap
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"v1","data":{"mesh":"accessLogFile:
/dev/stdout\ndefaultConfig:\n discoveryAddress:
istiod.istio-system.svc:15012\n proxyMetadata: {}\n tracing:\n
zipkin:\n address: zipkin.istio-system:9411\nenablePrometheusMerge:
true\nrootNamespace: istio-system\ntrustDomain:
cluster.local","meshNetworks":"networks:
{}"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"install.operator.istio.io/owning-resource":"unknown","install.operator.istio.io/owning-resource-namespace":"istio-system","istio.io/rev":"default","operator.istio.io/component":"Pilot","operator.istio.io/managed":"Reconcile","operator.istio.io/version":"1.12.1","release":"istio"},"name":"istio","namespace":"istio-system"}}
creationTimestamp: '2022-02-18T06:30:15Z'
labels:
install.operator.istio.io/owning-resource: unknown
install.operator.istio.io/owning-resource-namespace: istio-system
istio.io/rev: default
operator.istio.io/component: Pilot
operator.istio.io/managed: Reconcile
operator.istio.io/version: 1.12.1
release: istio
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
'f:data':
'f:mesh': {}
'f:meshNetworks': {}
'f:metadata':
'f:annotations':
'f:kubectl.kubernetes.io/last-applied-configuration': {}
'f:labels':
'f:install.operator.istio.io/owning-resource': {}
'f:install.operator.istio.io/owning-resource-namespace': {}
'f:istio.io/rev': {}
'f:operator.istio.io/component': {}
'f:operator.istio.io/managed': {}
'f:operator.istio.io/version': {}
'f:release': {}
manager: istio-operator
operation: Apply
time: '2022-02-18T06:30:15Z'
name: istio
namespace: istio-system
resourceVersion: '1240311132'
uid: 7921734c-7abc-4ede-a0e6-2168a7614916
Istio's default access-log location; for the full content see the backup above.
# Change the default configuration lines to the following
accessLogEncoding: JSON
accessLogFile: /dev/stdout
accessLogFormat: |-
{
"remote-ip": "%DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT%",
"authority": "%REQ(:AUTHORITY)%",
"bytes_received": "%BYTES_RECEIVED%",
"bytes_sent": "%BYTES_SENT%",
"downstream_local_address": "%DOWNSTREAM_LOCAL_ADDRESS%",
"downstream_remote_address": "%DOWNSTREAM_REMOTE_ADDRESS%",
"duration": "%DURATION%",
"istio_policy_status": "%DYNAMIC_METADATA(istio.mixer:status)%",
"method": "%REQ(:METHOD)%",
"path": "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%",
"protocol": "%PROTOCOL%",
"request_id": "%REQ(X-REQUEST-ID)%",
"requested_server_name": "%REQUESTED_SERVER_NAME%",
"response_code": "%RESPONSE_CODE%",
"response_flags": "%RESPONSE_FLAGS%",
"route_name": "%ROUTE_NAME%",
"start_time": "%START_TIME%",
"upstream_cluster": "%UPSTREAM_CLUSTER%",
"upstream_host": "%UPSTREAM_HOST%",
"upstream_local_address": "%UPSTREAM_LOCAL_ADDRESS%",
"upstream_service_time": "%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%",
"upstream_transport_failure_reason": "%UPSTREAM_TRANSPORT_FAILURE_REASON%",
"user_agent": "%REQ(USER-AGENT)%",
"referer": "%REQ(referer)%",
"x_forwarded_for": "%REQ(X-FORWARDED-FOR)%"
}
# Save the configuration
Restart the istio-ingressgateway Deployment. Note that there is only one replica here, so restart it while the service is not busy.
Finally, check that the configuration has taken effect.
Collect the logs into SLS; the collection approach follows the same logic as before, just adjust the configuration.
# Set the env variable on the Deployment, or edit the logging system configuration directly.
- name: aliyun_logs_istio-ingress
value: stdout
Istio needs to know where the endpoints are and which service they belong to; to locate the service registry, Istio connects to a service discovery system. We configure Kubernetes custom resources (CRDs) to control this behaviour; Istio configuration is done mainly through VirtualServices (vs) and DestinationRules (dr). Istio's main job is to generate rules, so it is referred to as the control plane. Envoy is the sidecar injected into every pod; it is the proxy that actually carries out those operations and is referred to as the data plane. Every virtual service contains a set of routing rules (generated from the svc); Istio evaluates them in order and matches a given request to the actual destination address (cluster) specified by the virtual service.
View the details of a pod managed by Istio
# istioctl x describe pod myapp10-5f94b9c956-lwtfx
Pod: myapp10-5f94b9c956-lwtfx # pod name
Pod Ports: 5000 (demoapp), 15090 (istio-proxy) # 5000 is the port exposed by the pod, 15090 is the sidecar's port
--------------------
Service: app # name of the svc
Port: http 80/HTTP targets pod port 8080
DestinationRule: myapp-sub for "app" # a DestinationRule named myapp-sub is configured for the app svc
Matching subsets: v10 # the subset this pod matches
(Non-matching subsets v11,v12)
No Traffic Policy
Exposed on Ingress Gateway http://47.99.131.98
VirtualService: route-vs # the vs this pod matches
/camera360* # the matched rule
1 additional destination(s) that will not reach this pod