Production cluster kube-proxy component upgrade plan
Goal: upgrade kube-proxy from 1.13.7 to 1.16.8
Attachment: kube-proxy-1.16.8.yaml (download from the attachment link)
Pre-check: confirm whether the kube-proxy daemonset still passes the --resource-container flag:
kubectl get daemonset kube-proxy --namespace kube-system -o yaml | grep 'resource-container='
If this produces any output, remove the --resource-container argument with kubectl edit:
kubectl edit daemonset kube-proxy --namespace kube-system
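A non-interactive alternative (a sketch, not part of the original plan): drop the --resource-container argument with a JSON patch. The array index (3 here) is an assumption; confirm where the flag sits in the container command from the grep output before patching.
kubectl -n kube-system patch daemonset kube-proxy --type='json' \
  -p='[{"op":"remove","path":"/spec/template/spec/containers/0/command/3"}]'
Re-run the grep afterwards; no output means the flag is gone.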
Back up the current kube-proxy resources (2021xxxx is a date placeholder):
kubectl -n kube-system get configmaps kube-proxy -o yaml >> old-kube-proxy-1.13.7-2021xxxx.yaml
kubectl -n kube-system get configmaps kube-proxy-config -o yaml >> old-kube-proxy-1.13.7-2021xxxx.yaml
kubectl -n kube-system get sa kube-proxy -o yaml >> old-kube-proxy-1.13.7-2021xxxx.yaml
kubectl -n kube-system get clusterrolebindings.rbac.authorization.k8s.io eks:kube-proxy -o yaml >> old-kube-proxy-1.13.7-2021xxxx.yaml
kubectl -n kube-system get daemonsets.apps kube-proxy -o yaml >> old-kube-proxy-1.13.7-2021xxxx.yaml
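Note that >> appends the resources back to back with no YAML document separators between them. If the backup should be directly re-appliable later, one option (a sketch covering the same resources and filename) is to write it with --- separators instead:
for r in 'configmaps kube-proxy' 'configmaps kube-proxy-config' 'sa kube-proxy' \
         'clusterrolebindings.rbac.authorization.k8s.io eks:kube-proxy' \
         'daemonsets.apps kube-proxy'; do
  echo '---' >> old-kube-proxy-1.13.7-2021xxxx.yaml
  # $r is intentionally unquoted so it splits into <resource> <name>
  kubectl -n kube-system get $r -o yaml >> old-kube-proxy-1.13.7-2021xxxx.yaml
done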
Look up the cluster API server endpoint:
aws eks describe-cluster \
--name <cluster-name> \
--region <region-code> \
--query 'cluster.endpoint' \
--output text
Returns something like:
https://<A89DBB2140C8AC0C2F920A36CCC6E18C>.sk1.<region-code>.eks.amazonaws.com
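To avoid copying the endpoint by hand, a sketch that captures it in a shell variable (same cluster-name and region placeholders as above) for substitution into the new template:
CLUSTER_ENDPOINT=$(aws eks describe-cluster \
  --name <cluster-name> \
  --region <region-code> \
  --query 'cluster.endpoint' \
  --output text)
echo "$CLUSTER_ENDPOINT"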
Edit the template file (e.g. fill in the cluster endpoint obtained above and any cluster-specific values), then apply it:
kubectl apply -f kube-proxy-temp-1.16.8.yaml
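After applying, a quick sketch to confirm the rollout and the new image tag (the k8s-app=kube-proxy label is the usual EKS daemonset label; adjust it if yours differs):
kubectl -n kube-system rollout status daemonset kube-proxy
kubectl -n kube-system get pods -l k8s-app=kube-proxy \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'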
Verification:
After the kube-proxy pods are Running, ssh to a worker node and record the current rules of the test service's KUBE-SVC chain:
ssh ec2-user@work01
sudo iptables -L <KUBE-SVC-XXXX> -t nat
On the master (kubectl) side, scale the test deployment up:
kubectl -n <grafana> scale deployments.apps <grafana-dep> --replicas=5
Back on the worker node, list the chain again:
sudo iptables -L <KUBE-SVC-XXXX> -t nat
If the chain picks up the new endpoints in step with the scaling, kube-proxy is working normally.
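A sketch for the worker-side check, assuming the test service is the grafana one scaled above: the KUBE-SVC chain name can be read from the rule comments in the nat table, and the chain should grow from one KUBE-SEP entry to five as the deployment scales.
sudo iptables -t nat -L KUBE-SERVICES -n | grep -i grafana    # reveals the KUBE-SVC-XXXX chain name
sudo iptables -t nat -L <KUBE-SVC-XXXX> -n                    # count the KUBE-SEP-* rules before and after scaling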
Restore (scale the test deployment back down):
kubectl -n <grafana> scale deployments.apps <grafana-dep> --replicas=1
Rollback (if the upgrade fails):
Method 1 (preferred) - roll the daemonset back to its previous revision:
kubectl -n kube-system rollout undo daemonsets.apps kube-proxy
After the rollback succeeds, ssh to a worker node and verify the iptables rules as above.
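Before and after the undo, a sketch to confirm which revision and image the daemonset is running:
kubectl -n kube-system rollout history daemonsets.apps kube-proxy
kubectl -n kube-system rollout status daemonset kube-proxy
kubectl -n kube-system get daemonset kube-proxy \
  -o jsonpath='{.spec.template.spec.containers[0].image}'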
Method 2 - re-apply the backed-up 1.13.7 manifests:
Prepare old-kube-proxy-1.13.7.yaml (the backup reformatted into a manifest that can be applied directly; see the sketch after this block), then apply it:
kubectl apply -f old-kube-proxy-1.13.7.yaml
After the pods are Running, repeat the verification steps above.
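One possible way to turn the raw backup into a directly appliable manifest (a sketch, assuming the backup was saved with --- separators as noted above and that yq v4 is available): strip the server-populated fields before applying.
yq eval 'del(.status) | del(.metadata.resourceVersion) | del(.metadata.uid) | del(.metadata.creationTimestamp) | del(.metadata.selfLink)' \
  old-kube-proxy-1.13.7-2021xxxx.yaml > old-kube-proxy-1.13.7.yaml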