In the kubeasz project, back up the cluster by running /etc/ansible/23.backup.yml on the control (deploy) node:
ansible-playbook /etc/ansible/23.backup.yml
ETCDCTL_API=3 etcdctl --write-out=table snapshot status /etc/ansible/.cluster/backup/snapshot.db
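If the playbook finished successfully, the snapshot should be sitting in the backup directory used above; listing it is a quick sanity check (same path as in the status command, not part of the original notes):
ls -lh /etc/ansible/.cluster/backup/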
Delete a workload (here the kubernetes-dashboard deployment) so there is something to verify after the restore:
kubectl -n kube-system delete deployments.apps kubernetes-dashboard
1> Specify which etcd data backup to restore; by default the most recent backup is used:
cat /etc/ansible/roles/cluster-restore/defaults/main.yml
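In the kubeasz release these notes appear to target, that defaults file holds a single variable selecting the snapshot file; the name db_to_restore below is assumed from that release, so confirm it against the cat output. A hedged sketch of restoring a specific, non-latest backup by overriding the variable on the command line (the snapshot filename is a placeholder):
ansible-playbook /etc/ansible/24.restore.yml -e db_to_restore=snapshot_example.db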
2> Restore from the etcd backup:
ansible-playbook /etc/ansible/24.restore.yml
3> Check whether the backup has been restored:
kubectl -n kube-system get pod
4> Test results: pods come back up normally, but any pod that restarts gets stuck in Terminating, and after a force delete no replacement pod is created automatically; other resource types were not tested; the flannel network plugin is broken afterwards (this restore method is not recommended).
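For reference, the force delete mentioned above was along these lines (the pod name is a placeholder; the flags are standard kubectl):
kubectl -n kube-system delete pod <pod-name> --grace-period=0 --force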
Simulate a broken control plane by removing the kube-apiserver binary and restarting its service:
sudo rm -rf /opt/kube/bin/kube-apiserver
systemctl restart kube-apiserver.service
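With the binary gone the service cannot come back up; a quick check that the failure has taken effect (standard commands, not from the original notes):
systemctl status kube-apiserver.service    # expect a failed or restarting unit
kubectl get node                           # expect the API call to fail or hang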
Since the in-place restore above is not recommended, the alternative is to rebuild the cluster and then restore from the backup.
1> Tear down the cluster (if you skip the reboot and do not remove the leftover containers, pods stay in a crash state after the restore):
ansible-playbook /etc/ansible/99.clean.yml
docker ps -q -a|xargs docker rm -f
reboot
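Before reinstalling, it is worth confirming nothing was left behind (this assumes docker itself survived 99.clean.yml; skip the first check if it was removed too):
docker ps -a                                  # expect no leftover containers
ps -ef | grep -E 'kube|etcd' | grep -v grep   # expect no kubernetes/etcd processes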
2> Reinstall the cluster:
/etc/ansible/tools/easzup -S
ansible-playbook /etc/ansible/90.setup.yml
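A quick health check of the freshly installed cluster before restoring (standard kubectl checks, not part of the original notes):
kubectl get node
kubectl -n kube-system get pod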
3> Restore the cluster from the backup:
docker exec -it kubeasz bash
ansible-playbook /etc/ansible/24.restore.yml
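To confirm the restore worked, check that the workloads captured in the backup are back, e.g. the dashboard deployment deleted during the earlier test (assuming that was the test case here):
kubectl -n kube-system get deployments.apps kubernetes-dashboard
kubectl get pod --all-namespaces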
Upgrading the k8s version with kubeasz:
1> Back up the cluster:
ansible-playbook /etc/ansible/23.backup.yml
2> Stop the k8s services:
systemctl stop kube-apiserver.service
systemctl stop kube-controller-manager.service
systemctl stop kubelet.service
systemctl stop kube-proxy.service
systemctl stop kube-scheduler.service
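A quick way to confirm all five services are really stopped (standard systemctl usage, not in the original notes):
systemctl is-active kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy   # each should report inactive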
3> Download the binary package and stage the new binaries:
wget https://dl.k8s.io/v1.20.1/kubernetes-server-linux-amd64.tar.gz
tar -xf kubernetes-server-linux-amd64.tar.gz
\cp kubernetes/server/bin/kube* /etc/ansible/bin/
sudo rm -rf /etc/ansible/bin/kube*.tar
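Before running the upgrade playbook, it helps to confirm the staged binaries are the new version (--version is a standard flag for these binaries):
/etc/ansible/bin/kube-apiserver --version   # expect v1.20.1
/etc/ansible/bin/kubelet --version          # expect v1.20.1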
4> Run the upgrade:
ansible-playbook -t upgrade_k8s /etc/ansible/22.upgrade.yml
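After the upgrade playbook finishes, the nodes should report the new release (standard checks):
kubectl get node -o wide   # the VERSION column should show v1.20.1
kubectl version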