This project is no longer actively developed or maintained. The project exists here for historical reference. If you are interested in the future of the project and taking over stewardship, please contact etcd-dev@googlegroups.com.
The etcd operator manages etcd clusters deployed to Kubernetes and automates tasks related to operating an etcd cluster.
There are more spec examples on setting up clusters with different configurations.
Read Best Practices for more information on how to better use etcd operator.
Read RBAC docs for how to setup RBAC rules for etcd operator if RBAC is in place.
Read Developer Guide for setting up a development environment if you want to contribute.
See the Resources and Labels doc for an overview of the resources created by the etcd-operator.
See instructions on how to install/uninstall the etcd operator.
$ kubectl create -f example/example-etcd-cluster.yaml
A 3 member etcd cluster will be created.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
example-etcd-cluster-gxkmr9ql7z 1/1 Running 0 1m
example-etcd-cluster-m6g62x6mwc 1/1 Running 0 1m
example-etcd-cluster-rqk62l46kw 1/1 Running 0 1m
See client service for how to access etcd clusters created by the operator.
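If you are not using minikube, one option is to port-forward the client service locally (this assumes the operator created a service named example-etcd-cluster-client, per the client service doc; check the actual service name with kubectl get services):

```shell
$ kubectl port-forward service/example-etcd-cluster-client 2379:2379
$ export ETCDCTL_API=3
$ etcdctl --endpoints=http://127.0.0.1:2379 endpoint health
```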
If you are working with minikube locally, create a nodePort service and test that etcd is responding:
$ kubectl create -f example/example-etcd-cluster-nodeport-service.json
$ export ETCDCTL_API=3
$ export ETCDCTL_ENDPOINTS=$(minikube service example-etcd-cluster-client-service --url)
$ etcdctl put foo bar
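To confirm the write succeeded, read the key back; with ETCDCTL_API=3, the key and value are printed on separate lines:

```shell
$ etcdctl get foo
foo
bar
```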
Destroy the etcd cluster:
$ kubectl delete -f example/example-etcd-cluster.yaml
Create an etcd cluster:
$ kubectl apply -f example/example-etcd-cluster.yaml
In example/example-etcd-cluster.yaml, the initial cluster size is 3. Modify the file and change the size from 3 to 5.
$ cat example/example-etcd-cluster.yaml
apiVersion: "etcd.database.coreos.com/v1beta2"
kind: "EtcdCluster"
metadata:
name: "example-etcd-cluster"
spec:
size: 5
version: "3.2.13"
Apply the size change to the cluster CR:
$ kubectl apply -f example/example-etcd-cluster.yaml
The etcd cluster will scale to 5 members (5 pods):
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
example-etcd-cluster-cl2gpqsmsw 1/1 Running 0 5m
example-etcd-cluster-cx2t6v8w78 1/1 Running 0 5m
example-etcd-cluster-gxkmr9ql7z 1/1 Running 0 7m
example-etcd-cluster-m6g62x6mwc 1/1 Running 0 7m
example-etcd-cluster-rqk62l46kw 1/1 Running 0 7m
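To watch the resize as it happens, you can filter on the labels the operator attaches to each member pod (this assumes the standard etcd_cluster label; verify with kubectl get pods --show-labels if unsure):

```shell
$ kubectl get pods -l etcd_cluster=example-etcd-cluster -w
```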
Similarly, we can decrease the size of the cluster from 5 back to 3 by changing the size field again and reapplying the change.
$ cat example/example-etcd-cluster.yaml
apiVersion: "etcd.database.coreos.com/v1beta2"
kind: "EtcdCluster"
metadata:
name: "example-etcd-cluster"
spec:
size: 3
version: "3.2.13"
$ kubectl apply -f example/example-etcd-cluster.yaml
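Instead of editing the file, the same resize can also be applied directly as a merge patch against the EtcdCluster custom resource:

```shell
$ kubectl patch etcdcluster example-etcd-cluster --type=merge -p '{"spec": {"size": 3}}'
```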
We should see the etcd cluster eventually reduce to 3 pods:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
example-etcd-cluster-cl2gpqsmsw 1/1 Running 0 6m
example-etcd-cluster-gxkmr9ql7z 1/1 Running 0 8m
example-etcd-cluster-rqk62l46kw 1/1 Running 0 9m
If a minority of etcd members crash, the etcd operator will automatically recover from the failure. Let's walk through this in the following steps.
Create an etcd cluster:
$ kubectl create -f example/example-etcd-cluster.yaml
Wait until all three members are up. Simulate a member failure by deleting a pod:
$ kubectl delete pod example-etcd-cluster-cl2gpqsmsw --now
The etcd operator will recover from the failure by creating a new pod example-etcd-cluster-n4h66wtjrg:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
example-etcd-cluster-gxkmr9ql7z 1/1 Running 0 10m
example-etcd-cluster-n4h66wtjrg 1/1 Running 0 26s
example-etcd-cluster-rqk62l46kw 1/1 Running 0 10m
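If you still have the nodePort service and etcdctl environment from earlier, you can confirm that the replacement member joined and the cluster is healthy again:

```shell
$ etcdctl member list
$ etcdctl endpoint health
```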
Destroy etcd cluster:
$ kubectl delete -f example/example-etcd-cluster.yaml
Let's walk through operator recovery in the following steps.
Create an etcd cluster:
$ kubectl create -f example/example-etcd-cluster.yaml
Wait until all three members are up. Then stop the etcd operator and delete one of the etcd pods:
$ kubectl delete -f example/deployment.yaml
deployment "etcd-operator" deleted
$ kubectl delete pod example-etcd-cluster-8gttjl679c --now
pod "example-etcd-cluster-8gttjl679c" deleted
Next, restart the etcd operator. It should recover itself and the etcd clusters it manages.
$ kubectl create -f example/deployment.yaml
deployment "etcd-operator" created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
example-etcd-cluster-m8gk76l4ns 1/1 Running 0 3m
example-etcd-cluster-q6mff85hml 1/1 Running 0 3m
example-etcd-cluster-xnfvm7lg66 1/1 Running 0 11s
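To see how the operator reconciled the cluster after restarting, inspect its logs (this assumes the deployment in example/deployment.yaml labels the operator pod with name=etcd-operator; adjust the selector to match your deployment):

```shell
$ kubectl logs -l name=etcd-operator
```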
Have the following YAML file ready:
$ cat upgrade-example.yaml
apiVersion: "etcd.database.coreos.com/v1beta2"
kind: "EtcdCluster"
metadata:
name: "example-etcd-cluster"
spec:
size: 3
version: "3.1.10"
repository: "quay.io/coreos/etcd"
Create an etcd cluster with the version specified (3.1.10) in the yaml file:
$ kubectl apply -f upgrade-example.yaml
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
example-etcd-cluster-795649v9kq 1/1 Running 1 3m
example-etcd-cluster-jtp447ggnq 1/1 Running 1 4m
example-etcd-cluster-psw7sf2hhr 1/1 Running 1 4m
The container image version should be 3.1.10:
$ kubectl get pod example-etcd-cluster-795649v9kq -o yaml | grep "image:" | uniq
image: quay.io/coreos/etcd:v3.1.10
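The same check can be done without grep by asking for the image field directly via JSONPath (this assumes the etcd container is the first container in the pod spec):

```shell
$ kubectl get pod example-etcd-cluster-795649v9kq -o jsonpath='{.spec.containers[0].image}'
quay.io/coreos/etcd:v3.1.10
```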
Now modify the file upgrade-example.yaml and change the version from 3.1.10 to 3.2.13:
$ cat upgrade-example.yaml
apiVersion: "etcd.database.coreos.com/v1beta2"
kind: "EtcdCluster"
metadata:
name: "example-etcd-cluster"
spec:
size: 3
version: "3.2.13"
Apply the version change to the cluster CR:
$ kubectl apply -f upgrade-example.yaml
Wait ~30 seconds. The container image version should be updated to v3.2.13:
$ kubectl get pod example-etcd-cluster-795649v9kq -o yaml | grep "image:" | uniq
image: gcr.io/etcd-development/etcd:v3.2.13
Check the other two pods and you should see the same result.
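You can also ask the operator what version it reports on the custom resource itself (this assumes the EtcdCluster status carries a currentVersion field, as recent operator releases do; inspect the full status with -o yaml if the field is absent):

```shell
$ kubectl get etcdcluster example-etcd-cluster -o jsonpath='{.status.currentVersion}'
```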
Note: The provided etcd backup/restore operators are example implementations.
Follow the etcd backup operator walkthrough to backup an etcd cluster.
Follow the etcd restore operator walkthrough to restore an etcd cluster on Kubernetes from backup.