
Installing a Stolon (PostgreSQL cluster) instance with Helm

田远
2023-12-01

For ease of deployment, we install with Helm here.

1. Push the image to a private Harbor registry, so later installs can pull it faster

docker login https://reg01.sky-mobi.com #log in to Harbor
docker pull sorintlab/stolon:v0.16.0-pg10 #pull from the public registry to local
docker tag sorintlab/stolon:v0.16.0-pg10 reg01.sky-mobi.com/stolon/stolon:v0.16.0-pg10 #retag for the private registry
docker push reg01.sky-mobi.com/stolon/stolon:v0.16.0-pg10 #push to our own Harbor registry
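The four commands above follow a fixed pull/retag/push pattern. A minimal sketch wrapping it as reusable functions — the registry prefix `reg01.sky-mobi.com/stolon` comes from the commands above; everything else is just parameterization, and `mirror` still assumes you have run `docker login` first:

```shell
#!/bin/sh
# Build the private-registry tag for a public image reference.
# Usage: private_tag <public-image:tag> <private-registry-prefix>
private_tag() {
  img="$1"; prefix="$2"
  # keep only the final path component, e.g.
  # sorintlab/stolon:v0.16.0-pg10 -> stolon:v0.16.0-pg10
  echo "${prefix}/${img##*/}"
}

# Pull, retag, and push in one step (requires prior docker login).
mirror() {
  docker pull "$1" \
    && docker tag "$1" "$(private_tag "$1" "$2")" \
    && docker push "$(private_tag "$1" "$2")"
}

private_tag sorintlab/stolon:v0.16.0-pg10 reg01.sky-mobi.com/stolon
# -> reg01.sky-mobi.com/stolon/stolon:v0.16.0-pg10
```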

2. Install with Helm

helm fetch stable/stolon --untar   #download the chart locally
kubectl create namespace yunwei-database #create the namespace
helm install postgresql stable/stolon -f values.yaml -n yunwei-database #values.yaml needs your own modifications; mine is included below
helm list -n yunwei-database
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
postgresql      yunwei-database 2               2020-06-05 14:51:08.823385211 +0800 CST deployed        stolon-1.5.8    0.13.0   

If the install goes wrong and needs to be recreated, delete the release first:
helm delete postgresql -n yunwei-database

3. I use Ceph here, so a secret must be created; ceph-admin-secret.yaml is the config file

cat ceph-admin-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-k8sadmin-secret
  namespace: yunwei-database
type: "kubernetes.io/rbd"
data:
  # ceph auth get-key client.admin | base64
  key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  
kubectl apply -f ceph-admin-secret.yaml #create the secret
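The `data.key` value in the Secret must be base64-encoded, which is what the `ceph auth get-key client.admin | base64` comment in the file produces on a live Ceph cluster. A sketch of the encoding step with a placeholder key (`AQID` is not a real Ceph key):

```shell
#!/bin/sh
# Kubernetes Secret `data:` values are base64-encoded.
# On a live cluster the raw key comes from: ceph auth get-key client.admin
KEY='AQID'                   # hypothetical raw key string
printf '%s' "$KEY" | base64  # -> QVFJRA==
```

Using `printf '%s'` rather than `echo` avoids encoding a trailing newline, so the decoded value matches the key exactly.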

4. Deployment finished; check the status

Because I reconfigured with helm upgrade, there is an update-cluster-spec pod; on a fresh install with no upgrade, this pod does not exist.
kubectl get pod -n yunwei-database
NAME                                          READY   STATUS      RESTARTS   AGE
postgresql-stolon-create-cluster-jwhg9        0/1     Completed   0          23h
postgresql-stolon-keeper-0                    1/1     Running     0          46m
postgresql-stolon-keeper-1                    1/1     Running     0          52m
postgresql-stolon-proxy-6c9dbcc8-hvqx7        1/1     Running     0          23h
postgresql-stolon-proxy-6c9dbcc8-v84cz        1/1     Running     0          23h
postgresql-stolon-sentinel-7d898946c4-bvtpz   1/1     Running     0          23h
postgresql-stolon-sentinel-7d898946c4-tlkl7   1/1     Running     0          23h
postgresql-stolon-update-cluster-spec-5sx25   0/1     Completed   0          52m

5. The places I changed in values.yaml; adjust them to your own needs. For details see the Configuration section of the stolon chart; PostgreSQL parameters are documented in postgresql.conf.

Point the image at the private registry, i.e. the address we pushed to earlier:
image:
  repository: reg01.sky-mobi.com/stolon/stolon
  tag: v0.16.0-pg10

Change the persistent volume configuration; I use Ceph, dynamically provisioned through a StorageClass.
kubectl get storageclass  #list the StorageClass names
NAME       PROVISIONER    AGE
ceph-k8s   ceph.com/rbd   80d
ceph-rbd   ceph.com/rbd   120d


persistence:
  enabled: true
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClassName: "ceph-k8s"
  accessModes:
    - ReadWriteOnce
  size: 200Gi

PostgreSQL parameter settings; add whichever parameters you need here:
pgParameters:
  max_connections: "1000"
  shared_buffers: "8192MB"
  maintenance_work_mem: "2048MB"
  listen_addresses: "*"

The keeper is a StatefulSet, so I set some resource requests and limits:
keeper:
  uid_prefix: "keeper"
  replicaCount: 2
  annotations: {}
  resources:
    requests:
      cpu: 8000m
      memory: 24000Mi
    limits:
      cpu: 16000m
      memory: 48000Mi

The proxy is the access layer, exposed as a Service; pinning clusterIP here fixes the Service IP, so applications don't need to change the address when the proxy fails and restarts.
proxy:
  replicaCount: 2
  annotations: {}
  resources: {}
  priorityClassName: ""
  service:
    type: ClusterIP
#    loadBalancerIP: ""
    annotations: {}
    ports:
      proxy:
        port: 5432
        targetPort: 5432
        protocol: TCP
    clusterIP: 10.109.5.21

6. PostgreSQL failover test. Find a machine with a psql client; in my environment the K8s internal network is routable from outside, so physical servers can connect directly to the pod addresses inside K8s.

postgresql-stolon-keeper-0     10.254.99.24    standby
postgresql-stolon-keeper-1     10.254.99.40    primary

Connect directly to the standby:
 psql -h 10.254.99.24 -p 5432 postgres -U postgres     

 postgres=# select pg_is_in_recovery();
 pg_is_in_recovery 
-------------------
 t
(1 row)

Delete the StatefulSet without cascading, then delete the primary keeper-1, so only the standby postgresql-stolon-keeper-0 remains:
kubectl delete statefulset postgresql-stolon-keeper --cascade=false -n yunwei-database 
kubectl delete pod postgresql-stolon-keeper-1 -n yunwei-database

The output below shows it has been promoted to primary:
 psql -h 10.254.99.24 -p 5432 postgres -U postgres     

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery 
-------------------
 f
(1 row)
Note: after the steps above, postgresql-stolon-keeper-1 will not be restarted automatically; only a failover occurs, promoting keeper-0 to primary.

7. The other case: delete the primary keeper pod without deleting the StatefulSet

kubectl delete pod postgresql-stolon-keeper-1 -n yunwei-database
In this case, after the deletion the StatefulSet automatically starts a new keeper, maintaining our configured count of 2. A failover occurs, and the other keeper is promoted to primary.
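In both failover tests, the role check is the same: `pg_is_in_recovery()` returns `t` on a standby and `f` on a primary. A small sketch mapping that output to a role name; the commented psql invocation is an assumption about your access setup (the `-t -A` flags make psql print only the bare value):

```shell
#!/bin/sh
# Map psql's pg_is_in_recovery() output to a role name.
# 't' -> standby (in recovery), 'f' -> primary.
role_of() {
  case "$1" in
    t) echo standby ;;
    f) echo primary ;;
    *) echo unknown ;;
  esac
}

# Against a live keeper (pod address taken from the table above):
#   r="$(psql -h 10.254.99.24 -p 5432 -U postgres -t -A \
#         -c 'select pg_is_in_recovery();' postgres)"
#   role_of "$r"

role_of t   # -> standby
role_of f   # -> primary
```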

References:
https://github.com/helm/charts/tree/master/stable/stolon
https://github.com/sorintlab/stolon/tree/master/examples/kubernetes
