
How to Install a MySQL PXC (Percona XtraDB Cluster) Cluster on Kubernetes

宇文鸿振
2023-12-01

Percona provides an official best practice for deploying a PXC cluster on Kubernetes: the Percona Kubernetes Operator for PXC, https://github.com/percona/percona-xtradb-cluster-operator

The Operator-based installation proceeds as follows:

1. Clone the percona-xtradb-cluster-operator repository

$ git clone -b v1.9.0 https://github.com/percona/percona-xtradb-cluster-operator
$ cd percona-xtradb-cluster-operator

2. Create the custom resource definitions

$ kubectl apply -f deploy/crd.yaml

3. Create a namespace for PXC, e.g. pxc (all resources created in the following steps live in this namespace)

$ kubectl create namespace pxc

4. Create the RBAC resources (see the Kubernetes documentation on RBAC authorization)

$ kubectl apply -f deploy/rbac.yaml -n pxc

5. Start the operator

$ kubectl apply -f deploy/operator.yaml -n pxc

If pod creation at this step fails with the following error:

Failed to pull image "percona/percona-xtradb-cluster-operator:1.9.0": rpc error: code = Unknown desc = error pulling image configuration: received unexpected HTTP status: 500 Internal Server Error

you can work around it by changing the version of the percona-xtradb-cluster-operator image in operator.yaml to 1.8.0:

...
  image: percona/percona-xtradb-cluster-operator:1.8.0
...

6. Create the secrets (see the Kubernetes documentation on Secrets). The secrets object holds the default accounts and passwords used by the PXC operator; edit this file if you want to change them.

$ kubectl create -f deploy/secrets.yaml -n pxc

7. Choose the worker nodes that will run MySQL and label them. (Later steps use hostPath volumes to keep the MySQL data files on the worker node's filesystem; to prevent a rebuilt pod from losing access to its historical data, the PXC pods must be pinned to specific worker nodes.) For a three-node PXC cluster, for example, pick worker1, worker2, and worker3:

$ kubectl label node worker1 service_type=pxc
$ kubectl label node worker2 service_type=pxc
$ kubectl label node worker3 service_type=pxc

Note: never set the label value to a reserved word such as true or yes; otherwise the PerconaXtraDBCluster resource created in step 9 will fail to recognize it.

8. Create the data directory on worker1, worker2, and worker3 to store the PXC data files. Run on all three nodes:

$ sudo mkdir -p /data/pxc
$ sudo chown 99:1001 /data/pxc
$ sudo chmod 775 /data/pxc

9. Create the Percona XtraDB Cluster components

Adjust deploy/cr.yaml to your production needs. For example, the YAML below disables the pmm and proxysql sections, uses HAProxy to front the PXC nodes, keeps data files on a hostPath volume, and stores backup files on a PVC (you only need to declare the PV; the operator automatically creates a PVC for each backup). Image versions are also pinned as needed:

apiVersion: pxc.percona.com/v1-9-0
kind: PerconaXtraDBCluster
metadata:
  name: cluster1
  finalizers:
    - delete-pxc-pods-in-order
spec:
  crVersion: 1.9.0
  secretsName: my-cluster-secrets
  vaultSecretName: keyring-secret-vault
  sslSecretName: my-cluster-ssl
  sslInternalSecretName: my-cluster-ssl-internal
  logCollectorSecretName: my-log-collector-secrets
  allowUnsafeConfigurations: false
  updateStrategy: Never
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: 8.0-recommended
    schedule: "0 4 * * *"
# use pxc 5.7.33
  pxc:
    size: 3
#    image: percona/percona-xtradb-cluster:8.0.23-14.1
    image: percona/percona-xtradb-cluster:5.7.33-31.49
    autoRecovery: true
    resources:
      requests:
        memory: 1G
        cpu: 600m
    nodeSelector:
      service_type: pxc
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    podDisruptionBudget:
      maxUnavailable: 1
    volumeSpec:
#      emptyDir: {}
      hostPath:
        path: /data/pxc
        type: Directory
#      persistentVolumeClaim:
#        storageClassName: standard
#        accessModes: [ "ReadWriteOnce" ]
#        resources:
#          requests:
#            storage: 2G
    gracePeriod: 600
# use haproxy
  haproxy:
    enabled: true
    size: 3
#    image: percona/percona-xtradb-cluster-operator:1.9.0-haproxy
    image: percona/percona-xtradb-cluster-operator:1.8.0-haproxy
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    podDisruptionBudget:
      maxUnavailable: 1
    gracePeriod: 30
# disable proxysql
  proxysql:
    enabled: false
    size: 3
    image: percona/percona-xtradb-cluster-operator:1.9.0-proxysql
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    volumeSpec:
#      emptyDir: {}
#      hostPath:
#        path: /data
#        type: Directory
      persistentVolumeClaim:
#        storageClassName: standard
#        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 2G
    podDisruptionBudget:
      maxUnavailable: 1
    gracePeriod: 30
# logcollector
  logcollector:
    enabled: true
#    image: percona/percona-xtradb-cluster-operator:1.9.0-logcollector
    image: percona/percona-xtradb-cluster-operator:1.8.0-logcollector
# disable pmm
  pmm:
    enabled: false
    image: percona/pmm-client:2.18.0
    serverHost: monitoring-service
    serverUser: admin
# disable pitr,Point-in-time recovery is supported by the Operator only with Percona XtraDB Cluster versions starting from 8.0.21-12.1
  backup:
#    image: percona/percona-xtradb-cluster-operator:1.9.0-pxc8.0-backup
    image: percona/percona-xtradb-cluster-operator:1.9.0-pxc5.7-backup
    pitr:
      enabled: false
      storageName: STORAGE-NAME-HERE
      timeBetweenUploads: 60
    storages:
#      s3-us-west:
#        type: s3
#        s3:
#          bucket: S3-BACKUP-BUCKET-NAME-HERE
#          credentialsSecret: my-cluster-name-backup-s3
#          region: us-west-2
      fs-pvc:
        type: filesystem
        volume:
          persistentVolumeClaim:
            storageClassName: rook-cephfs
            accessModes: [ "ReadWriteMany" ]
            resources:
              requests:
                storage: 6G
    schedule:
      - name: "sat-night-backup"
        schedule: "0 0 * * 6"
        keep: 3
#        storageName: s3-us-west
        storageName: fs-pvc
      - name: "daily-backup"
        schedule: "0 12,13,14 * * *"
        keep: 5
        storageName: fs-pvc

Create the resource:

$ kubectl apply -f deploy/cr.yaml -n pxc

Notes on the YAML file:

  • updateStrategy: Never disables automatic upgrades
  • antiAffinityTopologyKey: "kubernetes.io/hostname" prevents the pods from being scheduled on the same host

Two problems can occur at this step:

(1) Image pull failure

error pulling image configuration: received unexpected HTTP status: 500 Internal Server Error

The fix is the same as in step 5: some of the 1.9.0 images currently cannot be pulled; switching the image version to 1.8.0 makes the pull succeed.

(2) The pxc-init container fails to initialize because of directory permissions

$ kubectl logs cluster1-pxc-0 -c pxc-init -n pxc
++ id -u
++ id -g
+ install -o 99 -g 99 -m 0755 -D /pxc-entrypoint.sh /var/lib/mysql/pxc-entrypoint.sh
install: cannot create regular file '/var/lib/mysql/pxc-entrypoint.sh': Permission denied

The fix is described in step 8: set the correct permissions on the hostPath directory.

The pods after creation:

$ kubectl get po -n pxc
NAME                                               READY   STATUS    RESTARTS   AGE
cluster1-haproxy-0                                 2/2     Running   0          75m
cluster1-haproxy-1                                 2/2     Running   0          73m
cluster1-haproxy-2                                 2/2     Running   0          73m
cluster1-pxc-0                                     3/3     Running   0          70m
cluster1-pxc-1                                     3/3     Running   0          74m
cluster1-pxc-2                                     3/3     Running   0          72m
percona-xtradb-cluster-operator-5bdd76bc45-jxkgl   1/1     Running   0          16h

The services:

$ kubectl get svc -n pxc  
NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                 AGE
cluster1-haproxy                  ClusterIP   10.96.191.113   <none>        3306/TCP,3309/TCP,33062/TCP,33060/TCP   78m
cluster1-haproxy-replicas         ClusterIP   10.96.169.4     <none>        3306/TCP                                78m
cluster1-pxc                      ClusterIP   None            <none>        3306/TCP,33062/TCP,33060/TCP            78m
cluster1-pxc-unready              ClusterIP   None            <none>        3306/TCP,33062/TCP,33060/TCP            78m
percona-xtradb-cluster-operator   ClusterIP   10.96.106.19    <none>        443/TCP                                 16h

10. Connection test

$ kubectl run -i --rm --tty -n pxc percona-client --image=percona:8.0 --restart=Never -- bash -il
[mysql@percona-client /]$ mysql -h cluster1-haproxy -uroot -proot_password
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8368
Server version: 5.7.33-36-57 Percona XtraDB Cluster (GPL), Release rel36, Revision a1ed9c3, WSREP version 31.49, wsrep_31.49

Copyright (c) 2009-2021 Percona LLC and/or its affiliates
Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 

A few common questions

(1) How can PXC be accessed from another namespace?

PXC is reachable via its FQDN; in this example the FQDNs are cluster1-haproxy.pxc.svc.cluster.local and cluster1-haproxy-replicas.pxc.svc.cluster.local.
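A minimal sketch of cross-namespace access (the apps namespace and the throwaway client pod are assumptions for illustration, not part of the deployment above):

```shell
# In-cluster DNS names follow <service>.<namespace>.svc.cluster.local
SVC=cluster1-haproxy
NS=pxc
FQDN="${SVC}.${NS}.svc.cluster.local"
echo "$FQDN"

# Hypothetical: run a throwaway client in another namespace ("apps")
# and connect through the FQDN; this part requires a running cluster.
if command -v kubectl >/dev/null 2>&1; then
  kubectl run -i --rm --tty -n apps percona-client \
    --image=percona:8.0 --restart=Never -- \
    mysql -h "$FQDN" -uroot -proot_password
fi
```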

(2) What is the difference between the cluster1-haproxy and cluster1-haproxy-replicas services?

cluster1-haproxy connects to cluster1-pxc-0 by default; if cluster1-pxc-0 is unavailable, it falls back through the remaining nodes in descending order, i.e. cluster1-pxc-2, then cluster1-pxc-1.

cluster1-haproxy-replicas uses round-robin instead, sending requests to each node in turn.

(3) How do I get inside a PXC container?

Each cluster1-pxc pod contains a pxc container in which the database runs; you can exec into it and work on the database directly (not recommended):

$ kubectl exec -it cluster1-pxc-0 -n pxc --container pxc -- /bin/bash
bash-4.4$ mysql -uroot -proot_password
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10728
Server version: 5.7.33-36-57 Percona XtraDB Cluster (GPL), Release rel36, Revision a1ed9c3, WSREP version 31.49, wsrep_31.49

Copyright (c) 2009-2021 Percona LLC and/or its affiliates
Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 

(4) How do I look up the database root password?

The database passwords are stored base64-encoded in a secret; the root password can be read as follows:

$ kubectl get secret -n pxc
NAME                                          TYPE                                  DATA   AGE
default-token-fch4c                           kubernetes.io/service-account-token   3      16h
internal-cluster1                             Opaque                                8      96m
my-cluster-secrets                            Opaque                                8      16h
my-cluster-ssl                                kubernetes.io/tls                     3      96m
my-cluster-ssl-internal                       kubernetes.io/tls                     3      96m
percona-xtradb-cluster-operator-token-cktgk   kubernetes.io/service-account-token   3      16h

$ kubectl get secret my-cluster-secrets -n pxc -o yaml 
apiVersion: v1
data:
  clustercheck: Y2x1c3RlcmNoZWNrcGFzc3dvcmQ=
  monitor: bW9uaXRvcnk=
  operator: b3BlcmF0b3JhZG1pbg==
  pmmserver: YWRtaW4=
  proxyadmin: YWRtaW5fcGFzc3dvcmQ=
  replication: cmVwbF9wYXNzd29yZA==
  root: cm9vdF9wYXNzd29yZA==
  xtrabackup: YmFja3VwX3Bhc3N3b3Jk
kind: Secret
metadata:
  creationTimestamp: "2021-08-19T09:58:34Z"
  name: my-cluster-secrets
  namespace: pxc
  resourceVersion: "2432784"
  uid: bef2dec9-4f11-494a-ad06-925b579fa418
type: Opaque

The base64-encoded root password is cm9vdF9wYXNzd29yZA==; decoding it gives:

$ echo 'cm9vdF9wYXNzd29yZA==' | base64 -d
root_password
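The lookup and decode can also be combined; a sketch assuming kubectl and the secret shown above (the jsonpath expression pulls only the root key, and the literal value from the secret is kept as a fallback so the decode step can be demonstrated without cluster access):

```shell
# Fallback: the literal base64 value taken from the secret above.
ENCODED='cm9vdF9wYXNzd29yZA=='
if command -v kubectl >/dev/null 2>&1; then
  # jsonpath extracts just .data.root from the secret.
  ENCODED=$(kubectl get secret my-cluster-secrets -n pxc \
    -o jsonpath='{.data.root}' 2>/dev/null || echo "$ENCODED")
fi
echo "$ENCODED" | base64 -d
```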

(5) How do I import an existing database into PXC?

Only one approach comes to mind so far. As noted during installation, the hostPath volume is mounted at /var/lib/mysql inside the container, so you can place a .sql file exported from another database under /data/pxc/ on the node, then enter the container as shown in question (3) and run the import there.
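An alternative sketch that avoids touching the node filesystem, using kubectl cp instead of the hostPath directory (dump.sql is a placeholder name for your exported file, not something created by the steps above):

```shell
# Placeholder names: dump.sql is the exported file on your machine,
# /tmp/dump.sql the destination path inside the container.
DUMP=dump.sql
TARGET=/tmp/dump.sql
if command -v kubectl >/dev/null 2>&1; then
  # Copy the dump into the pxc container of the first pod...
  kubectl cp "$DUMP" "pxc/cluster1-pxc-0:$TARGET" -c pxc
  # ...then import it with the mysql client inside the container.
  kubectl exec -n pxc cluster1-pxc-0 -c pxc -- \
    bash -c "mysql -uroot -proot_password < $TARGET"
fi
```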

(6) What if some Percona custom resources cannot be deleted?

You can force deletion by clearing the finalizers field in the resource's metadata, for example:

$ kubectl patch crd/perconaxtradbclusters.pxc.percona.com -p '{"metadata":{"finalizers":[]}}' --type=merge

(7) How do I view the backup files?

$ kubectl get pxc-backup -n pxc
NAME                                           CLUSTER    STORAGE   DESTINATION                                     STATUS      COMPLETED   AGE
cron-cluster1-fs-pvc-20218207400-1nhng         cluster1   fs-pvc    pvc/xb-cron-cluster1-fs-pvc-20218207400-1nhng   Succeeded   39m         40m

The backup files are stored in the PVC shown in the DESTINATION column.
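To restore from one of these backups, the operator provides a PerconaXtraDBClusterRestore resource; a sketch under the names used above (restore1 is an arbitrary name, and the backup name is taken from the listing):

```yaml
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterRestore
metadata:
  name: restore1          # arbitrary name for this restore job
spec:
  pxcCluster: cluster1    # the cluster to restore into
  backupName: cron-cluster1-fs-pvc-20218207400-1nhng
```

Apply it with kubectl apply -f restore.yaml -n pxc and watch progress with kubectl get pxc-restore -n pxc.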

Open issue

The haproxy container in pod cluster1-haproxy-0 logs this error:

(combined from similar events): Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "aca9b2cfff42ad953fa1812a4608c8962d9baa88307b60cf1a07384f533778fa": OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: "/usr/local/bin/liveness-check.sh": stat /usr/local/bin/liveness-check.sh: no such file or directory: unknown

The cause is that the image percona/percona-xtradb-cluster-operator:1.8.0-haproxy is missing /usr/local/bin/liveness-check.sh. The 1.9.0 image cannot be pulled at the moment; whether the newer image still has this problem remains to be verified.
