
Building a multi-master, highly available Kubernetes 1.24 cluster on AlmaLinux (Rocky) with kubeadm, kube-vip, cri-o, and Calico

郭博涉
2023-12-01

Preface: this is the fourth article in my k8s setup series. The first three are:
1. CentOS 7.9 + haproxy + keepalived + docker + flannel + kubeadm + 1.23.6
https://blog.csdn.net/lic95/article/details/124903648?spm=1001.2014.3001.5501

2. AlmaLinux + haproxy + keepalived + containerd + Calico + kubeadm + 1.24
https://blog.csdn.net/lic95/article/details/125018220?spm=1001.2014.3001.5501

3. AlmaLinux + haproxy + keepalived + cri-o + Calico + kubeadm + 1.24
https://blog.csdn.net/lic95/article/details/125025782?spm=1001.2014.3001.5501

Together these cover most common k8s setup scenarios; refer to them for other combinations.

I. Node layout

OS                    | Hostname | IP address   | Node software
----------------------|----------|--------------|-----------------------------------------------------------------
(virtual IP)          | master   | 192.168.3.30 | cluster VIP, served by kube-vip
AlmaLinux release 8.6 | master01 | 192.168.3.31 | kube-vip, etcd, apiserver, scheduler, controller-manager, calico
AlmaLinux release 8.6 | master02 | 192.168.3.32 | kube-vip, etcd, apiserver, scheduler, controller-manager, calico
AlmaLinux release 8.6 | master03 | 192.168.3.33 | kube-vip, etcd, apiserver, scheduler, controller-manager, calico
AlmaLinux release 8.6 | node01   | 192.168.3.41 | kube-proxy, calico, kubelet
AlmaLinux release 8.6 | node02   | 192.168.3.42 | kube-proxy, calico, kubelet
AlmaLinux release 8.6 | node03   | 192.168.3.43 | kube-proxy, calico, kubelet
AlmaLinux release 8.6 | node04   | 192.168.3.44 | kube-proxy, calico, kubelet
AlmaLinux release 8.6 | node05   | 192.168.3.45 | kube-proxy, calico, kubelet

II. Initialize each node as a template machine
1. Base template configuration: host addresses, time sync, ipvsadm, kernel parameters

# Install basic tools
yum install vim net-tools wget lsof ipset telnet iproute-tc python3 -y

# Disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
sed -i '/^SELINUX=/c SELINUX=disabled' /etc/selinux/config
setenforce 0

# Disable swap
swapoff -a
sed -i 's/^.*almalinux-swap/#&/g' /etc/fstab

# Add host name resolution (only if this host's entry is not already present)
if [ -z "$(grep "$(ip route ls | grep 192.168.3.0/24 | awk '{print $9}')" /etc/hosts | awk '{print $2}')" ]; then
cat << EOF >> /etc/hosts
192.168.3.30 master
192.168.3.31 master01
192.168.3.32 master02
192.168.3.33 master03
192.168.3.40 cephadm
192.168.3.41 node01
192.168.3.42 node02
192.168.3.43 node03
192.168.3.44 node04
192.168.3.45 node05
EOF
fi

# Derive this host's name from /etc/hosts based on its IP and write it to /etc/hostname
echo "$(grep "$(ip route ls | grep 192.168.3.0/24 | awk '{print $9}')" /etc/hosts | awk '{print $2}')" > /etc/hostname

# Apply the new hostname immediately (takes effect on the next login)
hostnamectl set-hostname "$(grep "$(ip route ls | grep 192.168.3.0/24 | awk '{print $9}')" /etc/hosts | awk '{print $2}')"
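The nested command substitution above is hard to read. As a sketch, the lookup it performs is equivalent to this small helper (`hostname_for_ip` is an illustrative name, not part of the setup): given the contents of a hosts file and an IP address, it prints the matching hostname.

```shell
# Illustrative helper: given hosts-file content and an IP, print the hostname.
hostname_for_ip() {
    # $1 = contents of a hosts file, $2 = IP address
    printf '%s\n' "$1" | awk -v ip="$2" '$1 == ip {print $2}'
}

# Example (same effect as the pipeline above):
#   hostname_for_ip "$(cat /etc/hosts)" "$(ip route ls | grep 192.168.3.0/24 | awk '{print $9}')"
```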


# Configure cluster time synchronization
yum install -y chrony

# On master nodes:
if [ -n "$(grep "$(ip route ls | grep 192.168.3.0/24 | awk '{print $9}')" /etc/hosts | grep master)" ]; then
cat > /etc/chrony.conf << EOF
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
server ntp1.aliyun.com iburst
local stratum 10
allow 192.168.3.0/24
EOF
systemctl restart chronyd
systemctl enable chronyd
fi

# On worker nodes:
if [ -n "$(grep "$(ip route ls | grep 192.168.3.0/24 | awk '{print $9}')" /etc/hosts | grep node)" ]; then
cat > /etc/chrony.conf << EOF
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
server 192.168.3.31 iburst
server 192.168.3.32 iburst
server 192.168.3.33 iburst
EOF
systemctl restart chronyd
systemctl enable chronyd
fi
# Check sync status:
chronyc sources -v

# Install ipvsadm and enable IPVS; without it kube-proxy falls back to iptables mode, which scales poorly, so loading the IPVS kernel modules is recommended
yum install ipvsadm ipset sysstat conntrack libseccomp -y

cat > /etc/modules-load.d/ipvs.conf <<EOF 
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
systemctl restart systemd-modules-load.service

# Load the overlay and br_netfilter modules
cat << EOF > /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
systemctl restart systemd-modules-load.service


# Kernel parameters: enable IP forwarding and let iptables see bridged traffic
cat << EOF > /etc/sysctl.d/k8s.conf 
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply immediately
sysctl --system
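To confirm that the configured modules actually loaded, you can compare an `lsmod` listing against the required list. A minimal sketch (`missing_modules` is a made-up helper name):

```shell
# Illustrative check: print every required module missing from an `lsmod` listing.
missing_modules() {
    # $1 = output of `lsmod`; remaining args = required module names
    listing="$1"
    shift
    for mod in "$@"; do
        printf '%s\n' "$listing" | awk '{print $1}' | grep -qx "$mod" \
            || echo "$mod"
    done
}

# On a live node:
#   missing_modules "$(lsmod)" ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack overlay br_netfilter
```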

2. Install and configure cri-o on all nodes

VERSION=1.24
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo \
    https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/CentOS_8/devel:kubic:libcontainers:stable.repo
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo \
    https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:${VERSION}/CentOS_8/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo

yum install cri-o podman podman-docker -y
systemctl daemon-reload
systemctl start crio
systemctl enable crio

3. Install kubeadm and related tools on all nodes

# The official repo is hosted abroad; use the Aliyun Kubernetes mirror instead (the el7 packages also work on EL8)
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubelet, kubeadm, and kubectl
yum install -y kubelet-1.24.1 kubeadm-1.24.1 kubectl-1.24.1

# Point kubelet at the cri-o runtime
cat <<EOF >/etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--container-runtime=remote --cgroup-driver=systemd --container-runtime-endpoint='unix:///var/run/crio/crio.sock' --runtime-request-timeout=5m"
EOF

systemctl daemon-reload 
systemctl enable kubelet
systemctl start kubelet

III. High availability: deploy the control plane on master01, master02, and master03

1. Generate the kube-vip configuration

# Reference: https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md#options-for-software-load-balancing

# On master01, create the kube-vip static pod manifest; adjust the interface name, VIP address, image registry, and version as needed
cat > /etc/kubernetes/manifests/kube-vip.yaml <<EOF 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "6443"
    - name: vip_interface
      value: ens160              # change to the actual interface name
    - name: vip_cidr
      value: "32"
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_ddns
      value: "false"
    - name: vip_leaderelection
      value: "true"
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: vip_address
      value: 192.168.3.30     # change to your VIP address
    image: docker.io/plndr/kube-vip:v0.4.4  # docker.io mirror of the image; use the latest stable version
    imagePullPolicy: Always
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
        - SYS_TIME
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/admin.conf
    name: kubeconfig
status: {}
EOF
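Instead of writing the manifest by hand, the kube-vip documentation generates it by running the image itself. A hedged sketch using the same values as above (the flags follow the kube-vip v0.4.x CLI; verify them against the version you actually use):

```shell
# Sketch: generate the kube-vip static pod manifest with the image itself.
# Interface, VIP, and image tag match this article; verify flags per version.
generate_kube_vip_manifest() {
    podman run --rm docker.io/plndr/kube-vip:v0.4.4 manifest pod \
        --interface ens160 \
        --address 192.168.3.30 \
        --controlplane \
        --arp \
        --leaderElection
}

# On master01:
#   generate_kube_vip_manifest > /etc/kubernetes/manifests/kube-vip.yaml
```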

2. Work around a cri-o issue: pull the pause 3.6 image manually and retag it, so the next step does not need direct access to registry.k8s.io. Skip this step if you have unrestricted network access.

podman pull registry.aliyuncs.com/google_containers/pause:3.6
podman tag registry.aliyuncs.com/google_containers/pause:3.6 registry.k8s.io/pause:3.6
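Alternatively, cri-o can be pointed at a reachable pause image directly, which avoids the manual retagging. A sketch of the relevant `/etc/crio/crio.conf` fragment (restart crio afterwards):

```toml
# /etc/crio/crio.conf - point cri-o at a mirror of the pause image
[crio.image]
pause_image = "registry.aliyuncs.com/google_containers/pause:3.6"
```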

3. Initialize the Kubernetes master01 node

kubeadm init \
    --image-repository=registry.aliyuncs.com/google_containers  \
    --kubernetes-version v1.24.1 \
    --service-cidr=172.18.0.0/16      \
    --pod-network-cidr=10.244.0.0/16 \
    --control-plane-endpoint=192.168.3.30:6443 \
    --cri-socket=/var/run/crio/crio.sock \
    --upload-certs \
    --v=5

Option notes:
  --image-repository: registry to pull control-plane images from (default "k8s.gcr.io")
  --kubernetes-version: specific Kubernetes version (default "stable-1")
  --service-cidr: IP range for service VIPs (default "10.96.0.0/12")
  --pod-network-cidr: IP range for the pod network; when set, each node is automatically assigned a pod CIDR
  --cri-socket: use the cri-o socket as the CRI endpoint
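The same options can also be captured in a kubeadm configuration file, which is easier to keep in version control. A sketch of the equivalent (pass `--upload-certs` on the command line, since it has no config-file equivalent):

```yaml
# kubeadm-config.yaml - equivalent of the flags above
# Usage: kubeadm init --config kubeadm-config.yaml --upload-certs
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.24.1
imageRepository: registry.aliyuncs.com/google_containers
controlPlaneEndpoint: 192.168.3.30:6443
networking:
  serviceSubnet: 172.18.0.0/16
  podSubnet: 10.244.0.0/16
```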
 

The output shows that initialization succeeded, along with the follow-up instructions:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.3.30:6443 --token ke6jyo.7twag17c2kf9688x \
        --discovery-token-ca-cert-hash sha256:677ecf2a0bd6617daec5f292a962ce6de99275e29373b1cc158cef00329ec57d \
        --control-plane --certificate-key 72b3c0796cdf595bce9f060edfd3742830d3062f35c9f41166d91698bc29b260

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.3.30:6443 --token ke6jyo.7twag17c2kf9688x \
        --discovery-token-ca-cert-hash sha256:677ecf2a0bd6617daec5f292a962ce6de99275e29373b1cc158cef00329ec57d 

4. Run the following commands as suggested by the output above

# To start using the cluster, run the following as a regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Or, if you are root, run:
export KUBECONFIG=/etc/kubernetes/admin.conf

# Append to .bashrc so it is set automatically on future logins
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >>/root/.bashrc

5. Join master02 and master03 as control-plane nodes

# Work around the same cri-o issue: pull the pause 3.6 image manually and retag it; skip if you have unrestricted network access
podman pull registry.aliyuncs.com/google_containers/pause:3.6
podman tag registry.aliyuncs.com/google_containers/pause:3.6 registry.k8s.io/pause:3.6

# [root@master02 ~]# 
kubeadm join 192.168.3.30:6443 --token ke6jyo.7twag17c2kf9688x \
        --discovery-token-ca-cert-hash sha256:677ecf2a0bd6617daec5f292a962ce6de99275e29373b1cc158cef00329ec57d \
        --control-plane --certificate-key 72b3c0796cdf595bce9f060edfd3742830d3062f35c9f41166d91698bc29b260

# [root@master03 ~]# 
kubeadm join 192.168.3.30:6443 --token ke6jyo.7twag17c2kf9688x \
        --discovery-token-ca-cert-hash sha256:677ecf2a0bd6617daec5f292a962ce6de99275e29373b1cc158cef00329ec57d \
        --control-plane --certificate-key 72b3c0796cdf595bce9f060edfd3742830d3062f35c9f41166d91698bc29b260

# On master02 and master03, run the following as suggested by the output above
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >>/root/.bashrc

6. Deploy kube-vip as a static pod on master02 and master03

[root@master01 ~]# scp /etc/kubernetes/manifests/kube-vip.yaml master02:/etc/kubernetes/manifests/
[root@master01 ~]# scp /etc/kubernetes/manifests/kube-vip.yaml master03:/etc/kubernetes/manifests/

# Check that kube-vip started on all three masters
[root@master01 ~]# kubectl get pods -A | grep kube-vip
kube-system   kube-vip-master01                  1/1     Running   1 (4m53s ago)   6m55s
kube-system   kube-vip-master02                  1/1     Running   0               3m12s
kube-system   kube-vip-master03                  1/1     Running   0               11s
[root@master01 ~]# 

# All three instances are running; the high-availability control plane is ready
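To see which master currently holds the VIP (kube-vip adds it to the configured interface on the elected leader), check the interface addresses. A sketch (`holds_vip` is an illustrative helper name):

```shell
# Illustrative: does the given `ip -o addr` output contain the VIP?
holds_vip() {
    # $1 = output of `ip -o addr`, $2 = VIP address
    printf '%s\n' "$1" | grep -q " $2/"
}

# On a master:
#   holds_vip "$(ip -o addr)" 192.168.3.30 && echo "this node holds the VIP"
```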

IV. Install the Calico network plugin

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# Pod status after the installation
[root@master01 ~]# kubectl get pods -A 
NAMESPACE     NAME                                       READY   STATUS    RESTARTS      AGE
kube-system   calico-kube-controllers-56cdb7c587-m6jnd   1/1     Running   0             17m
kube-system   calico-node-29r8k                          1/1     Running   0             17m
kube-system   calico-node-n66mx                          1/1     Running   1             17m
kube-system   calico-node-srxq6                          1/1     Running   0             17m
kube-system   coredns-74586cf9b6-gz6bb                   1/1     Running   0             26m
kube-system   coredns-74586cf9b6-r4v49                   1/1     Running   0             26m
kube-system   etcd-master01                              1/1     Running   0             26m
kube-system   etcd-master02                              1/1     Running   0             24m
kube-system   etcd-master03                              1/1     Running   1             20m
kube-system   kube-apiserver-master01                    1/1     Running   0             26m
kube-system   kube-apiserver-master02                    1/1     Running   0             24m
kube-system   kube-apiserver-master03                    1/1     Running   1             20m
kube-system   kube-controller-manager-master01           1/1     Running   1 (24m ago)   26m
kube-system   kube-controller-manager-master02           1/1     Running   0             24m
kube-system   kube-controller-manager-master03           1/1     Running   1             20m
kube-system   kube-proxy-45lz4                           1/1     Running   1             20m
kube-system   kube-proxy-77pzh                           1/1     Running   0             24m
kube-system   kube-proxy-zg89b                           1/1     Running   0             26m
kube-system   kube-scheduler-master01                    1/1     Running   1 (24m ago)   26m
kube-system   kube-scheduler-master02                    1/1     Running   0             24m
kube-system   kube-scheduler-master03                    1/1     Running   1             20m
kube-system   kube-vip-master01                          1/1     Running   1 (24m ago)   26m
kube-system   kube-vip-master02                          1/1     Running   0             22m
kube-system   kube-vip-master03                          1/1     Running   1             19m

V. Add the 5 worker nodes to the cluster

# On master01, print the join command
[root@master01 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.3.30:6443 --token ke6jyo.7twag17c2kf9688x \
        --discovery-token-ca-cert-hash sha256:677ecf2a0bd6617daec5f292a962ce6de99275e29373b1cc158cef00329ec57d 
[root@master01 ~]# 

# Run the join command printed above on node01 through node05

# Verify from any master
kubectl get nodes
kubectl get nodes -o wide
kubectl get pods --all-namespaces

# Wait until every node becomes Ready
[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
master01   Ready    control-plane   20m   v1.24.1
master02   Ready    control-plane   17m   v1.24.1
master03   Ready    control-plane   17m   v1.24.1
node01     Ready    <none>          71s   v1.24.1
node02     Ready    <none>          67s   v1.24.1
node03     Ready    <none>          64s   v1.24.1
node04     Ready    <none>          62s   v1.24.1
node05     Ready    <none>          59s   v1.24.1
[root@master01 ~]# 
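Optionally, label the workers so the ROLES column shows `worker` instead of `<none>`. A sketch (the role name is a common convention, not a requirement; run on a master):

```shell
# Optional: add a worker role label to each node.
label_workers() {
    for n in node01 node02 node03 node04 node05; do
        kubectl label node "$n" node-role.kubernetes.io/worker=
    done
}

# label_workers
```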

VI. Deploy the dashboard
  Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to the cluster, troubleshoot them, and manage the cluster and its resources. It gives you an overview of the applications running in the cluster and lets you create or modify Kubernetes resources (Deployments, Jobs, DaemonSets, and so on). For example, you can scale a Deployment, perform a rolling update, restart a pod, or deploy a new application with a wizard.
  Install the dashboard (https://github.com/kubernetes/dashboard):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml

# Wait until the pods reach Running
kubectl get pods -n kubernetes-dashboard


# Expose the service externally via NodePort
[root@master01 ~]# kubectl edit svc -n kubernetes-dashboard kubernetes-dashboard
Change type: ClusterIP to type: NodePort and save.
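A non-interactive alternative to `kubectl edit` is a merge patch, which is easier to script. A sketch:

```shell
# Sketch: switch the dashboard Service to NodePort without an editor.
dashboard_to_nodeport() {
    kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
        -p '{"spec":{"type":"NodePort"}}'
}

# On a master:  dashboard_to_nodeport
```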


# Find the exposed NodePort
[root@master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   172.18.175.186   <none>        8000/TCP        82s
kubernetes-dashboard        NodePort    172.18.218.143   <none>        443:30681/TCP   82s
[root@master01 ~]# 

Open it in a browser:
https://192.168.3.30:30681/#/login
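The login page expects a bearer token, and as of Kubernetes 1.24 ServiceAccounts no longer get long-lived token Secrets automatically. A sketch of an admin account for logging in (the `admin-user` name is illustrative; after applying, generate a token with `kubectl -n kubernetes-dashboard create token admin-user`):

```yaml
# dashboard-admin.yaml - illustrative admin account for dashboard login
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```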

VII. Setup complete

Deploy an nginx service to verify the cluster
[root@master01 ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@master01 ~]#

[root@master01 ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@master01 ~]# 

[root@master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   172.18.0.1      <none>        443/TCP        24m
nginx        NodePort    172.18.76.241   <none>        80:30111/TCP   23s
[root@master01 ~]# 

[root@master01 ~]#  curl http://192.168.3.30:30111
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
......
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@master01 ~]# 
