
"k3s Source Code Analysis 3 ---- Building a k3s Cluster"

诸葛立果
2023-12-01

I. Installation notes:

Hostname

Hostnames must be unique across the cluster. If two hosts share the same hostname, you can append a random suffix with the --with-node-id flag, or specify an explicit node name with the --node-name flag or the $K3S_NODE_NAME environment variable, for example:
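
A minimal sketch of naming a node explicitly through the install script (the node name k3s-node-a is just an illustrative placeholder):

# Give this node an explicit name instead of relying on the machine hostname
curl -sfL https://get.k3s.io | K3S_NODE_NAME=k3s-node-a sh -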

Operating system
k3s runs on essentially any Linux distribution. The officially supported and tested systems include:

  • Ubuntu 16.04 (amd64)
  • Ubuntu 18.04 (amd64)
  • Raspbian Buster*

Hardware resources

  • CPU: 1 core
  • RAM: 512 MB (at least 1 GB recommended)
  • Disk: k3s performance depends on the performance of its datastore, so running on an SSD is recommended

Port list
The default ports that need to be listened on and opened are as follows:

Protocol   Port    Applies to                   Description
TCP        6443    K3s agent nodes              Kubernetes API server
UDP        8472    K3s server and agent nodes   Required only for Flannel VXLAN
TCP        10250   K3s server and agent nodes   kubelet
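
If a host firewall is enabled, these ports must be opened between the nodes. A minimal sketch using ufw (assuming the Ubuntu hosts listed above; adapt to your firewall of choice):

sudo ufw allow 6443/tcp    # Kubernetes API
sudo ufw allow 8472/udp    # Flannel VXLAN
sudo ufw allow 10250/tcp   # kubelet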

Specifying the network plugin
k3s uses Flannel with the VXLAN backend as its default CNI to provide container networking. The network plugin can be selected with the following startup flags:

CLI FLAG AND VALUE            DESCRIPTION
--flannel-backend=vxlan       Use VXLAN (default).
--flannel-backend=ipsec       Use the IPsec backend to encrypt network traffic.
--flannel-backend=host-gw     Use host-gw mode.
--flannel-backend=wireguard   Use the WireGuard backend to encrypt network traffic. May require additional kernel modules and configuration.

To use a separate CNI instead, pass --flannel-backend=none at install time and then install your own CNI afterwards, as sketched below.
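
A minimal sketch of such an install (the replacement CNI, e.g. Calico or Cilium, is then deployed separately and is not shown here):

# Install the server with Flannel disabled; a separate CNI must be installed afterwards
curl -sfL https://get.k3s.io | sh -s - server --flannel-backend=none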

II. Lab environment for this exercise (heterogeneous CPU nodes):

IP                           ROLE
192.168.0.47 (x86 node)      k3s server (k8s master) node
192.168.0.16 (arm64 node)    k3s agent (k8s worker) node

III. Server installation:

1) Deploy k3s:

1.1 Deploy the k3s server with the default datastore:

tsc@k8s-master:~$ sudo curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -
# Runs as a systemd service and starts automatically on boot
# Installs additional tools, including kubectl, crictl, ctr, k3s-killall.sh and k3s-uninstall.sh
# Generates the kubeconfig file /etc/rancher/k3s/k3s.yaml

# Check the k3s server status
tsc@k8s-master:~$ sudo systemctl status k3s

# View the server node token
tsc@k8s-master:~$ sudo cat /var/lib/rancher/k3s/server/node-token
K10e222d41fa57ec6fb0ba020aa8633625728b672bc63a1a8c044f1cc45d9c9d489::server:efde08a2190c4622594abe0d3abeec41
tsc@k8s-master:~$

# Kill the k3s server/agent processes
tsc@k8s-master:~$ sudo k3s-killall.sh
The following values are needed later when joining agents (see section IV):
  • K3S_URL is the address of the server node; the default port is 6443
  • K3S_TOKEN is stored on the server node in /var/lib/rancher/k3s/server/node-token
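
Because the install also ships an embedded kubectl, a quick sanity check on the server node is:

# The k3s binary bundles kubectl; the server node should show up as Ready
tsc@k8s-master:~$ sudo k3s kubectl get nodes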

1.2 Deploy the k3s server with an external datastore:

# Using a MySQL database
curl -sfL https://get.k3s.io | sh -s - server \
 --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/database-name"
 
# Using a PostgreSQL database
curl -sfL https://get.k3s.io | sh -s - server \
 --datastore-endpoint="postgres://username:password@hostname:port/database-name"
 
# Using an etcd cluster
curl -sfL https://get.k3s.io | sh -s - server \
 --datastore-endpoint="https://etcd-host-1:2379,https://etcd-host-2:2379,https://etcd-host-3:2379"

1.3 HA deployment (embedded datastore):

In this mode the number of server nodes must be odd; three server nodes are recommended.

1.3.1 Start the first server node with the --cluster-init flag and a K3S_TOKEN:

K3S_TOKEN=SECRET k3s server --cluster-init 

1.3.2 Then start the remaining server nodes, pointing them at the first one:

K3S_TOKEN=SECRET k3s server --server https://<ip or hostname of server1>:6443
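
The same HA bootstrap can be done through the install script; a sketch using the placeholders above (SECRET and the server1 address):

# First server: initialize the embedded etcd cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server --cluster-init

# Additional servers: join the existing cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server \
 --server https://<ip or hostname of server1>:6443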

2) Modify the k3s service startup arguments (specify the network interface):

If the server node has multiple network interfaces, you need to specify which one k3s should use (here: ens192). Edit the k3s service unit on the server node (sudo vim /etc/systemd/system/k3s.service) and add the --flannel-iface flag to the ExecStart line:

[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target
After=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=notify
EnvironmentFile=-/etc/default/%N
EnvironmentFile=-/etc/sysconfig/%N
EnvironmentFile=-/etc/systemd/system/k3s.service.env
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=/bin/sh -xc '! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service'
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s \
    server --flannel-iface ens192 \
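
After saving the unit file, reload systemd and restart k3s so the new --flannel-iface flag takes effect:

sudo systemctl daemon-reload
sudo systemctl restart k3s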

3) Modify the k3s server API endpoint:

Edit /etc/rancher/k3s/k3s.yaml (vim /etc/rancher/k3s/k3s.yaml) and change the server API endpoint from https://127.0.0.1:6443 to https://192.168.0.47:6443:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTlRBM056TTFNak13SGhjTk1qSXdOREkwTURReE1qQXpXaGNOTXpJd05ESXhNRFF4TWpBegpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTlRBM056TTFNak13V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFUVXNNMVRIWDFPSUgya3duSFlmVGxmeUl4b0VQb0RJVDFrVzJHWjRCbGQKdlpwQys2cEVuZmNMTUZvaVI5VWYyTlF5K0F5OVpkZ201YnhtSnZ1bjUvaWRvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVUlOaklpTS85ZEhnQ0R5eTBPTGF5CmdvdW9qdjR3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUlnVDI0eUt4UGNTVDRueXFxZHBZR3RpVFlxdzZ1SXd0Yy8KeXpwb1VqRkxWSWtDSVFDMi8vS1hyKyttUEIrT1VzcFZpLys3TXdQMjVzZXlsQkx4aFc0T0ZJdFR3dz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.0.47:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrakNDQVRlZ0F3SUJBZ0lJRElvMjdiMTRsK0F3Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOalV3Tnpjek5USXpNQjRYRFRJeU1EUXlOREEwTVRJd00xb1hEVEl6TURReQpOREEwTVRJd00xb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJCYXRTRmN2RE85NXZkdFgKbG5DUk9nQ3huSTlubW5KL3U3UVI2MnlVK2lqL1ozVlBBd1NRdkYrMG10WDFod3hOeG9LS2ZyZGhMNTliQ0VrcwpCdFBhMm1DalNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCUTR5VnZhL25nMkNZUzJSTnp0ejRwbm1DR3BBREFLQmdncWhrak9QUVFEQWdOSkFEQkcKQWlFQTQwNGZJSFFhNnEvc3c2V01pdXZuNi9UbjNGZ3VhZVlqNDB5RHNCL2tnaFVDSVFEVWpoL3ZIRGFONThqbApxby9jWFpTUTJTd1ltMzBEWFQvVE1iSGVFbzhUbGc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZURDQ0FSMmdBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwClpXNTBMV05oUURFMk5UQTNOek0xTWpNd0hoY05Nakl3TkRJME1EUXhNakF6V2hjTk16SXdOREl4TURReE1qQXoKV2pBak1TRXdId1lEVlFRRERCaHJNM010WTJ4cFpXNTBMV05oUURFMk5UQTNOek0xTWpNd1dUQVRCZ2NxaGtqTwpQUUlCQmdncWhrak9QUU1CQndOQ0FBVGxvUThYay92UW9hVVFRcFVJeGtZQ2tadlBIbnhPK0JCRkFUbisxOTVPCmVpK3c1Um0xSTIrdWgxK1dSb1JCT2U0bXFMYXRIZmZ5VzR2Y3kvMW9NN2NMbzBJd1FEQU9CZ05WSFE4QkFmOEUKQkFNQ0FxUXdEd1lEVlIwVEFRSC9CQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVPTWxiMnY1NE5nbUV0a1RjN2MrSwpaNWdocVFBd0NnWUlLb1pJemowRUF3SURTUUF3UmdJaEFNOWNvOXQvT2lBM2MvNm5hZlNSWkQwbmtKYVlBYzhYCjhNdnBUUzJuVDc5SkFpRUFzaExaSkdNMGhqOUZ0c0hLVC9FeHZZK0Z1RE1Nbi85ajN3cFJpWFRCa05vPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUFWY2NOL3l1Q2RUUUFvaDUxc1lURkZwTTN5K1lqRHVXSlIvSytkT3NpVzFvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFRnExSVZ5OE03M205MjFlV2NKRTZBTEdjajJlYWNuKzd0QkhyYkpUNktQOW5kVThEQkpDOApYN1NhMWZXSERFM0dnb3ArdDJFdm4xc0lTU3dHMDlyYVlBPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=

Restart the k3s server:

 sudo systemctl daemon-reload
 sudo systemctl restart k3s
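
Optionally, to run kubectl as a regular user without sudo and without the --kubeconfig flag, the kubeconfig can be copied out of /etc/rancher/k3s (a common pattern, not specific to this setup):

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
kubectl get nodes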

IV. Agent installation, joining the k3s server cluster:

root@k8s-node1:~# curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | K3S_URL=https://192.168.0.47:6443 K3S_TOKEN=K10e222d41fa57ec6fb0ba020aa8633625728b672bc63a1a8c044f1cc45d9c9d489::server:efde08a2190c4622594abe0d3abeec41  INSTALL_K3S_MIRROR=cn sh -
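
When K3S_URL is set, the install script configures the node as an agent, so the systemd service is named k3s-agent rather than k3s. To check it on the worker node:

root@k8s-node1:~# systemctl status k3s-agent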

V. Starting the agent:

k3s agent --server https://192.168.0.47:6443 --token K10e222d41fa57ec6fb0ba020aa8633625728b672bc63a1a8c044f1cc45d9c9d489::server:efde08a2190c4622594abe0d3abeec41
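
Equivalently, the agent can take its connection details from the K3S_URL and K3S_TOKEN environment variables described in section III:

K3S_URL=https://192.168.0.47:6443 \
K3S_TOKEN=K10e222d41fa57ec6fb0ba020aa8633625728b672bc63a1a8c044f1cc45d9c9d489::server:efde08a2190c4622594abe0d3abeec41 \
k3s agent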

VI. Viewing the k3s cluster nodes:

tsc@k8s-master:~$ sudo kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml  get nodes -o wide
NAME         STATUS   ROLES                  AGE   VERSION        INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
k8s-master   Ready    control-plane,master   11h   v1.22.7+k3s1   192.168.0.46   <none>        Ubuntu 18.04.5 LTS   4.15.0-176-generic   containerd://1.5.9-k3s1
k8s-node1    Ready    <none>                 10h   v1.22.7+k3s1   192.168.0.16   <none>        Ubuntu 18.04.2 LTS   4.4.103              containerd://1.5.9-k3s1
tsc@k8s-master:~$
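
Beyond the node list, it is worth confirming that the bundled system components (coredns, local-path-provisioner, metrics-server, traefik) came up; a quick check:

tsc@k8s-master:~$ sudo kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get pods -A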