
09 Deploying k3s and Helm-Rancher

糜帅
2023-12-01


Contributor: MappleZF

Version: 1.0.0

1. Deploy the k3s server node

1.1 Download the installation package

Download the k3s binary and the airgap image tarball from https://github.com/rancher/k3s/releases

1.2 Install the binaries

[root@k3s:/root]# cp k3s /usr/local/bin/ && chmod +x  /usr/local/bin/k3s
[root@k3s:/root]# mkdir -p /var/lib/rancher/k3s/agent/images/
[root@k3s:/root]# cp k3s-airgap-images-amd64.tar /var/lib/rancher/k3s/agent/images/
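Before k3s imports the airgap bundle on startup, `tar -tf` lets you sanity-check its contents without extracting anything. A minimal sketch with a throwaway tarball standing in for the real k3s-airgap-images-amd64.tar:

```shell
# Sketch: build a stand-in tarball, then list its members the same way
# you would inspect the real k3s-airgap-images-amd64.tar.
workdir=$(mktemp -d)
echo "fake image layer" > "$workdir/layer.tar"
tar -cf "$workdir/images.tar" -C "$workdir" layer.tar

# tar -tf prints the member names without extracting anything
tar -tf "$workdir/images.tar"
```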

1.3 System settings

1.3.1 Disable the firewall
[root@k3s:/root]# systemctl stop firewalld && systemctl disable firewalld

1.3.2 Kernel tuning
cat >> /etc/sysctl.d/k3s.conf << EOF
net.ipv4.tcp_fin_timeout = 2
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_keepalive_time = 600
net.ipv4.ip_local_port_range = 4000    65000
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.route.gc_timeout = 100
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_synack_retries = 1
net.core.somaxconn = 16384
net.core.netdev_max_backlog = 16384
net.ipv4.tcp_max_orphans = 16384
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k3s.conf
sysctl --system
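The heredoc-plus-`sysctl -p` pattern above can be rehearsed safely against a temporary file before touching the host. This sketch writes a two-line fragment and confirms a key landed in it; the keys and values here are illustrative only:

```shell
# Write a small sysctl-style fragment with a heredoc, then confirm the
# key we care about is present (this mirrors what sysctl -p would load).
conf=$(mktemp)
cat >> "$conf" << EOF
net.ipv4.ip_forward = 1
net.core.somaxconn = 16384
EOF

grep -q '^net.ipv4.ip_forward = 1$' "$conf" && echo "ip_forward set"
```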

1.4 Initialize the k3s server

[root@k3s:/usr/local/bin]# k3s server --docker --bind-address=192.168.7.157 --cluster-cidr=10.128.0.0/16 --service-cidr=10.129.0.0/16 --kube-apiserver-arg service-node-port-range=1000-65000 --write-kubeconfig=/root/.kube/config --write-kubeconfig-mode=644 --node-label asrole=worker

[root@k3s:/root]# k3s kubectl get nodes
NAME   STATUS   ROLES    AGE     VERSION
k3s    Ready    master   4m16s   v1.18.6+k3s1

[root@k3s:/root]# k3s kubectl get pod --all-namespaces
NAMESPACE     NAME                                     READY   STATUS      RESTARTS   AGE
kube-system   metrics-server-7566d596c8-7xc9c          1/1     Running     0          3m32s
kube-system   helm-install-traefik-4whfn               0/1     Completed   0          3m32s
kube-system   local-path-provisioner-6d59f47c7-rb9wn   1/1     Running     0          3m32s
kube-system   coredns-8655855d6-m2qwk                  1/1     Running     0          3m32s
kube-system   traefik-758cd5fc85-lc9lp                 1/1     Running     0          2m39s
kube-system   svclb-traefik-hslbg                      2/2     Running     0          2m39s

Parameter notes:

● --docker: use Docker as the container runtime instead of the default containerd. The k3s server also starts an agent on the same node, so the server node can double as a worker node; pass --disable-agent if you do not want that.
● --bind-address: the IP address k3s listens on; optional, defaults to localhost.
● --cluster-cidr: as in Kubernetes, the network plane the pods live on; optional, defaults to 10.42.0.0/16.
● --service-cidr: as in Kubernetes, the network plane the services live on; optional, defaults to 10.43.0.0/16.
● --kube-apiserver-arg: extra kube-apiserver flags; see the Kubernetes documentation for the supported options; optional.
● --write-kubeconfig: write a kubeconfig file during installation so kubectl can reach the cluster directly. Without this flag the default config path is /etc/rancher/k3s/k3s.yaml, readable only by root.
● --write-kubeconfig-mode: used together with --write-kubeconfig to set the kubeconfig file's permissions.
● --node-label: also label the node with asrole=worker; optional.

1.5 Configure the k3s systemd service

[root@k3s:/root]# vim /etc/systemd/system/k3s.service
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
After=network-online.target

[Service]
Type=notify
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
EnvironmentFile=/etc/systemd/system/k3s.service.env
ExecStart=/usr/local/bin/k3s server --docker --bind-address=192.168.7.157 --cluster-cidr=10.128.0.0/16 --service-cidr=10.129.0.0/16 --kube-apiserver-arg service-node-port-range=1000-65000
KillMode=process
Delegate=yes
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target

Create the environment file
[root@k3s:/root]# touch /etc/systemd/system/k3s.service.env


Enable the service to start at boot
[root@k3s:/root]# systemctl daemon-reload
[root@k3s:/root]# systemctl enable k3s
[root@k3s:/root]# systemctl restart k3s
[root@k3s:/root]# systemctl status k3s

Note: if anything goes wrong, check the logs with journalctl -u k3s.

2. Deploy the k3s agent node

2.1 Read the token file

Note: the server node writes a node-token file under /var/lib/rancher/k3s/server/; it holds the token that a k3s agent needs to join the cluster.
[root@k3s:/root]# cat /var/lib/rancher/k3s/server/node-token
K10705439ea68f6630b07d5cb72ad8feb7d27de48808f5d1f72e32efda4fd271bca::server:02c516f6192c55b37e3313cbb3d00dea
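The node-token above is structured text: a `K10`-prefixed hash, a `::` separator, and `server:<secret>`. If you ever need just the secret part, plain shell parameter expansion is enough; this sketch uses a made-up placeholder token, not a real one:

```shell
# Split a k3s node-token of the form K10<hash>::server:<secret>.
# The token below is a fake placeholder for illustration only.
token="K10aaaa::server:bbbb"
prefix=${token%%::*}      # K10<hash> part
rest=${token##*::}        # server:<secret> part
secret=${rest#server:}    # just the secret
echo "$prefix $secret"
```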


2.2 Import the offline image bundle

[root@k3sagent:/root]# docker load -i k3s-airgap-images-amd64.tar 

2.3 Install the k3s agent node

[root@k3sagent:/root]# cp k3s /usr/local/bin/ && chmod +x  /usr/local/bin/k3s
[root@k3sagent:/root]# cd /usr/local/bin/
[root@k3sagent:/usr/local/bin]# k3s agent --docker --server https://192.168.7.157:6443 --token K1010f8925857862aca97ab8020355840277513f8760b7da5702be2c3dab7f39d16::server:e2df70c37eb497ac19597849dc90fcde  --node-ip=192.168.7.172 --node-label asrole=worker

Parameter notes:

--docker: run the k3s agent with Docker as the container runtime.
--server: the URL the k3s server listens on; required.
--token: the token generated when the k3s server was installed; required.
--node-ip: the IP address of the k3s agent node; optional.
--node-label: likewise label the k3s agent node with asrole=worker; optional.

2.4 Check from the server node

[root@k3s:/root]# k3s kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
k3sagent   Ready    <none>   4m36s   v1.18.6+k3s1
k3s        Ready    master   70m     v1.18.6+k3s1


2.5 Configure the k3s-agent systemd service

[root@k3sagent:/root]# vim /usr/lib/systemd/system/k3s-agent.service
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
After=network-online.target

[Service]
Type=notify
EnvironmentFile=/etc/systemd/system/k3s.service.env
ExecStart=/usr/local/bin/k3s agent --docker --server https://192.168.7.157:6443 --token K1010f8925857862aca97ab8020355840277513f8760b7da5702be2c3dab7f39d16::server:e2df70c37eb497ac19597849dc90fcde  --node-ip=192.168.7.172 --node-label asrole=worker
KillMode=process
Delegate=yes
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target


Create the environment file
[root@k3sagent:/root]# touch /etc/systemd/system/k3s.service.env


Enable the service to start at boot

[root@k3sagent:/root]# systemctl daemon-reload
[root@k3sagent:/root]# systemctl enable k3s-agent
[root@k3sagent:/root]# systemctl restart k3s-agent
[root@k3sagent:/root]# systemctl status k3s-agent.service 

3. Deploy Helm

Helm 3 is a client-side tool that performs its operations through the Kubernetes API.

Helm uses the same access rights as your kubectl context; helm init is no longer needed to initialize Helm; release names are now scoped to a namespace.

3.1 Download the installation package

GitHub releases: https://github.com/helm/helm/releases/

[root@k8smaster01.host.com:/opt/src]# wget -c https://get.helm.sh/helm-v3.3.1-linux-amd64.tar.gz
[root@k8smaster01.host.com:/opt/src]# scp helm-v3.3.1-linux-amd64.tar.gz k8smaster03:/data/helm/
[root@k8smaster03.host.com:/data/helm]# tar -xf helm-v3.3.1-linux-amd64.tar.gz -C /data/helm/
[root@k8smaster03.host.com:/data/helm]# mv linux-amd64/* .
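Helm publishes a checksum alongside each release tarball, and verifying the download before unpacking is cheap insurance against a corrupted transfer. A sketch of the check with a stand-in file; a real run would use helm-v3.3.1-linux-amd64.tar.gz and its published checksum file:

```shell
# Create a stand-in "download", record its checksum, then verify it the
# same way you would verify the real helm tarball against its checksum.
workdir=$(mktemp -d)
cd "$workdir"
echo "pretend tarball contents" > helm.tar.gz
sha256sum helm.tar.gz > helm.tar.gz.sha256

sha256sum -c helm.tar.gz.sha256   # prints "helm.tar.gz: OK" on success
```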

3.2 Initial Helm setup

[root@k8smaster03.host.com:/data/helm]# cp helm /usr/local/bin/helm
[root@k8smaster03.host.com:/data/helm]# helm repo add stable  https://kubernetes-charts.storage.googleapis.com
"stable" has been added to your repositories
[root@k8smaster03.host.com:/data/helm]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
[root@k8smaster03.host.com:/data/helm]# cd ~/.cache/helm/repository/ && ls

3.3 Check the Helm version

[root@k8smaster03.host.com:/data/rancher/templates]# helm version
version.BuildInfo{Version:"v3.3.1", GitCommit:"249e5215cde0c3fa72e27eb7a30e8d55c9696144", GitTreeState:"clean", GoVersion:"go1.14.7"}

4. Deploy Rancher

4.1 Add the Helm chart repository

[root@k8smaster03.host.com:/root]# helm repo add rancher-stable http://rancher-mirror.oss-cn-beijing.aliyuncs.com/server-charts/stable
"rancher-stable" has been added to your repositories
[root@k8smaster03.host.com:/root]# helm repo list
NAME          	URL                                                                   
stable        	https://kubernetes-charts.storage.googleapis.com                      
rancher-stable	http://rancher-mirror.oss-cn-beijing.aliyuncs.com/server-charts/stable

4.2 Create the Rancher namespace

[root@k8smaster03.host.com:/data/rancher]# kubectl create namespace cattle-system
namespace/cattle-system created

4.3 Copy the certificate to the rancher directory

[root@k8smaster01.host.com:/data/ssl]# scp ca.pem k8smaster03:/data/rancher/

4.4 Use certificates signed by a private CA

[root@k8smaster03.host.com:/data/rancher]# mv ca.pem cacerts.pem
Copy the CA certificate into a file named cacerts.pem, then use kubectl to create a secret named tls-ca in the cattle-system namespace.
[root@k8smaster03.host.com:/data/rancher]# kubectl -n cattle-system create secret generic tls-ca --from-file=cacerts.pem=./cacerts.pem
secret/tls-ca created
[root@k8smaster03.host.com:/data/rancher]# kubectl get secrets -n cattle-system
NAME                  TYPE                                  DATA   AGE
default-token-klns2   kubernetes.io/service-account-token   3      56m
tls-ca                Opaque                                1      26s
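`kubectl create secret generic --from-file` stores the file bytes base64-encoded under the given key (cacerts.pem here), and Rancher decodes them when it reads the secret. A sketch of that encode/decode round trip, with a throwaway one-line stand-in for the certificate:

```shell
# Mimic what the Secret stores: base64 of the file bytes under one key.
# The "certificate" here is a one-line stand-in, not a real PEM.
pem=$(mktemp)
echo "-----BEGIN CERTIFICATE-----" > "$pem"

encoded=$(base64 < "$pem")              # what ends up in .data["cacerts.pem"]
decoded=$(echo "$encoded" | base64 -d)  # what the consumer reads back
echo "$decoded"
```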

Note: Rancher reads the tls-ca secret at startup. If your Rancher server is already running, you must restart the Rancher server pods for the new CA to take effect.

4.5 Deploy Rancher with Helm

kubectl -n cattle-system apply -R -f ./rancher

[root@k8smaster03.host.com:/data/rancher]# helm install rancher rancher-stable/rancher --namespace cattle-system  --set hostname=rancher.lowan.com  --set ingress.tls.source=secret --set privateCA=true
NAME: rancher
LAST DEPLOYED: Fri Sep 18 16:56:11 2020
NAMESPACE: cattle-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Rancher Server has been installed.

NOTE: Rancher may take several minutes to fully initialize. Please standby while Certificates are being issued and Ingress comes up.

Check out our docs at https://rancher.com/docs/rancher/v2.x/en/

Browse to https://rancher.lowan.com

Happy Containering!

4.6 Verify that the Rancher server deployed successfully

[root@k8smaster03.host.com:/data/rancher]# kubectl -n cattle-system rollout status deploy/rancher
deployment "rancher" successfully rolled out
[root@k8smaster03.host.com:/data/rancher]# kubectl -n cattle-system get deploy rancher
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
rancher   3/3     3            3           10m

4.7 Uninstall notes

Supplement: if you need to redeploy Rancher, the following cleanup steps can be used as a reference
kubectl delete deployments/rancher -n cattle-system

./system-tools remove --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --namespace cattle-system


kubectl patch namespace cattle-system -p '{"metadata":{"finalizers":[]}}' --type='merge' -n cattle-system
kubectl delete namespace cattle-system --grace-period=0 --force

kubectl patch namespace cattle-global-data -p '{"metadata":{"finalizers":[]}}' --type='merge' -n cattle-system
kubectl delete namespace cattle-global-data --grace-period=0 --force

kubectl patch namespace local -p '{"metadata":{"finalizers":[]}}' --type='merge' -n cattle-system

kubectl patch namespace cattle-global-nt -p '{"metadata":{"finalizers":[]}}' --type='merge' -n cattle-system
kubectl delete namespace cattle-global-nt --grace-period=0 --force

for resource in `kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get -o name -n local`; do kubectl patch $resource -p '{"metadata": {"finalizers": []}}' --type='merge' -n local; done
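The one-liner above lists every namespaced resource still left in the `local` namespace and patches its finalizers away so deletion can proceed. Its control flow can be exercised without a cluster by stubbing kubectl with a shell function; the resource names below are fake, only the loop shape matches:

```shell
# Stub kubectl so the loop can run without a cluster: "get" lists two
# fake resources, "patch" just records which resource it was asked to patch.
patched=""
kubectl() {
  case "$1" in
    get)   printf 'pod/demo-a\nsecret/demo-b\n' ;;
    patch) patched="$patched $2" ;;
  esac
}

# Same shape as the real cleanup loop: list resources, patch each one.
for resource in $(kubectl get -o name -n local); do
  kubectl patch "$resource" -p '{"metadata": {"finalizers": []}}' --type='merge' -n local
done
echo "patched:$patched"
```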

kubectl delete namespace local --grace-period=0 --force



Follow-up references (cluster series):

01 Kubernetes binary deployment
02 Kubernetes auxiliary environment setup
03 K8S cluster network ACL rules
04 Ceph cluster deployment
05 Deploying ZooKeeper and Kafka clusters
06 Deploying the logging system
07 Deploying InfluxDB-Telegraf
08 Deploying Jenkins
09 Deploying k3s and Helm-Rancher
10 Deploying Maven
