
Part 2: Deploying the Calico Network Plugin on Kubernetes

万俟宜修
2023-12-01

Note: The overall goal is to deploy GitLab and Jenkins on a k8s cluster, so that after code is committed locally and pushed to GitLab, a Jenkins pipeline automatically builds and packages it into a Docker image, deploys it to k8s, and makes it reachable by external clients through a domain name. The documentation is split into several parts, covering Docker installation, k8s setup, deploying GitLab, deploying Jenkins, deploying SonarQube, GitLab/Jenkins integration, Jenkins/SonarQube integration, pipeline scripting, Istio deployment, Istio service gateways, and more.

This document follows Part 1: Hands-on Kubernetes Deployment.

This part explains how to deploy the Calico network plugin on Kubernetes.

1. Download the Calico YAML file

# Download the file
Download location: calico.yaml
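
If you prefer to fetch the manifest directly, the stock manifest for this release is normally published in the official Calico GitHub repository; the URL below follows that convention and is an assumption, so verify it against the Calico docs for your version:

// Run on the master node (URL assumed from the official Calico repo layout)
curl -fLo calico-3.24.5.yaml https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/calico.yaml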

2. Upload the file to the master node k8s-master

Upload calico-3.24.5.yaml to any directory (here I uploaded it to /opt/k8s).
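
For example, from a local machine the file could be copied over with scp (the host k8s-master is assumed to be reachable over SSH):

// Run on your local machine
scp calico-3.24.5.yaml root@k8s-master:/opt/k8s/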

3. Deploy Calico

Change into the directory containing calico-3.24.5.yaml and run:
kubectl apply -f calico-3.24.5.yaml

// Run on the master node
root@k8s-master:/opt/k8s# kubectl apply -f calico-3.24.5.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
root@k8s-master:/opt/k8s#

4. Verify the Calico installation

After the Calico resources above have been created, you can verify the result with the command "kubectl get pod -A".

The output is as follows:

// Run on the master node
root@k8s-master:/opt/k8s# kubectl get pod -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-dbd6bd945-vlqxh   1/1     Running   0          4m51s
kube-system   calico-node-hkp5s                         1/1     Running   0          4m51s
kube-system   calico-node-ksxg8                         1/1     Running   0          4m51s
kube-system   calico-node-srd2r                         1/1     Running   0          4m51s
kube-system   coredns-7f6cbbb7b8-svhdg                  1/1     Running   0          22h
kube-system   coredns-7f6cbbb7b8-vmm2h                  1/1     Running   0          22h
kube-system   etcd-k8s-master                           1/1     Running   0          22h
kube-system   kube-apiserver-k8s-master                 1/1     Running   0          22h
kube-system   kube-controller-manager-k8s-master        1/1     Running   0          22h
kube-system   kube-proxy-4dvgq                          1/1     Running   0          22h
kube-system   kube-proxy-5jhkj                          1/1     Running   0          22h
kube-system   kube-proxy-5jtwf                          1/1     Running   0          22h
kube-system   kube-scheduler-k8s-master                 1/1     Running   0          22h

All the pods are now in the Running state, which means the installation succeeded.
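
As an extra check, the nodes themselves should now report Ready, since kubelet marks a node ready only once a working CNI plugin is in place:

// Run on the master node; every node should show STATUS Ready
kubectl get nodes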

PS1: Switch Service traffic routing to IPVS mode

By default, after initialization, kube-proxy uses the host's iptables to forward Service (svc) traffic. Here we change the default iptables mode to ipvs, which is also the mode we recommend.
Assuming the k8s cluster is already up and running, the change is made as follows (note the kernel-module prerequisite sketched right below):
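
Before switching, the IPVS kernel modules must be available on every node. A minimal sketch (module names assume a reasonably recent kernel, where nf_conntrack replaced the older nf_conntrack_ipv4):

// Run on all nodes
modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
// Confirm the modules are loaded
lsmod | grep -E 'ip_vs|nf_conntrack'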

PS1.1 Log in to the master node k8s-master

PS1.2 Run the command: kubectl edit configmap kube-proxy -n kube-system

root@k8s-master:# kubectl edit configmap kube-proxy -n kube-system
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
// Remaining output omitted

Note: this command opens the ConfigMap in an editor, usually vim.

PS1.3 Change the mode value to ipvs

Find the mode line; it is empty by default. Change it to "ipvs", as shown below:

 ...
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"
    nodePortAddresses: null
    oomScoreAdj: null
  ...

PS1.4 Restart the kube-proxy pods

Look up the names of the kube-proxy pods on every node of the cluster and delete them (see the label-based shortcut after the note below):

// Run on the master node only
root@k8s-master:/# kubectl get pod -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-dbd6bd945-vlqxh   1/1     Running   0          23m
kube-system   calico-node-hkp5s                         1/1     Running   0          23m
kube-system   calico-node-ksxg8                         1/1     Running   0          23m
kube-system   calico-node-srd2r                         1/1     Running   0          23m
kube-system   coredns-7f6cbbb7b8-svhdg                  1/1     Running   0          22h
kube-system   coredns-7f6cbbb7b8-vmm2h                  1/1     Running   0          22h
kube-system   etcd-k8s-master                           1/1     Running   0          22h
kube-system   kube-apiserver-k8s-master                 1/1     Running   0          22h
kube-system   kube-controller-manager-k8s-master        1/1     Running   0          22h
kube-system   kube-proxy-4dvgq                          1/1     Running   0          22h
kube-system   kube-proxy-5jhkj                          1/1     Running   0          22h
kube-system   kube-proxy-5jtwf                          1/1     Running   0          22h
kube-system   kube-scheduler-k8s-master                 1/1     Running   0          22h
root@k8s-master:/# kubectl delete pod kube-proxy-4dvgq kube-proxy-5jhkj kube-proxy-5jtwf -n kube-system
pod "kube-proxy-4dvgq" deleted
pod "kube-proxy-5jhkj" deleted
pod "kube-proxy-5jtwf" deleted

Note: after we delete the kube-proxy pods here, the cluster automatically recreates them (they are managed by a DaemonSet), and at that point the new configuration takes effect.
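
Instead of copying pod names by hand, you can also delete them all in one go by label; this assumes the standard kubeadm label k8s-app=kube-proxy on the kube-proxy DaemonSet:

// Run on the master node only
kubectl delete pod -n kube-system -l k8s-app=kube-proxy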

PS1.5 Verify that IPVS is in effect

# First, install the ipvsadm package

// Run on all nodes
root@k8s-work01:~# apt-get install -y ipvsadm

# Inspect the IPVS virtual servers

root@k8s-master:~# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.100.230:6443         Masq    1      1          0
TCP  10.96.0.10:53 rr
  -> 172.16.178.130:53            Masq    1      0          0
  -> 172.16.178.131:53            Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 172.16.178.130:9153          Masq    1      0          0
  -> 172.16.178.131:9153          Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 172.16.178.130:53            Masq    1      0          0
  -> 172.16.178.131:53            Masq    1      0          0

The output above confirms that we have switched from iptables to IPVS: the Service VIPs (e.g. 10.96.0.1:443) now appear as IPVS virtual servers with round-robin (rr) scheduling.
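
You can additionally check the kube-proxy logs: on startup the proxier logs which mode it selected (the exact wording varies between Kubernetes versions, so treat the grep pattern as an approximation):

// Run on the master node
kubectl logs -n kube-system -l k8s-app=kube-proxy | grep -i ipvs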

PS2: Switch the network mode to BGP

By default, after initialization, pod-to-pod traffic goes over IPIP, a layer-3 IP-in-IP tunnel. Here we change the default IPIP mode to BGP, which is also the mode we recommend.
There are two cases, depending on whether Calico has already been deployed; the respective steps are as follows:

PS2.1 Switch to BGP mode before deploying Calico

# Step 1: after downloading the Calico file, change it as follows:

// Default:
- name: CALICO_IPV4POOL_IPIP
  value: "Always"
// After the change:
- name: CALICO_IPV4POOL_IPIP
  value: "Never"

Always: use IPIP mode
Never: use BGP mode

# After making this change, continue with the upload and deployment steps above (a sed shortcut follows)
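
If you would rather script this edit, a minimal sed sketch, assuming the value line directly follows the CALICO_IPV4POOL_IPIP line as in the stock manifest:

// Run in the directory containing the manifest
sed -i '/CALICO_IPV4POOL_IPIP/{n;s/"Always"/"Never"/}' calico-3.24.5.yaml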

PS2.2 Switch to BGP mode after Calico is deployed

# Run the command: kubectl edit ippools.crd.projectcalico.org, then change "ipipMode: Always" to "ipipMode: Never"

// Run on the master node
root@k8s-master:~# kubectl edit ippools.crd.projectcalico.org
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  annotations:
    projectcalico.org/metadata: '{"uid":"f4e0b414-e336-416b-80af-2213ec524f8a","creationTimestamp":"2023-01-10T07:50:16Z"}'
  creationTimestamp: "2023-01-10T07:50:16Z"
  generation: 2
  name: default-ipv4-ippool
  resourceVersion: "118907"
  uid: bc9f0137-ae69-429a-b405-3bc8af970cf6
spec:
  allowedUses:
  - Workload
  - Tunnel
  blockSize: 26
  cidr: 172.18.0.0/16
  ipipMode: Never
  natOutgoing: true
  nodeSelector: all()
  vxlanMode: Never

Note:
1. The editor used here is generally vim.
2. No further action is needed after the change; it takes effect immediately.
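
If you prefer a non-interactive change, a one-line merge patch works as well; the pool name default-ipv4-ippool is taken from the object shown above:

// Run on the master node
kubectl patch ippools.crd.projectcalico.org default-ipv4-ippool --type merge -p '{"spec":{"ipipMode":"Never"}}'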

PS2.3 Verify BGP mode

# Before the change (IPIP is the default), inspect the host routing table with the command route -n:

// Any node will do
root@k8s-master:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.100.2   0.0.0.0         UG    100    0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 ens33
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.178.128  192.168.100.232 255.255.255.192 UG    0      0        0 tunl0
172.18.235.192  0.0.0.0         255.255.255.192 U     0      0        0 *
172.18.236.0    192.168.100.231 255.255.255.192 UG    0      0        0 tunl0
192.168.100.0   0.0.0.0         255.255.255.0   U     100    0        0 ens33

In the output above, the next hops toward the other nodes' pod subnets go through the tunl0 tunnel interface.

# After switching to BGP mode, inspect the host routing table with route -n again:

// Any node will do
root@k8s-master:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.100.2   0.0.0.0         UG    100    0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 ens33
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.178.128  192.168.100.232 255.255.255.192 UG    0      0        0 ens33
172.18.235.192  0.0.0.0         255.255.255.192 U     0      0        0 *
172.18.236.0    192.168.100.231 255.255.255.192 UG    0      0        0 ens33
192.168.100.0   0.0.0.0         255.255.255.0   U     100    0        0 ens33

In the output above, the next hops no longer use tunl0 (they go out through the physical interface ens33 instead), which shows the change to BGP succeeded.
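
If calicoctl happens to be installed (it is not part of the steps above, so this is optional), you can also confirm that the BGP sessions between the nodes are established:

// Run on any node with calicoctl installed
calicoctl node status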

The next part covers deploying the monitoring plugin metric. Part 3: Deploying metric on Kubernetes
