K8S
小牛编辑
2023-12-01
kubenet
Configuring kubenet
1. Install K8S:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
2. Check the default network-plugin:
$ sudo cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2"
3. Switch the default CNI network plugin to kubenet:
sudo sed -i 's/cni/kubenet/' /var/lib/kubelet/kubeadm-flags.env
4. Restart kubelet:
sudo systemctl restart kubelet.service
5. Verify that kubelet is now using the kubenet network-plugin:
$ ps -ef | grep kubelet
root 771 1 2 15:47 ? 00:00:42 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=kubenet --pod-infra-container-image=k8s.gcr.io/pause:3.2
6. Check the nodes:
$ kubectl get nodes --no-headers
node-1 Ready control-plane,master 8m10s v1.20.5
7. Check the Pods:
$ kubectl get pods --all-namespaces --no-headers
kube-system coredns-74ff55c5b-kjbw2 1/1 Running 0 8m40s
kube-system coredns-74ff55c5b-vc586 1/1 Running 0 8m40s
kube-system etcd-node-1 1/1 Running 1 8m56s
kube-system kube-apiserver-node-1 1/1 Running 2 8m56s
kube-system kube-controller-manager-node-1 1/1 Running 1 8m56s
kube-system kube-proxy-cxhxp 1/1 Running 1 8m40s
kube-system kube-scheduler-node-1 1/1 Running 1 8m56s
8. Inspect the host network namespace:
$ ip a
...
4: cbr0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc htb state UP group default qlen 1000
link/ether 82:8d:06:e8:d7:13 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.1/24 brd 192.168.0.255 scope global cbr0
valid_lft forever preferred_lft forever
inet6 fe80::808d:6ff:fee8:d713/64 scope link
valid_lft forever preferred_lft forever
5: veth3b55bb42@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cbr0 state UP group default
link/ether 66:19:1b:53:53:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::6419:1bff:fe53:5302/64 scope link
valid_lft forever preferred_lft forever
6: vethdfd6e306@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cbr0 state UP group default
link/ether 8e:10:37:e9:f3:71 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::8c10:37ff:fee9:f371/64 scope link
valid_lft forever preferred_lft forever
Address assignment with multiple containers in a Pod
1. Create a multi-container Pod:
kubectl apply -f pod.yaml
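The pod.yaml manifest is not reproduced in the article. A minimal sketch consistent with the steps below (a Pod named test with containers container-1 and container-2; the busybox image and sleep command are illustrative assumptions) might look like:

```yaml
# Hypothetical pod.yaml: only the Pod and container names are known
# from the article; image and command are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: container-1
    image: busybox
    command: ["sleep", "3600"]
  - name: container-2
    image: busybox
    command: ["sleep", "3600"]
```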
2. Check the new virtual NIC in the host network namespace:
$ ip a
...
13: vethe375804a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cbr0 state UP group default
link/ether d6:43:ff:72:e5:73 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::d443:ffff:fe72:e573/64 scope link
valid_lft forever preferred_lft forever
3. Log in to container-1 and inspect the container network:
$ kubectl exec -it test -c container-1 -- sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
3: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether da:28:94:7b:5c:30 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.10/24 brd 192.168.0.255 scope global eth0
valid_lft forever preferred_lft forever
4. Log in to container-2 and inspect the container network:
$ kubectl exec -it test -c container-2 -- sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
3: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether da:28:94:7b:5c:30 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.10/24 brd 192.168.0.255 scope global eth0
valid_lft forever preferred_lft forever
Both containers report the same eth0 interface and IP (192.168.0.10): all containers in a Pod share a single network namespace.
5. Delete test:
kubectl delete -f pod.yaml
Pod-to-Pod communication
6. Create two Pods:
kubectl apply -f deployment.yaml
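The deployment.yaml manifest is not shown. A plausible minimal version, matching the Pod names (test-6dbc498c76-...) and the container name container-1 seen below, with the image and labels as assumptions:

```yaml
# Hypothetical deployment.yaml: the Deployment name, replica count, and
# container name follow the article's output; image and labels are assumed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: container-1
        image: busybox
        command: ["sleep", "3600"]
```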
7. Check the new virtual interfaces in the host network namespace:
$ ip a
...
14: vethe3d49bdf@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cbr0 state UP group default
link/ether 7a:0b:0b:e6:ed:55 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::780b:bff:fee6:ed55/64 scope link
valid_lft forever preferred_lft forever
15: veth437f1591@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cbr0 state UP group default
link/ether 66:9f:3f:3e:e7:f6 brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::649f:3fff:fe3e:e7f6/64 scope link
valid_lft forever preferred_lft forever
8. Inspect Pod 1's network:
$ kubectl exec -it test-6dbc498c76-n4sss -c container-1 -- sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
3: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether e2:a0:5c:6a:43:2f brd ff:ff:ff:ff:ff:ff
inet 192.168.0.11/24 brd 192.168.0.255 scope global eth0
valid_lft forever preferred_lft forever
9. Inspect Pod 2's network:
$ kubectl exec -it test-6dbc498c76-46vk6 -c container-1 -- sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
3: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 36:c5:0b:93:be:a4 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.12/24 brd 192.168.0.255 scope global eth0
valid_lft forever preferred_lft forever
10. Capture ICMP packets with tcpdump on the K8S node:
sudo tcpdump -nni cbr0 icmp
11. From container-1 in Pod 1, ping container-1 in Pod 2:
ping 192.168.0.12
ClusterIP-type Services
1. Check the Service IP range:
$ ps -ef | grep apiserver | grep service-cluster-ip-range
root 5626 5597 4 20:43 ? 00:06:25 kube-apiserver --advertise-address=10.1.10.9 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
Note: --service-cluster-ip-range=10.96.0.0/12
2. Create the Service:
kubectl apply -f service.yaml
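The service.yaml manifest is not shown. Since curl against the Service returns Pod names, a plausible sketch pairs a ClusterIP Service (port 80, targetPort 9376, matching the direct Pod access on port 9376 later in this section) with a hostname-serving Deployment; the labels and image are assumptions:

```yaml
# Hypothetical service.yaml: names and ports follow the article's output;
# labels and the hostname-serving image are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  selector:
    app: test-service
  ports:
  - port: 80
    targetPort: 9376
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test-service
  template:
    metadata:
      labels:
        app: test-service
    spec:
      containers:
      - name: container-1
        image: k8s.gcr.io/serve_hostname  # responds with the Pod name
        ports:
        - containerPort: 9376
```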
3. List the created Pod names:
$ kubectl get pods --no-headers | awk '{print $1}'
test-service-6f6f8db499-ntkcc
test-service-6f6f8db499-s2dwn
4. Get the Service IP:
$ kubectl get svc test-service --no-headers | awk '{print $3}'
10.107.168.72
5. Access the service:
$ for i in {1..5} ; do curl 10.107.168.72 ; done
test-service-6f6f8db499-s2dwn
test-service-6f6f8db499-ntkcc
test-service-6f6f8db499-s2dwn
test-service-6f6f8db499-ntkcc
test-service-6f6f8db499-s2dwn
6. Add an iptables rule to allow Pod-to-Service traffic:
sudo iptables -I FORWARD 2 -j ACCEPT
7. Create a temporary Pod and test access:
$ kubectl run -it --rm --restart=Never busybox --image=busybox sh
If you don't see a command prompt, try pressing enter.
/ # wget -S -O - 10.107.168.72
/ # wget -S -O - 192.168.0.20:9376
Debugging ClusterIP Service access
1. Create the service:
kubectl apply -f echoserver.yaml
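The echoserver.yaml manifest is not shown. A minimal sketch consistent with the output below (Service echoserver on port 8877, two replicas, DNAT to Pod port 8877); the labels and image are assumptions:

```yaml
# Hypothetical echoserver.yaml: name, port, and replica count follow the
# article's output; labels and image are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  selector:
    app: echoserver
  ports:
  - port: 8877
    targetPort: 8877
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - name: echoserver
        image: k8s.gcr.io/echoserver:1.4  # assumed; must listen on 8877
        ports:
        - containerPort: 8877
```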
2. Check the Service and Pod IPs:
$ kubectl get svc echoserver --no-headers
echoserver ClusterIP 10.106.23.233 <none> 8877/TCP 45s
$ kubectl get pods -o wide --no-headers
echoserver-6dbbc8d5fc-f455t 1/1 Running 0 3m24s 192.168.0.33 node-1 <none> <none>
echoserver-6dbbc8d5fc-n4smh 1/1 Running 0 3m24s 192.168.0.34 node-1 <none> <none>
3. PREROUTING rules in the nat table:
$ sudo iptables -t nat -vnL PREROUTING
Chain PREROUTING (policy ACCEPT 338 packets, 15210 bytes)
pkts bytes target prot opt in out source destination
521 24674 KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
2 128 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
4. KUBE-SERVICES rules in the nat table:
$ sudo iptables -t nat -vnL KUBE-SERVICES
Chain KUBE-SERVICES (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- * * 0.0.0.0/0 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.0.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
0 0 KUBE-SVC-JD5MR3NA4I4DYORP tcp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
0 0 KUBE-MARK-MASQ udp -- * * !192.168.0.0/16 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.106.23.233 /* default/echoserver cluster IP */ tcp dpt:8877
0 0 KUBE-SVC-HOYURHXRFA5BUYEO tcp -- * * 0.0.0.0/0 10.106.23.233 /* default/echoserver cluster IP */ tcp dpt:8877
537 31690 KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
$ sudo iptables -t nat -vnL KUBE-SERVICES | grep 10.106.23.233
0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.106.23.233 /* default/echoserver cluster IP */ tcp dpt:8877
0 0 KUBE-SVC-HOYURHXRFA5BUYEO tcp -- * * 0.0.0.0/0 10.106.23.233 /* default/echoserver cluster IP */ tcp dpt:8877
5. KUBE-SVC- rules in the nat table:
$ sudo iptables -t nat -vnL KUBE-SVC-HOYURHXRFA5BUYEO
Chain KUBE-SVC-HOYURHXRFA5BUYEO (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-652URVIXIJWATNFG all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/echoserver */ statistic mode random probability 0.50000000000
0 0 KUBE-SEP-ASOAWBDFEODJJPJH all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/echoserver */
6. KUBE-SEP- rules in the nat table:
$ sudo iptables -t nat -vnL KUBE-SEP-652URVIXIJWATNFG
Chain KUBE-SEP-652URVIXIJWATNFG (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.0.33 0.0.0.0/0 /* default/echoserver */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/echoserver */ tcp to:192.168.0.33:8877
$ sudo iptables -t nat -vnL KUBE-SEP-ASOAWBDFEODJJPJH
Chain KUBE-SEP-ASOAWBDFEODJJPJH (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.0.34 0.0.0.0/0 /* default/echoserver */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/echoserver */ tcp to:192.168.0.34:8877
7. Scale echoserver to 3 replicas, then list the Pods:
$ kubectl get pod -o wide --no-headers
echoserver-6dbbc8d5fc-hqxdv 1/1 Running 0 13m 192.168.0.33 node-1 <none> <none>
echoserver-6dbbc8d5fc-kj27r 1/1 Running 0 13m 192.168.0.34 node-1 <none> <none>
echoserver-6dbbc8d5fc-tgj24 1/1 Running 0 6s 192.168.0.35 node-1 <none> <none>
8. KUBE-SVC- rules in the nat table:
$ sudo iptables -t nat -vnL KUBE-SVC-HOYURHXRFA5BUYEO
Chain KUBE-SVC-HOYURHXRFA5BUYEO (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-652URVIXIJWATNFG all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/echoserver */ statistic mode random probability 0.33333333349
0 0 KUBE-SEP-ASOAWBDFEODJJPJH all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/echoserver */ statistic mode random probability 0.50000000000
0 0 KUBE-SEP-7ZRSXHFJXB4D6W3U all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/echoserver */
9. The newly added KUBE-SEP- rules in the nat table:
$ sudo iptables -t nat -vnL KUBE-SEP-7ZRSXHFJXB4D6W3U
Chain KUBE-SEP-7ZRSXHFJXB4D6W3U (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.0.35 0.0.0.0/0 /* default/echoserver */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/echoserver */ tcp to:192.168.0.35:8877
Services with ClientIP session affinity
1. Create the Service:
kubectl apply -f clientip.yaml
clientip.yaml
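The manifest itself is not reproduced. A minimal sketch consistent with the iptables output below (sessionAffinity ClientIP with the default 10800-second timeout, port 80 forwarded to Pod port 9376); labels are assumptions:

```yaml
# Hypothetical clientip.yaml: affinity and ports follow the article's
# iptables output (recent: CHECK seconds: 10800, DNAT to :9376).
apiVersion: v1
kind: Service
metadata:
  name: test-clientip
spec:
  selector:
    app: test-clientip
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  ports:
  - port: 80
    targetPort: 9376
```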
2. Check the Service and Pods:
$ kubectl get svc test-clientip --no-headers
test-clientip ClusterIP 10.107.215.65 <none> 80/TCP 7h26m
$ kubectl get pods -o wide --no-headers
test-clientip-55c6c8ddcd-2ntlk 1/1 Running 0 7h27m 192.168.0.37 node-1 <none> <none>
test-clientip-55c6c8ddcd-ktlxt 1/1 Running 0 7h27m 192.168.0.36 node-1 <none> <none>
3. Access the service:
$ for i in {1..5} ; do curl 10.107.215.65 ; done
test-clientip-55c6c8ddcd-2ntlk
test-clientip-55c6c8ddcd-2ntlk
test-clientip-55c6c8ddcd-2ntlk
test-clientip-55c6c8ddcd-2ntlk
test-clientip-55c6c8ddcd-2ntlk
All five requests hit the same Pod, as expected with ClientIP session affinity.
4. PREROUTING rules in the nat table:
$ sudo iptables -t nat -vnL PREROUTING
Chain PREROUTING (policy ACCEPT 612 packets, 27540 bytes)
pkts bytes target prot opt in out source destination
3258 149K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
2 128 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
5. KUBE-SERVICES rules in the nat table:
$ sudo iptables -t nat -vnL KUBE-SERVICES
Chain KUBE-SERVICES (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- * * 0.0.0.0/0 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.0.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
0 0 KUBE-SVC-JD5MR3NA4I4DYORP tcp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
0 0 KUBE-MARK-MASQ udp -- * * !192.168.0.0/16 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
8 480 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.107.215.65 /* default/test-clientip cluster IP */ tcp dpt:80
8 480 KUBE-SVC-JASYFCTGROL6PGNE tcp -- * * 0.0.0.0/0 10.107.215.65 /* default/test-clientip cluster IP */ tcp dpt:80
814 48164 KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
$ sudo iptables -t nat -vnL KUBE-SERVICES | grep 10.107.215.65
8 480 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.107.215.65 /* default/test-clientip cluster IP */ tcp dpt:80
8 480 KUBE-SVC-JASYFCTGROL6PGNE tcp -- * * 0.0.0.0/0 10.107.215.65 /* default/test-clientip cluster IP */ tcp dpt:80
6. KUBE-SVC- rules in the nat table. The match "recent: CHECK seconds: 10800 reap name: KUBE-SEP-2WE6A5EBAO3UGN4N side: source mask: 255.255.255.255" sends a client back to the endpoint it used within the last 10800 seconds:
$ sudo iptables -t nat -vnL KUBE-SVC-JASYFCTGROL6PGNE
Chain KUBE-SVC-JASYFCTGROL6PGNE (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-2WE6A5EBAO3UGN4N all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/test-clientip */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-2WE6A5EBAO3UGN4N side: source mask: 255.255.255.255
7 420 KUBE-SEP-LXKS3SWKA3X476YD all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/test-clientip */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-LXKS3SWKA3X476YD side: source mask: 255.255.255.255
0 0 KUBE-SEP-2WE6A5EBAO3UGN4N all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/test-clientip */ statistic mode random probability 0.50000000000
1 60 KUBE-SEP-LXKS3SWKA3X476YD all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/test-clientip */
7. KUBE-SEP- rules in the nat table:
$ sudo iptables -t nat -vnL KUBE-SEP-2WE6A5EBAO3UGN4N
Chain KUBE-SEP-2WE6A5EBAO3UGN4N (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.0.36 0.0.0.0/0 /* default/test-clientip */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/test-clientip */ recent: SET name: KUBE-SEP-2WE6A5EBAO3UGN4N side: source mask: 255.255.255.255 tcp to:192.168.0.36:9376
$ sudo iptables -t nat -vnL KUBE-SEP-LXKS3SWKA3X476YD
Chain KUBE-SEP-LXKS3SWKA3X476YD (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.0.37 0.0.0.0/0 /* default/test-clientip */
8 480 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/test-clientip */ recent: SET name: KUBE-SEP-LXKS3SWKA3X476YD side: source mask: 255.255.255.255 tcp to:192.168.0.37:9376
Publishing a ClusterIP Service externally via a route table
1. Create the Service:
kubectl apply -f service.yaml
2. Check the Node IP, Service IP, and Pod IPs:
$ kubectl get node -o wide --no-headers
node-1 Ready control-plane,master 15h v1.20.5 10.1.10.9 <none> Ubuntu 18.04 LTS 4.15.0-140-generic docker://20.10.3
$ kubectl get svc test-service --no-headers
test-service ClusterIP 10.106.235.190 <none> 80/TCP 112s
$ kubectl get pods -o wide --no-headers
test-service-6f6f8db499-6j7nm 1/1 Running 0 2m24s 192.168.0.38 node-1 <none> <none>
test-service-6f6f8db499-m8lsx 1/1 Running 0 2m24s 192.168.0.39 node-1 <none> <none>
3. Check the Service network range:
$ ps -ef | grep apiserver | grep service-cluster-ip-range
root 2582 2554 4 08:26 ? 00:03:03 kube-apiserver --advertise-address=10.1.10.9 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
4. Configure a route on 10.1.10.8:
# ip r
default via 10.1.10.2 dev ens33 proto static metric 100
10.1.10.0/24 dev ens33 proto kernel scope link src 10.1.10.8 metric 100
# ip route add 10.96.0.0/12 via 10.1.10.9
# ip r
default via 10.1.10.2 dev ens33 proto static metric 100
10.1.10.0/24 dev ens33 proto kernel scope link src 10.1.10.8 metric 100
10.96.0.0/12 via 10.1.10.9 dev ens33
5. Access test-service from 10.1.10.8:
curl 10.106.235.190
Publishing a ClusterIP Service externally via an External IP
1. Create an External IP Service:
kubectl apply -f externalip.yaml
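The externalip.yaml manifest is not shown. A minimal sketch consistent with the output below (ClusterIP Service with external IP 10.1.10.9, port 80 forwarded to Pod port 9376); labels are assumptions:

```yaml
# Hypothetical externalip.yaml: the external IP and ports follow the
# article's output; labels are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: test-externalip
spec:
  selector:
    app: test-externalip
  externalIPs:
  - 10.1.10.9
  ports:
  - port: 80
    targetPort: 9376
```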
2. Check the created Service:
$ kubectl get svc test-externalip
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test-externalip ClusterIP 10.97.132.81 10.1.10.9 80/TCP 101s
3. Access the service via the EXTERNAL-IP:
$ for i in {1..5} ; do curl 10.1.10.9 ; done
test-externalip-8fc497f8-jncpv
test-externalip-8fc497f8-jncpv
test-externalip-8fc497f8-phldw
test-externalip-8fc497f8-phldw
test-externalip-8fc497f8-phldw
4. PREROUTING rules in the nat table:
$ sudo iptables -t nat -vnL PREROUTING
Chain PREROUTING (policy ACCEPT 1165 packets, 52425 bytes)
pkts bytes target prot opt in out source destination
8114 369K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
4 296 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
5. KUBE-SERVICES rules in the nat table (note the new rules added for the external IP):
$ sudo iptables -t nat -vnL KUBE-SERVICES
Chain KUBE-SERVICES (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.0.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
0 0 KUBE-SVC-JD5MR3NA4I4DYORP tcp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
0 0 KUBE-MARK-MASQ udp -- * * !192.168.0.0/16 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.97.132.81 /* default/test-externalip cluster IP */ tcp dpt:80
0 0 KUBE-SVC-CITWPFL6QQOR27AK tcp -- * * 0.0.0.0/0 10.97.132.81 /* default/test-externalip cluster IP */ tcp dpt:80
27 1700 KUBE-MARK-MASQ tcp -- * * 0.0.0.0/0 10.1.10.9 /* default/test-externalip external IP */ tcp dpt:80
20 1280 KUBE-SVC-CITWPFL6QQOR27AK tcp -- * * 0.0.0.0/0 10.1.10.9 /* default/test-externalip external IP */ tcp dpt:80 PHYSDEV match ! --physdev-is-in ADDRTYPE match src-type !LOCAL
7 420 KUBE-SVC-CITWPFL6QQOR27AK tcp -- * * 0.0.0.0/0 10.1.10.9 /* default/test-externalip external IP */ tcp dpt:80 ADDRTYPE match dst-type LOCAL
0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- * * 0.0.0.0/0 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
1429 84328 KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
$ sudo iptables -t nat -vnL KUBE-SERVICES | grep 10.97.132.81
0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.97.132.81 /* default/test-externalip cluster IP */ tcp dpt:80
0 0 KUBE-SVC-CITWPFL6QQOR27AK tcp -- * * 0.0.0.0/0 10.97.132.81 /* default/test-externalip cluster IP */ tcp dpt:80
$ sudo iptables -t nat -vnL KUBE-SERVICES | grep 10.1.10.9
27 1700 KUBE-MARK-MASQ tcp -- * * 0.0.0.0/0 10.1.10.9 /* default/test-externalip external IP */ tcp dpt:80
20 1280 KUBE-SVC-CITWPFL6QQOR27AK tcp -- * * 0.0.0.0/0 10.1.10.9 /* default/test-externalip external IP */ tcp dpt:80 PHYSDEV match ! --physdev-is-in ADDRTYPE match src-type !LOCAL
7 420 KUBE-SVC-CITWPFL6QQOR27AK tcp -- * * 0.0.0.0/0 10.1.10.9 /* default/test-externalip external IP */ tcp dpt:80 ADDRTYPE match dst-type LOCAL
6. KUBE-SVC- rules in the nat table:
$ sudo iptables -t nat -vnL KUBE-SVC-CITWPFL6QQOR27AK
Chain KUBE-SVC-CITWPFL6QQOR27AK (3 references)
pkts bytes target prot opt in out source destination
14 884 KUBE-SEP-RRILQQHBGE5IMDI4 all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/test-externalip */ statistic mode random probability 0.50000000000
13 816 KUBE-SEP-JRIE3IXDMRY6BNG5 all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/test-externalip */
7. KUBE-SEP- rules in the nat table:
$ sudo iptables -t nat -vnL KUBE-SEP-RRILQQHBGE5IMDI4
Chain KUBE-SEP-RRILQQHBGE5IMDI4 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.0.40 0.0.0.0/0 /* default/test-externalip */
14 884 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/test-externalip */ tcp to:192.168.0.40:9376
$ sudo iptables -t nat -vnL KUBE-SEP-JRIE3IXDMRY6BNG5
Chain KUBE-SEP-JRIE3IXDMRY6BNG5 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.0.41 0.0.0.0/0 /* default/test-externalip */
13 816 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/test-externalip */ tcp to:192.168.0.41:9376
NodePort-type Services
1. Create a NodePort Service:
kubectl apply -f nodeport.yaml
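The nodeport.yaml manifest is not shown. A minimal sketch consistent with the output below (type NodePort, port 80, node port 32228, DNAT to Pod port 9376); labels are assumptions:

```yaml
# Hypothetical nodeport.yaml: type, ports, and nodePort follow the
# article's output; labels are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: test-nodeport
spec:
  type: NodePort
  selector:
    app: test-nodeport
  ports:
  - port: 80
    targetPort: 9376
    nodePort: 32228
```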
2. Check the created Service and Pods:
$ kubectl get svc test-nodeport --no-headers
test-nodeport NodePort 10.97.231.111 <none> 80:32228/TCP 98s
$ kubectl get pods -o wide --no-headers
test-nodeport-5d4bdfc7c7-4kftd 1/1 Running 0 2m38s 192.168.0.42 node-1 <none> <none>
test-nodeport-5d4bdfc7c7-s2jz5 1/1 Running 0 2m38s 192.168.0.43 node-1 <none> <none>
3. Access the service:
$ for i in {1..5} ; do curl 10.1.10.9:32228 ; done
test-nodeport-5d4bdfc7c7-s2jz5
test-nodeport-5d4bdfc7c7-s2jz5
test-nodeport-5d4bdfc7c7-4kftd
test-nodeport-5d4bdfc7c7-4kftd
test-nodeport-5d4bdfc7c7-4kftd
Debugging NodePort Service access
1. Create a NodePort Service:
kubectl apply -f nodeport.yaml
2. Check the created Service and Pods:
$ kubectl get svc test-nodeport --no-headers
test-nodeport NodePort 10.97.231.111 <none> 80:32228/TCP 98s
$ kubectl get pods -o wide --no-headers
test-nodeport-5d4bdfc7c7-4kftd 1/1 Running 0 2m38s 192.168.0.42 node-1 <none> <none>
test-nodeport-5d4bdfc7c7-s2jz5 1/1 Running 0 2m38s 192.168.0.43 node-1 <none> <none>
3. Access the service repeatedly:
$ for i in {1..1000} ; do curl 10.1.10.9:32228 ; done
4. PREROUTING rules in the nat table:
$ sudo iptables -t nat -vnL PREROUTING
Chain PREROUTING (policy ACCEPT 422 packets, 18990 bytes)
pkts bytes target prot opt in out source destination
15548 799K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
4 296 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
5. KUBE-SERVICES rules in the nat table (the ClusterIP rules remain; node-port traffic falls through to the KUBE-NODEPORTS chain):
$ sudo iptables -t nat -vnL KUBE-SERVICES
Chain KUBE-SERVICES (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.0.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
0 0 KUBE-SVC-JD5MR3NA4I4DYORP tcp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
0 0 KUBE-MARK-MASQ udp -- * * !192.168.0.0/16 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/0 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- * * 0.0.0.0/0 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.97.231.111 /* default/test-nodeport cluster IP */ tcp dpt:80
0 0 KUBE-SVC-CIFSXFMKAAMIL4QG tcp -- * * 0.0.0.0/0 10.97.231.111 /* default/test-nodeport cluster IP */ tcp dpt:80
5798 367K KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
$ sudo iptables -t nat -vnL KUBE-SERVICES | grep 10.97.231.111
0 0 KUBE-MARK-MASQ tcp -- * * !192.168.0.0/16 10.97.231.111 /* default/test-nodeport cluster IP */ tcp dpt:80
0 0 KUBE-SVC-CIFSXFMKAAMIL4QG tcp -- * * 0.0.0.0/0 10.97.231.111 /* default/test-nodeport cluster IP */ tcp dpt:80
$ sudo iptables -t nat -vnL KUBE-SERVICES | grep KUBE-NODEPORTS
6098 385K KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
6. KUBE-NODEPORTS rules in the nat table:
$ sudo iptables -t nat -vnL KUBE-NODEPORTS
Chain KUBE-NODEPORTS (1 references)
pkts bytes target prot opt in out source destination
5015 321K KUBE-MARK-MASQ tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/test-nodeport */ tcp dpt:32228
5015 321K KUBE-SVC-CIFSXFMKAAMIL4QG tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/test-nodeport */ tcp dpt:32228
7. KUBE-SVC- rules in the nat table:
$ sudo iptables -t nat -vnL KUBE-SVC-CIFSXFMKAAMIL4QG
Chain KUBE-SVC-CIFSXFMKAAMIL4QG (2 references)
pkts bytes target prot opt in out source destination
2560 164K KUBE-SEP-EEAMLDZD2ZLPIVQ3 all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/test-nodeport */ statistic mode random probability 0.50000000000
2455 157K KUBE-SEP-3C6WTWWWE5M27K7C all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/test-nodeport */
8. KUBE-SEP- rules in the nat table:
$ sudo iptables -t nat -vnL KUBE-SEP-EEAMLDZD2ZLPIVQ3
Chain KUBE-SEP-EEAMLDZD2ZLPIVQ3 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.0.42 0.0.0.0/0 /* default/test-nodeport */
2560 164K DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/test-nodeport */ tcp to:192.168.0.42:9376
$ sudo iptables -t nat -vnL KUBE-SEP-3C6WTWWWE5M27K7C
Chain KUBE-SEP-3C6WTWWWE5M27K7C (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.0.43 0.0.0.0/0 /* default/test-nodeport */
2455 157K DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/test-nodeport */ tcp to:192.168.0.43:9376
9. SNAT rules for cross-worker-node traffic:
$ sudo iptables -t nat -vnL KUBE-MARK-MASQ
Chain KUBE-MARK-MASQ (15 references)
pkts bytes target prot opt in out source destination
5015 321K MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x4000
$ sudo iptables -t nat -vnL KUBE-POSTROUTING
Chain KUBE-POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
4258 228K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 mark match ! 0x4000/0x4000
5015 321K MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK xor 0x4000
5015 321K MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */
K8S DNS
1. Create the service:
kubectl apply -f dns.yaml
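The dns.yaml manifest is not shown. A minimal sketch consistent with the output below (ClusterIP Service test-dns on port 80 with two backing Pods); the labels, image, and target port are assumptions:

```yaml
# Hypothetical dns.yaml: the Service name and port follow the article's
# output; labels, image, and targetPort are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: test-dns
spec:
  selector:
    app: test-dns
  ports:
  - port: 80
    targetPort: 9376
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-dns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test-dns
  template:
    metadata:
      labels:
        app: test-dns
    spec:
      containers:
      - name: container-1
        image: k8s.gcr.io/serve_hostname  # assumed
        ports:
        - containerPort: 9376
```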
2. Check the created Service and Pods:
$ kubectl get svc test-dns --no-headers
test-dns ClusterIP 10.106.139.47 <none> 80/TCP 96s
$ kubectl get pods -o wide --no-headers
test-dns-6bff6cbdc5-2n6jx 1/1 Running 0 2m17s 192.168.0.44 node-1 <none> <none>
test-dns-6bff6cbdc5-hq4fx 1/1 Running 0 2m17s 192.168.0.45 node-1 <none> <none>
3. Create a temporary Pod for DNS query testing:
$ kubectl run -it --rm --restart=Never busybox --image=busybox sh
If you don't see a command prompt, try pressing enter.
/ #
4. nslookup the Service domain name:
/ # nslookup test-dns
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: test-dns.default.svc.cluster.local
Address: 10.106.139.47
/ # nslookup test-dns.default.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: test-dns.default.svc.cluster.local
Address: 10.106.139.47
5. nslookup PTR records:
/ # nslookup 10.106.139.47
Server: 10.96.0.10
Address: 10.96.0.10:53
47.139.106.10.in-addr.arpa name = test-dns.default.svc.cluster.local
/ # nslookup 192.168.0.44
Server: 10.96.0.10
Address: 10.96.0.10:53
44.0.168.192.in-addr.arpa name = 192-168-0-44.test-dns.default.svc.cluster.local
/ # nslookup 192.168.0.45
Server: 10.96.0.10
Address: 10.96.0.10:53
45.0.168.192.in-addr.arpa name = 192-168-0-45.test-dns.default.svc.cluster.local
6. nslookup a Pod domain name:
/ # nslookup 192-168-0-44.test-dns.default.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: 192-168-0-44.test-dns.default.svc.cluster.local
Address: 192.168.0.44
K8S HostPort
1. Create a HostPort Pod:
kubectl apply -f hostPort.yaml
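The hostPort.yaml manifest is not shown. A minimal sketch consistent with the iptables output below (a Pod named nginx in the default namespace, host port 8081 mapped to container port 80):

```yaml
# Hypothetical hostPort.yaml: the Pod name, namespace, and port mapping
# follow the article's iptables comment "nginx_default hostport 8081".
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8081
```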
2. Access the service:
$ curl 10.1.10.9:8081
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
3. KUBE-HOSTPORTS rules in the nat table:
$ sudo iptables -t nat -vnL KUBE-HOSTPORTS
Chain KUBE-HOSTPORTS (2 references)
pkts bytes target prot opt in out source destination
1 60 KUBE-HP-KWJPLLZCGIIKHTTD tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* nginx_default hostport 8081 */ tcp dpt:8081
4. KUBE-HP- rules in the nat table:
$ sudo iptables -t nat -vnL KUBE-HP-KWJPLLZCGIIKHTTD
Chain KUBE-HP-KWJPLLZCGIIKHTTD (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.0.47 0.0.0.0/0 /* nginx_default hostport 8081 */
1 60 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* nginx_default hostport 8081 */ tcp to:192.168.0.47:80
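The two rules above implement a simple rewrite: any TCP packet arriving on host port 8081 is DNATed to the Pod at 192.168.0.47:80 (plus a masquerade mark when the Pod addresses its own hostPort). A toy model of that DNAT step, with the values taken from the rules above:

```python
# Toy model of the KUBE-HP-* DNAT rule above: hostPort -> PodIP:containerPort.
# The real data path is in netfilter; this just illustrates the rewrite.

HOSTPORT_DNAT = {("tcp", 8081): ("192.168.0.47", 80)}  # from the rule above

def dnat(proto: str, dst_ip: str, dst_port: int):
    """Rewrite (dst_ip, dst_port) the way the KUBE-HP chain would."""
    target = HOSTPORT_DNAT.get((proto, dst_port))
    if target is None:
        return dst_ip, dst_port        # no hostPort rule matches
    return target                      # DNAT to the Pod

print(dnat("tcp", "10.1.10.9", 8081))  # ('192.168.0.47', 80)
print(dnat("tcp", "10.1.10.9", 9090))  # ('10.1.10.9', 9090) -- untouched
```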
K8S HostNetwork
1. Create the HostNetwork Pod
kubectl apply -f hostNetwork.yaml
2. Access the service
$ curl 10.1.10.9
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Flannel
Flannel Configuration
1. Initialize the cluster with kubeadm
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
2. Install the network plugin
kubectl apply -f kube-flannel-host-gw.yml
3. Join a worker node, then list the nodes
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node-1 Ready control-plane,master 84m v1.20.5
node-2 Ready <none> 7m9s v1.20.5
4. Check the installation result
$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-74ff55c5b-dxwb6 1/1 Running 1 84m 10.244.0.4 node-1 <none> <none>
kube-system coredns-74ff55c5b-vv8bx 1/1 Running 1 84m 10.244.0.5 node-1 <none> <none>
kube-system etcd-node-1 1/1 Running 1 85m 10.1.10.9 node-1 <none> <none>
kube-system kube-apiserver-node-1 1/1 Running 1 85m 10.1.10.9 node-1 <none> <none>
kube-system kube-controller-manager-node-1 1/1 Running 1 85m 10.1.10.9 node-1 <none> <none>
kube-system kube-flannel-ds-v8n7m 1/1 Running 0 7m39s 10.1.10.10 node-2 <none> <none>
kube-system kube-flannel-ds-wsxps 1/1 Running 1 81m 10.1.10.9 node-1 <none> <none>
kube-system kube-proxy-24l9w 1/1 Running 1 84m 10.1.10.9 node-1 <none> <none>
kube-system kube-proxy-gsdwh 1/1 Running 0 7m39s 10.1.10.10 node-2 <none> <none>
kube-system kube-scheduler-node-1 1/1 Running 1 85m 10.1.10.9 node-1 <none> <none>
5. Reschedule coredns
kubectl scale -n kube-system deploy/coredns --replicas=0
kubectl scale -n kube-system deploy/coredns --replicas=2
6. Check the installation result again
$ kubectl get pods -n kube-system -o wide --no-headers | grep coredns
coredns-74ff55c5b-5jdgq 1/1 Running 0 32s 10.244.1.12 node-2 <none> <none>
coredns-74ff55c5b-gt5jh 1/1 Running 0 104s 10.244.0.6 node-1 <none> <none>
7. Inspect the master's host network
$ ip a
...
4: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ee:54:ee:d0:94:7d brd ff:ff:ff:ff:ff:ff
inet 10.244.0.1/24 brd 10.244.0.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::ec54:eeff:fed0:947d/64 scope link
valid_lft forever preferred_lft forever
7: veth38645991@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni0 state UP group default
link/ether b6:ad:8e:5b:19:cf brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::b4ad:8eff:fe5b:19cf/64 scope link
valid_lft forever preferred_lft forever
8. Inspect the worker's host network
$ ip a
...
4: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether a2:b6:d4:1d:2f:38 brd ff:ff:ff:ff:ff:ff
inet 10.244.1.1/24 brd 10.244.1.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::a0b6:d4ff:fe1d:2f38/64 scope link
valid_lft forever preferred_lft forever
15: veth30eb6ff0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni0 state UP group default
link/ether e6:4e:2d:6c:7a:06 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::e44e:2dff:fe6c:7a06/64 scope link
valid_lft forever preferred_lft forever
host-gw
1. Host 1 routing table
$ ip route | grep 10.244
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 10.1.10.10 dev ens33
2. Host 2 routing table
$ ip route | grep 10.244
10.244.0.0/24 via 10.1.10.9 dev ens33
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1
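The two tables capture the host-gw idea: each host reaches its own Pod subnet directly via cni0 and reaches every peer host's subnet via that host's physical IP. The kernel's longest-prefix-match lookup can be sketched with the standard ipaddress module (routes copied from host 1's table above):

```python
import ipaddress

# Host 1's Pod routes from the output above: (prefix, next hop / device).
routes = [
    ("10.244.0.0/24", "dev cni0"),        # local Pod subnet, L2-attached
    ("10.244.1.0/24", "via 10.1.10.10"),  # host 2's Pod subnet, via host 2
]

def lookup(dst: str) -> str:
    """Longest-prefix match over the route list, like the kernel FIB."""
    matches = [(ipaddress.ip_network(p), hop) for p, hop in routes
               if ipaddress.ip_address(dst) in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.244.1.14"))  # via 10.1.10.10 -- forwarded to host 2
print(lookup("10.244.0.9"))   # dev cni0 -- delivered on the local bridge
```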
3. Create the test application
kubectl apply -f busybox.yaml
4. List the created Pods
$ kubectl get pods -o wide --no-headers
test-7999578869-p5kbp 1/1 Running 0 6m47s 10.244.1.14 node-2 <none> <none>
test-7999578869-pkgtp 1/1 Running 0 4m31s 10.244.0.9 node-1 <none> <none>
5. Inspect the network interfaces on host 2
$ ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:2f:33:85 brd ff:ff:ff:ff:ff:ff
inet 10.1.10.10/24 brd 10.1.10.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe2f:3385/64 scope link
valid_lft forever preferred_lft forever
4: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether c6:06:e2:8f:2e:25 brd ff:ff:ff:ff:ff:ff
inet 10.244.1.1/24 brd 10.244.1.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::c406:e2ff:fe8f:2e25/64 scope link
valid_lft forever preferred_lft forever
6: veth1695e55f@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni0 state UP group default
link/ether c6:a8:ac:da:08:e7 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::c4a8:acff:feda:8e7/64 scope link
valid_lft forever preferred_lft forever
6. Open three terminals connected to host 2 and capture ICMP packets with tcpdump
sudo tcpdump -nei ens33 icmp
sudo tcpdump -nei cni0 icmp
sudo tcpdump -nei veth1695e55f icmp
7. On host 1, exec into the busybox container and ping the Pod IP on host 2
$ kubectl exec -it test-7999578869-pkgtp -- sh
/ # ping 10.244.1.14 -c2
PING 10.244.1.14 (10.244.1.14): 56 data bytes
64 bytes from 10.244.1.14: seq=0 ttl=62 time=0.739 ms
64 bytes from 10.244.1.14: seq=1 ttl=62 time=1.106 ms
--- 10.244.1.14 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.739/0.922/1.106 ms
8. Analyze the packets captured in the three terminals from step 6
$ sudo tcpdump -nei ens33 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
18:23:22.185063 00:0c:29:10:a9:6c > 00:0c:29:2f:33:85, ethertype IPv4 (0x0800), length 98: 10.244.0.9 > 10.244.1.14: ICMP echo request, id 11008, seq 0, length 64
18:23:22.185355 00:0c:29:2f:33:85 > 00:0c:29:10:a9:6c, ethertype IPv4 (0x0800), length 98: 10.244.1.14 > 10.244.0.9: ICMP echo reply, id 11008, seq 0, length 64
18:23:23.185863 00:0c:29:10:a9:6c > 00:0c:29:2f:33:85, ethertype IPv4 (0x0800), length 98: 10.244.0.9 > 10.244.1.14: ICMP echo request, id 11008, seq 1, length 64
18:23:23.186051 00:0c:29:2f:33:85 > 00:0c:29:10:a9:6c, ethertype IPv4 (0x0800), length 98: 10.244.1.14 > 10.244.0.9: ICMP echo reply, id 11008, seq 1, length 64
$ sudo tcpdump -nei cni0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on cni0, link-type EN10MB (Ethernet), capture size 262144 bytes
18:23:22.185150 c6:06:e2:8f:2e:25 > 36:3e:45:9e:50:a9, ethertype IPv4 (0x0800), length 98: 10.244.0.9 > 10.244.1.14: ICMP echo request, id 11008, seq 0, length 64
18:23:22.185344 36:3e:45:9e:50:a9 > c6:06:e2:8f:2e:25, ethertype IPv4 (0x0800), length 98: 10.244.1.14 > 10.244.0.9: ICMP echo reply, id 11008, seq 0, length 64
18:23:23.185957 c6:06:e2:8f:2e:25 > 36:3e:45:9e:50:a9, ethertype IPv4 (0x0800), length 98: 10.244.0.9 > 10.244.1.14: ICMP echo request, id 11008, seq 1, length 64
18:23:23.186042 36:3e:45:9e:50:a9 > c6:06:e2:8f:2e:25, ethertype IPv4 (0x0800), length 98: 10.244.1.14 > 10.244.0.9: ICMP echo reply, id 11008, seq 1, length 64
$ sudo tcpdump -nei veth1695e55f icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on veth1695e55f, link-type EN10MB (Ethernet), capture size 262144 bytes
18:23:22.185162 c6:06:e2:8f:2e:25 > 36:3e:45:9e:50:a9, ethertype IPv4 (0x0800), length 98: 10.244.0.9 > 10.244.1.14: ICMP echo request, id 11008, seq 0, length 64
18:23:22.185331 36:3e:45:9e:50:a9 > c6:06:e2:8f:2e:25, ethertype IPv4 (0x0800), length 98: 10.244.1.14 > 10.244.0.9: ICMP echo reply, id 11008, seq 0, length 64
18:23:23.185969 c6:06:e2:8f:2e:25 > 36:3e:45:9e:50:a9, ethertype IPv4 (0x0800), length 98: 10.244.0.9 > 10.244.1.14: ICMP echo request, id 11008, seq 1, length 64
18:23:23.186032 36:3e:45:9e:50:a9 > c6:06:e2:8f:2e:25, ethertype IPv4 (0x0800), length 98: 10.244.1.14 > 10.244.0.9: ICMP echo reply, id 11008, seq 1, length 64
Packets are captured on all three interfaces. 36:3e:45:9e:50:a9 is the MAC address of the Pod on host 2; c6:06:e2:8f:2e:25 is the MAC address of the Linux bridge cni0 on host 2.
Note | The default iptables rules apply globally; in the capture above, the forwarding between cni0 and veth1695e55f is driven by an iptables SNAT rule. |
9. On host 2, delete the route to host 1's Pod subnet
sudo ip r del 10.244.0.0/24 via 10.1.10.9
10. After a few seconds, check host 2's routing table again
$ ip r | grep 10.244
10.244.0.0/24 via 10.1.10.9 dev ens33
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1
Note | In flannel host-gw mode, flannel maintains the host routing table, so the deleted route is restored automatically. |
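Steps 9 and 10 demonstrate reconciliation: flanneld compares the kernel routing table against its desired route set and re-installs anything missing. A sketch of that control loop (illustrative only; real flanneld talks netlink, not Python sets):

```python
# Sketch of the route reconciliation implied by steps 9-10:
# flanneld re-adds host-gw routes that were removed out from under it.

desired = {"10.244.0.0/24 via 10.1.10.9"}   # host 2's peer route

def reconcile(kernel_routes: set) -> set:
    """Return the repaired table: desired routes re-installed."""
    return kernel_routes | (desired - kernel_routes)

table = {"10.244.1.0/24 dev cni0"}          # state after `ip r del ...`
table = reconcile(table)
print(sorted(table))                        # peer route is back
```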
Switching from host-gw to vxlan
1. Scale the CoreDNS Deployment to 0
kubectl scale -n kube-system deploy/coredns --replicas=0
2. Delete the host-gw deployment
kubectl delete -f kube-flannel-host-gw.yml
3. Create the vxlan deployment
kubectl apply -f kube-flannel.yml
4. Scale the CoreDNS Deployment back to 2
kubectl scale -n kube-system deploy/coredns --replicas=2
5. List all Pods
$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-74ff55c5b-chf5p 1/1 Running 0 2m9s 10.244.0.11 node-1 <none> <none>
kube-system coredns-74ff55c5b-rc24f 1/1 Running 0 93s 10.244.1.33 node-2 <none> <none>
kube-system etcd-node-1 1/1 Running 2 5h48m 10.1.10.9 node-1 <none> <none>
kube-system kube-apiserver-node-1 1/1 Running 2 5h48m 10.1.10.9 node-1 <none> <none>
kube-system kube-controller-manager-node-1 1/1 Running 2 5h48m 10.1.10.9 node-1 <none> <none>
kube-system kube-flannel-ds-tbnf5 1/1 Running 0 5m52s 10.1.10.9 node-1 <none> <none>
kube-system kube-flannel-ds-zm9d7 1/1 Running 0 5m52s 10.1.10.10 node-2 <none> <none>
kube-system kube-proxy-24l9w 1/1 Running 2 5h48m 10.1.10.9 node-1 <none> <none>
kube-system kube-proxy-gsdwh 1/1 Running 1 4h30m 10.1.10.10 node-2 <none> <none>
kube-system kube-scheduler-node-1 1/1 Running 2 5h48m 10.1.10.9 node-1 <none> <none>
6. Inspect host 1's network interfaces
$ ip a
...
4: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
link/ether 86:37:cf:70:96:3d brd ff:ff:ff:ff:ff:ff
inet 10.244.0.1/24 brd 10.244.0.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::8437:cfff:fe70:963d/64 scope link
valid_lft forever preferred_lft forever
7: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 16:e0:b5:75:8c:4b brd ff:ff:ff:ff:ff:ff
inet 10.244.0.0/32 brd 10.244.0.0 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::14e0:b5ff:fe75:8c4b/64 scope link
valid_lft forever preferred_lft forever
9: veth2053e67d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
link/ether 86:92:b2:8b:1f:2f brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::8492:b2ff:fe8b:1f2f/64 scope link
valid_lft forever preferred_lft forever
7. Inspect host 2's network interfaces
$ ip a
...
4: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
link/ether 86:37:cf:70:96:3d brd ff:ff:ff:ff:ff:ff
inet 10.244.0.1/24 brd 10.244.0.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::8437:cfff:fe70:963d/64 scope link
valid_lft forever preferred_lft forever
7: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 16:e0:b5:75:8c:4b brd ff:ff:ff:ff:ff:ff
inet 10.244.0.0/32 brd 10.244.0.0 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::14e0:b5ff:fe75:8c4b/64 scope link
valid_lft forever preferred_lft forever
9: veth2053e67d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
link/ether 86:92:b2:8b:1f:2f brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::8492:b2ff:fe8b:1f2f/64 scope link
valid_lft forever preferred_lft forever
VxLAN Debugging
1. Create the test application
kubectl apply -f busybox.yaml
2. List the created Pods
$ kubectl get pods -o wide --no-headers
test-7999578869-k4bn8 1/1 Running 0 63s 10.244.0.12 node-1 <none> <none>
test-7999578869-mlk49 1/1 Running 0 4m14s 10.244.1.34 node-2 <none> <none>
3. Check the VxLAN UDP port on host 2
$ sudo netstat -antulop | grep 8472
udp 0 0 0.0.0.0:8472 0.0.0.0:* - off (0.00/0/0)
4. Inspect the network interfaces on host 2
$ ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:2f:33:85 brd ff:ff:ff:ff:ff:ff
inet 10.1.10.10/24 brd 10.1.10.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe2f:3385/64 scope link
valid_lft forever preferred_lft forever
4: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
link/ether c6:06:e2:8f:2e:25 brd ff:ff:ff:ff:ff:ff
inet 10.244.1.1/24 brd 10.244.1.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::c406:e2ff:fe8f:2e25/64 scope link
valid_lft forever preferred_lft forever
17: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 0e:46:36:ac:f6:d6 brd ff:ff:ff:ff:ff:ff
inet 10.244.1.0/32 brd 10.244.1.0 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::c46:36ff:feac:f6d6/64 scope link
valid_lft forever preferred_lft forever
27: veth470beb22@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
link/ether 52:b3:aa:80:1e:c4 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::50b3:aaff:fe80:1ec4/64 scope link
valid_lft forever preferred_lft forever
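Note that cni0 and flannel.1 report mtu 1450 while ens33 stays at 1500: VXLAN encapsulation wraps each inner Ethernet frame (14 bytes) in a VXLAN header (8), UDP header (8), and outer IPv4 header (20), so flannel lowers the overlay MTU by 50 bytes to avoid fragmenting the UDP carrier. A quick check of that arithmetic:

```python
# VXLAN overhead accounting for the mtu 1450 seen on cni0/flannel.1 above.
PHYS_MTU  = 1500  # ens33 MTU: max outer IPv4 packet size
OUTER_IP  = 20    # outer IPv4 header
OUTER_UDP = 8     # UDP header (port 8472 in the capture)
VXLAN_HDR = 8     # VXLAN header (flags, VNI)
INNER_ETH = 14    # encapsulated inner Ethernet header

overlay_mtu = PHYS_MTU - (OUTER_IP + OUTER_UDP + VXLAN_HDR + INNER_ETH)
print(overlay_mtu)  # 1450 -- matches cni0 / flannel.1
```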
5. Open five terminals on host 2 and run the following capture commands, one per terminal
sudo tcpdump -nei ens33 port 8472
sudo tcpdump -nei ens33 icmp
sudo tcpdump -nei cni0 icmp
sudo tcpdump -nei flannel.1 icmp
sudo tcpdump -nei veth470beb22 icmp
6. From the Pod on host 1, ping the IP of the Pod on host 2
$ kubectl exec -it test-7999578869-k4bn8 -- sh
/ # ping 10.244.1.34 -c2
PING 10.244.1.34 (10.244.1.34): 56 data bytes
64 bytes from 10.244.1.34: seq=0 ttl=62 time=0.657 ms
64 bytes from 10.244.1.34: seq=1 ttl=62 time=0.859 ms
--- 10.244.1.34 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.657/0.758/0.859 ms
7. Examine the output from step 5
$ sudo tcpdump -nei ens33 port 8472
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
19:59:38.867705 00:0c:29:10:a9:6c > 00:0c:29:2f:33:85, ethertype IPv4 (0x0800), length 148: 10.1.10.9.36389 > 10.1.10.10.8472: OTV, flags [I] (0x08), overlay 0, instance 1
16:e0:b5:75:8c:4b > 0e:46:36:ac:f6:d6, ethertype IPv4 (0x0800), length 98: 10.244.0.12 > 10.244.1.34: ICMP echo request, id 8448, seq 0, length 64
19:59:38.867967 00:0c:29:2f:33:85 > 00:0c:29:10:a9:6c, ethertype IPv4 (0x0800), length 148: 10.1.10.10.58430 > 10.1.10.9.8472: OTV, flags [I] (0x08), overlay 0, instance 1
0e:46:36:ac:f6:d6 > 16:e0:b5:75:8c:4b, ethertype IPv4 (0x0800), length 98: 10.244.1.34 > 10.244.0.12: ICMP echo reply, id 8448, seq 0, length 64
19:59:39.868638 00:0c:29:10:a9:6c > 00:0c:29:2f:33:85, ethertype IPv4 (0x0800), length 148: 10.1.10.9.36389 > 10.1.10.10.8472: OTV, flags [I] (0x08), overlay 0, instance 1
16:e0:b5:75:8c:4b > 0e:46:36:ac:f6:d6, ethertype IPv4 (0x0800), length 98: 10.244.0.12 > 10.244.1.34: ICMP echo request, id 8448, seq 1, length 64
19:59:39.868907 00:0c:29:2f:33:85 > 00:0c:29:10:a9:6c, ethertype IPv4 (0x0800), length 148: 10.1.10.10.58430 > 10.1.10.9.8472: OTV, flags [I] (0x08), overlay 0, instance 1
0e:46:36:ac:f6:d6 > 16:e0:b5:75:8c:4b, ethertype IPv4 (0x0800), length 98: 10.244.1.34 > 10.244.0.12: ICMP echo reply, id 8448, seq 1, length 64
$ sudo tcpdump -nei ens33 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
$ sudo tcpdump -nei cni0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on cni0, link-type EN10MB (Ethernet), capture size 262144 bytes
19:59:38.867909 c6:06:e2:8f:2e:25 > ee:8d:f9:4a:25:7d, ethertype IPv4 (0x0800), length 98: 10.244.0.12 > 10.244.1.34: ICMP echo request, id 8448, seq 0, length 64
19:59:38.867941 ee:8d:f9:4a:25:7d > c6:06:e2:8f:2e:25, ethertype IPv4 (0x0800), length 98: 10.244.1.34 > 10.244.0.12: ICMP echo reply, id 8448, seq 0, length 64
19:59:39.868857 c6:06:e2:8f:2e:25 > ee:8d:f9:4a:25:7d, ethertype IPv4 (0x0800), length 98: 10.244.0.12 > 10.244.1.34: ICMP echo request, id 8448, seq 1, length 64
19:59:39.868890 ee:8d:f9:4a:25:7d > c6:06:e2:8f:2e:25, ethertype IPv4 (0x0800), length 98: 10.244.1.34 > 10.244.0.12: ICMP echo reply, id 8448, seq 1, length 64
$ sudo tcpdump -nei flannel.1 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes
19:59:38.867886 16:e0:b5:75:8c:4b > 0e:46:36:ac:f6:d6, ethertype IPv4 (0x0800), length 98: 10.244.0.12 > 10.244.1.34: ICMP echo request, id 8448, seq 0, length 64
19:59:38.867955 0e:46:36:ac:f6:d6 > 16:e0:b5:75:8c:4b, ethertype IPv4 (0x0800), length 98: 10.244.1.34 > 10.244.0.12: ICMP echo reply, id 8448, seq 0, length 64
19:59:39.868837 16:e0:b5:75:8c:4b > 0e:46:36:ac:f6:d6, ethertype IPv4 (0x0800), length 98: 10.244.0.12 > 10.244.1.34: ICMP echo request, id 8448, seq 1, length 64
19:59:39.868896 0e:46:36:ac:f6:d6 > 16:e0:b5:75:8c:4b, ethertype IPv4 (0x0800), length 98: 10.244.1.34 > 10.244.0.12: ICMP echo reply, id 8448, seq 1, length 64
$ sudo tcpdump -nei veth470beb22 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on veth470beb22, link-type EN10MB (Ethernet), capture size 262144 bytes
19:59:38.867916 c6:06:e2:8f:2e:25 > ee:8d:f9:4a:25:7d, ethertype IPv4 (0x0800), length 98: 10.244.0.12 > 10.244.1.34: ICMP echo request, id 8448, seq 0, length 64
19:59:38.867936 ee:8d:f9:4a:25:7d > c6:06:e2:8f:2e:25, ethertype IPv4 (0x0800), length 98: 10.244.1.34 > 10.244.0.12: ICMP echo reply, id 8448, seq 0, length 64
19:59:39.868866 c6:06:e2:8f:2e:25 > ee:8d:f9:4a:25:7d, ethertype IPv4 (0x0800), length 98: 10.244.0.12 > 10.244.1.34: ICMP echo request, id 8448, seq 1, length 64
19:59:39.868885 ee:8d:f9:4a:25:7d > c6:06:e2:8f:2e:25, ethertype IPv4 (0x0800), length 98: 10.244.1.34 > 10.244.0.12: ICMP echo reply, id 8448, seq 1, length 64
8. Related debugging commands
ip r
ip n
bridge fdb
brctl show
Reset Flannel VxLAN
1. kubeadm reset
sudo kubeadm reset
sudo rm $HOME/.kube/config
Note | sudo kubeadm reset must also be run on the worker nodes. |
2. Flush the iptables rules
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
Calico
Calico Configuration
1. Initialize K8S
sudo kubeadm init --pod-network-cidr=10.100.0.0/16
2. Verify the installation
$ kubectl get nodes --no-headers
node-1 Ready control-plane,master 13m v1.20.5
node-2 Ready <none> 110s v1.20.5
$ kubectl get pods --all-namespaces --no-headers
kube-system coredns-74ff55c5b-4khp6 0/1 ContainerCreating 0 13m
kube-system coredns-74ff55c5b-vkb45 0/1 ContainerCreating 0 13m
kube-system etcd-node-1 1/1 Running 0 14m
kube-system kube-apiserver-node-1 1/1 Running 0 14m
kube-system kube-controller-manager-node-1 1/1 Running 0 14m
kube-system kube-proxy-6mcfd 1/1 Running 0 2m22s
kube-system kube-proxy-qfdjp 1/1 Running 0 13m
kube-system kube-scheduler-node-1 1/1 Running 0 14m
3. Install Calico with CALICO_IPV4POOL_CIDR set to 10.100.0.0/16 and CALICO_IPV4POOL_IPIP set to Never
kubectl apply -f calico.yaml
4. Verify the network installation
$ kubectl get pods --all-namespaces -o wide --no-headers
kube-system calico-kube-controllers-69496d8b75-phggg 1/1 Running 0 3m27s 10.100.247.0 node-2 <none> <none>
kube-system calico-node-6prgd 1/1 Running 0 3m27s 10.1.10.10 node-2 <none> <none>
kube-system calico-node-lz8kx 1/1 Running 0 3m27s 10.1.10.9 node-1 <none> <none>
kube-system coredns-74ff55c5b-rq6l6 1/1 Running 0 64s 10.100.247.1 node-2 <none> <none>
kube-system coredns-74ff55c5b-wwmkq 1/1 Running 0 4m15s 10.100.84.128 node-1 <none> <none>
kube-system etcd-node-1 1/1 Running 0 4m31s 10.1.10.9 node-1 <none> <none>
kube-system kube-apiserver-node-1 1/1 Running 0 4m31s 10.1.10.9 node-1 <none> <none>
kube-system kube-controller-manager-node-1 1/1 Running 0 4m31s 10.1.10.9 node-1 <none> <none>
kube-system kube-proxy-bhdwp 1/1 Running 0 4m15s 10.1.10.9 node-1 <none> <none>
kube-system kube-proxy-mfww5 1/1 Running 0 4m7s 10.1.10.10 node-2 <none> <none>
kube-system kube-scheduler-node-1 1/1 Running 0 4m31s 10.1.10.9 node-1 <none> <none>
5. Install calicoctl
curl -O -L https://github.com/projectcalico/calicoctl/releases/download/v3.18.1/calicoctl
chmod a+x calicoctl
sudo cp calicoctl /usr/local/bin/
6. List nodes with calicoctl
$ calicoctl get nodes -o wide
NAME ASN IPV4 IPV6
node-1 (64512) 10.1.10.9/24
node-2 (64512) 10.1.10.10/24
7. Check the BGP full mesh with calicoctl
$ sudo calicoctl node status
Calico process is running.
IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+-------------------+-------+----------+-------------+
| 10.1.10.10 | node-to-node mesh | up | 03:18:42 | Established |
+--------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
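In the default node-to-node mesh, every node peers with every other node (all sharing the private AS 64512 shown above), so the number of BGP sessions grows quadratically with cluster size — the usual reason large clusters move to route reflectors. The count is simple combinatorics:

```python
# BGP session count in Calico's default node-to-node full mesh.
def mesh_sessions(nodes: int) -> int:
    """Each unordered node pair maintains one BGP session."""
    return nodes * (nodes - 1) // 2

print(mesh_sessions(2))    # 1  -- the single peer shown above
print(mesh_sessions(100))  # 4950 -- why route reflectors exist
```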
Inspecting the Default Pure BGP Network
1. Container IPs
$ kubectl get pods --all-namespaces -o wide | grep 10.100
kube-system calico-kube-controllers-69496d8b75-phggg 1/1 Running 0 26m 10.100.247.0 node-2 <none> <none>
kube-system coredns-74ff55c5b-rq6l6 1/1 Running 0 24m 10.100.247.1 node-2 <none> <none>
kube-system coredns-74ff55c5b-wwmkq 1/1 Running 0 27m 10.100.84.128 node-1 <none> <none>
2. node-1 interfaces
$ ip a
...
6: califee22f61266@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
3. node-2 interfaces
$ ip a
...
6: cali8de5ff87b7c@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
7: cali1e472607f9f@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
4. node-1 routing table
$ ip r show
10.100.84.128 dev califee22f61266 scope link
blackhole 10.100.84.128/26 proto bird
10.100.247.0/26 via 10.1.10.10 dev ens33 proto bird
5. node-2 routing table
$ ip r show
10.100.84.128/26 via 10.1.10.9 dev ens33 proto bird
10.100.247.0 dev cali8de5ff87b7c scope link
blackhole 10.100.247.0/26 proto bird
10.100.247.1 dev cali1e472607f9f scope link
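Unlike flannel's per-node /24, Calico's IPAM hands each node /26 blocks carved from the 10.100.0.0/16 pool — hence the blackhole /26 plus per-Pod /32 host routes above. Which block a Pod IP falls into can be checked with ipaddress (block values taken from the routing tables above):

```python
import ipaddress

# The two /26 IPAM blocks visible in the routing tables above.
blocks = {
    "node-1": ipaddress.ip_network("10.100.84.128/26"),
    "node-2": ipaddress.ip_network("10.100.247.0/26"),
}

def owning_node(pod_ip: str) -> str:
    """Return the node whose Calico IPAM block contains pod_ip."""
    addr = ipaddress.ip_address(pod_ip)
    for node, block in blocks.items():
        if addr in block:
            return node
    raise LookupError(f"{pod_ip} is outside every known block")

print(owning_node("10.100.84.128"))  # node-1 (the coredns Pod above)
print(owning_node("10.100.247.1"))   # node-2
```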
Pure BGP Debugging
1. Create the Pods
kubectl apply -f busybox.yaml
2. List the created Pods
$ kubectl get pods -o wide --no-headers
test-7999578869-9nc25 1/1 Running 0 2m10s 10.100.84.130 node-1 <none> <none>
test-7999578869-9vbv6 1/1 Running 0 6m59s 10.100.247.3 node-2 <none> <none>
test-7999578869-b26kk 1/1 Running 0 6m59s 10.100.247.2 node-2 <none> <none>
3. New virtual interface on node-1
$ ip a
...
8: calida8798d2ad7@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
4. New virtual interfaces on node-2
$ ip a
...
8: cali70df284fc76@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
9: cali79d4f06640e@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
5. Pod IP address on node-1
$ kubectl exec -it test-7999578869-9nc25 -- sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
3: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 56:40:a9:1a:11:5c brd ff:ff:ff:ff:ff:ff
inet 10.100.84.130/32 brd 10.100.84.130 scope global eth0
valid_lft forever preferred_lft forever
6. Pod default route on node-1
$ kubectl exec -it test-7999578869-9nc25 -- sh
/ # ip r
default via 169.254.1.1 dev eth0
169.254.1.1 dev eth0 scope link
7. From the node-1 Pod, ping the node-2 Pod IP
$ kubectl exec -it test-7999578869-9nc25 -- sh
/ # ping 10.100.247.3 -c3
PING 10.100.247.3 (10.100.247.3): 56 data bytes
64 bytes from 10.100.247.3: seq=0 ttl=62 time=0.749 ms
64 bytes from 10.100.247.3: seq=1 ttl=62 time=0.334 ms
64 bytes from 10.100.247.3: seq=2 ttl=62 time=0.680 ms
8. Check ARP from the node-1 Pod
$ kubectl exec -it test-7999578869-9nc25 -- sh
/ # arping 169.254.1.1
ARPING 169.254.1.1 from 10.100.84.130 eth0
Unicast reply from 169.254.1.1 [ee:ee:ee:ee:ee:ee] 0.017ms
Unicast reply from 169.254.1.1 [ee:ee:ee:ee:ee:ee] 0.020ms
Unicast reply from 169.254.1.1 [ee:ee:ee:ee:ee:ee] 0.029ms
Unicast reply from 169.254.1.1 [ee:ee:ee:ee:ee:ee] 0.030ms
9. Capture packets on node-1 with tcpdump
$ sudo tcpdump -nei calida8798d2ad7
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on calida8798d2ad7, link-type EN10MB (Ethernet), capture size 262144 bytes
12:44:11.298318 56:40:a9:1a:11:5c > ee:ee:ee:ee:ee:ee, ethertype ARP (0x0806), length 42: Request who-has 169.254.1.1 (ee:ee:ee:ee:ee:ee) tell 10.100.84.130, length 28
12:44:11.298334 ee:ee:ee:ee:ee:ee > 56:40:a9:1a:11:5c, ethertype ARP (0x0806), length 42: Reply 169.254.1.1 is-at ee:ee:ee:ee:ee:ee, length 28
12:44:12.298813 56:40:a9:1a:11:5c > ee:ee:ee:ee:ee:ee, ethertype ARP (0x0806), length 42: Request who-has 169.254.1.1 (ee:ee:ee:ee:ee:ee) tell 10.100.84.130, length 28
12:44:12.298827 ee:ee:ee:ee:ee:ee > 56:40:a9:1a:11:5c, ethertype ARP (0x0806), length 42: Reply 169.254.1.1 is-at ee:ee:ee:ee:ee:ee, length 28
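The capture shows Calico's gateway trick: the Pod's only route points at the link-local address 169.254.1.1, which is not assigned anywhere — instead, the host side of the cali* veth has proxy ARP enabled and answers the Pod's ARP request with the fixed MAC ee:ee:ee:ee:ee:ee, pulling all Pod traffic up into the host's routing table. A toy model of that reply behavior (an illustration, not Calico code):

```python
# Toy model of the proxy-ARP reply seen in the capture above: the host
# veth answers the Pod's gateway ARP with its own fixed MAC, so the Pod
# sends every packet to the host regardless of destination.

CALI_MAC = "ee:ee:ee:ee:ee:ee"  # MAC on every cali* interface above

def proxy_arp_reply(requested_ip: str) -> str:
    """The host veth's 'is-at' reply -- always the same MAC."""
    return f"{requested_ip} is-at {CALI_MAC}"

print(proxy_arp_reply("169.254.1.1"))  # the Pod's default gateway
```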
Pod-to-Pod Communication Across Hosts
10. Filter ICMP packets with tcpdump on node-1
$ sudo tcpdump -nei calida8798d2ad7 icmp
$ sudo tcpdump -nei ens33 icmp
11. Filter ICMP packets with tcpdump on node-2
$ sudo tcpdump -nei ens33 icmp
$ sudo tcpdump -nei cali79d4f06640e icmp
13. From the node-1 Pod, ping the node-2 Pod IP
$ kubectl exec -it test-7999578869-9nc25 -- sh
/ # ping 10.100.247.3 -c3
PING 10.100.247.3 (10.100.247.3): 56 data bytes
64 bytes from 10.100.247.3: seq=0 ttl=62 time=0.749 ms
64 bytes from 10.100.247.3: seq=1 ttl=62 time=0.334 ms
64 bytes from 10.100.247.3: seq=2 ttl=62 time=0.680 ms
14. Examine the node-1 capture results from step 10
$ sudo tcpdump -nei calida8798d2ad7 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on calida8798d2ad7, link-type EN10MB (Ethernet), capture size 262144 bytes
12:57:34.031776 56:40:a9:1a:11:5c > ee:ee:ee:ee:ee:ee, ethertype IPv4 (0x0800), length 98: 10.100.84.130 > 10.100.247.3: ICMP echo request, id 8960, seq 0, length 64
12:57:34.032503 ee:ee:ee:ee:ee:ee > 56:40:a9:1a:11:5c, ethertype IPv4 (0x0800), length 98: 10.100.247.3 > 10.100.84.130: ICMP echo reply, id 8960, seq 0, length 64
12:57:35.032249 56:40:a9:1a:11:5c > ee:ee:ee:ee:ee:ee, ethertype IPv4 (0x0800), length 98: 10.100.84.130 > 10.100.247.3: ICMP echo request, id 8960, seq 1, length 64
12:57:35.033223 ee:ee:ee:ee:ee:ee > 56:40:a9:1a:11:5c, ethertype IPv4 (0x0800), length 98: 10.100.247.3 > 10.100.84.130: ICMP echo reply, id 8960, seq 1, length 64
12:57:36.033044 56:40:a9:1a:11:5c > ee:ee:ee:ee:ee:ee, ethertype IPv4 (0x0800), length 98: 10.100.84.130 > 10.100.247.3: ICMP echo request, id 8960, seq 2, length 64
12:57:36.033983 ee:ee:ee:ee:ee:ee > 56:40:a9:1a:11:5c, ethertype IPv4 (0x0800), length 98: 10.100.247.3 > 10.100.84.130: ICMP echo reply, id 8960, seq 2, length 64
$ sudo tcpdump -nei ens33 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
12:57:34.031810 00:0c:29:10:a9:6c > 00:0c:29:2f:33:85, ethertype IPv4 (0x0800), length 98: 10.100.84.130 > 10.100.247.3: ICMP echo request, id 8960, seq 0, length 64
12:57:34.032483 00:0c:29:2f:33:85 > 00:0c:29:10:a9:6c, ethertype IPv4 (0x0800), length 98: 10.100.247.3 > 10.100.84.130: ICMP echo reply, id 8960, seq 0, length 64
12:57:35.032283 00:0c:29:10:a9:6c > 00:0c:29:2f:33:85, ethertype IPv4 (0x0800), length 98: 10.100.84.130 > 10.100.247.3: ICMP echo request, id 8960, seq 1, length 64
12:57:35.033144 00:0c:29:2f:33:85 > 00:0c:29:10:a9:6c, ethertype IPv4 (0x0800), length 98: 10.100.247.3 > 10.100.84.130: ICMP echo reply, id 8960, seq 1, length 64
12:57:36.033150 00:0c:29:10:a9:6c > 00:0c:29:2f:33:85, ethertype IPv4 (0x0800), length 98: 10.100.84.130 > 10.100.247.3: ICMP echo request, id 8960, seq 2, length 64
12:57:36.033946 00:0c:29:2f:33:85 > 00:0c:29:10:a9:6c, ethertype IPv4 (0x0800), length 98: 10.100.247.3 > 10.100.84.130: ICMP echo reply, id 8960, seq 2, length 64
15. Examine the node-2 capture results from step 11
$ sudo tcpdump -nei ens33 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
12:57:34.032749 00:0c:29:10:a9:6c > 00:0c:29:2f:33:85, ethertype IPv4 (0x0800), length 98: 10.100.84.130 > 10.100.247.3: ICMP echo request, id 8960, seq 0, length 64
12:57:34.033057 00:0c:29:2f:33:85 > 00:0c:29:10:a9:6c, ethertype IPv4 (0x0800), length 98: 10.100.247.3 > 10.100.84.130: ICMP echo reply, id 8960, seq 0, length 64
12:57:35.033352 00:0c:29:10:a9:6c > 00:0c:29:2f:33:85, ethertype IPv4 (0x0800), length 98: 10.100.84.130 > 10.100.247.3: ICMP echo request, id 8960, seq 1, length 64
12:57:35.033699 00:0c:29:2f:33:85 > 00:0c:29:10:a9:6c, ethertype IPv4 (0x0800), length 98: 10.100.247.3 > 10.100.84.130: ICMP echo reply, id 8960, seq 1, length 64
12:57:36.034305 00:0c:29:10:a9:6c > 00:0c:29:2f:33:85, ethertype IPv4 (0x0800), length 98: 10.100.84.130 > 10.100.247.3: ICMP echo request, id 8960, seq 2, length 64
12:57:36.034532 00:0c:29:2f:33:85 > 00:0c:29:10:a9:6c, ethertype IPv4 (0x0800), length 98: 10.100.247.3 > 10.100.84.130: ICMP echo reply, id 8960, seq 2, length 64
$ sudo tcpdump -nei cali79d4f06640e icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on cali79d4f06640e, link-type EN10MB (Ethernet), capture size 262144 bytes
12:57:34.032891 ee:ee:ee:ee:ee:ee > 06:88:3d:d0:fa:c7, ethertype IPv4 (0x0800), length 98: 10.100.84.130 > 10.100.247.3: ICMP echo request, id 8960, seq 0, length 64
12:57:34.033043 06:88:3d:d0:fa:c7 > ee:ee:ee:ee:ee:ee, ethertype IPv4 (0x0800), length 98: 10.100.247.3 > 10.100.84.130: ICMP echo reply, id 8960, seq 0, length 64
12:57:35.033451 ee:ee:ee:ee:ee:ee > 06:88:3d:d0:fa:c7, ethertype IPv4 (0x0800), length 98: 10.100.84.130 > 10.100.247.3: ICMP echo request, id 8960, seq 1, length 64
12:57:35.033672 06:88:3d:d0:fa:c7 > ee:ee:ee:ee:ee:ee, ethertype IPv4 (0x0800), length 98: 10.100.247.3 > 10.100.84.130: ICMP echo reply, id 8960, seq 1, length 64
12:57:36.034398 ee:ee:ee:ee:ee:ee > 06:88:3d:d0:fa:c7, ethertype IPv4 (0x0800), length 98: 10.100.84.130 > 10.100.247.3: ICMP echo request, id 8960, seq 2, length 64
12:57:36.034521 06:88:3d:d0:fa:c7 > ee:ee:ee:ee:ee:ee, ethertype IPv4 (0x0800), length 98: 10.100.247.3 > 10.100.84.130: ICMP echo reply, id 8960, seq 2, length 64
16. Communication path between Pods on different hosts
node-1/calida8798d2ad7 → node-1/ens33 → node-2/ens33 → node-2/cali79d4f06640e
Pod-to-Pod Communication on the Same Host
17. Exec into the node-2 Pod and check its IP
$ kubectl exec -it test-7999578869-b26kk -- sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
3: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether f6:f3:fd:41:df:e6 brd ff:ff:ff:ff:ff:ff
inet 10.100.247.2/32 brd 10.100.247.2 scope global eth0
valid_lft forever preferred_lft forever
18. Filter ICMP with tcpdump on node-2
sudo tcpdump -nei ens33 icmp
sudo tcpdump -nei cali70df284fc76 icmp
sudo tcpdump -nei cali79d4f06640e icmp
19. From the node-2 Pod, ping the Pod on the same node
$ kubectl exec -it test-7999578869-b26kk -- sh
/ # ping 10.100.247.3 -c2
PING 10.100.247.3 (10.100.247.3): 56 data bytes
64 bytes from 10.100.247.3: seq=0 ttl=63 time=0.533 ms
64 bytes from 10.100.247.3: seq=1 ttl=63 time=0.375 ms
20. Inspect the captures from step 18
$ sudo tcpdump -nei ens33 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
$ sudo tcpdump -nei cali70df284fc76 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on cali70df284fc76, link-type EN10MB (Ethernet), capture size 262144 bytes
13:11:32.741409 f6:f3:fd:41:df:e6 > ee:ee:ee:ee:ee:ee, ethertype IPv4 (0x0800), length 98: 10.100.247.2 > 10.100.247.3: ICMP echo request, id 5888, seq 0, length 64
13:11:32.741485 ee:ee:ee:ee:ee:ee > f6:f3:fd:41:df:e6, ethertype IPv4 (0x0800), length 98: 10.100.247.3 > 10.100.247.2: ICMP echo reply, id 5888, seq 0, length 64
13:11:33.742592 f6:f3:fd:41:df:e6 > ee:ee:ee:ee:ee:ee, ethertype IPv4 (0x0800), length 98: 10.100.247.2 > 10.100.247.3: ICMP echo request, id 5888, seq 1, length 64
13:11:33.742668 ee:ee:ee:ee:ee:ee > f6:f3:fd:41:df:e6, ethertype IPv4 (0x0800), length 98: 10.100.247.3 > 10.100.247.2: ICMP echo reply, id 5888, seq 1, length 64
$ sudo tcpdump -nei cali79d4f06640e icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on cali79d4f06640e, link-type EN10MB (Ethernet), capture size 262144 bytes
13:11:32.741452 ee:ee:ee:ee:ee:ee > 06:88:3d:d0:fa:c7, ethertype IPv4 (0x0800), length 98: 10.100.247.2 > 10.100.247.3: ICMP echo request, id 5888, seq 0, length 64
13:11:32.741476 06:88:3d:d0:fa:c7 > ee:ee:ee:ee:ee:ee, ethertype IPv4 (0x0800), length 98: 10.100.247.3 > 10.100.247.2: ICMP echo reply, id 5888, seq 0, length 64
13:11:33.742637 ee:ee:ee:ee:ee:ee > 06:88:3d:d0:fa:c7, ethertype IPv4 (0x0800), length 98: 10.100.247.2 > 10.100.247.3: ICMP echo request, id 5888, seq 1, length 64
13:11:33.742658 06:88:3d:d0:fa:c7 > ee:ee:ee:ee:ee:ee, ethertype IPv4 (0x0800), length 98: 10.100.247.3 > 10.100.247.2: ICMP echo reply, id 5888, seq 1, length 64
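Two details in step 20 are worth spelling out: the ens33 capture stayed empty, because same-node traffic never reaches the physical NIC; and the ping replies arrived with ttl=63, showing the packet was still routed once (Calico forwards Pod traffic at L3 even on the same node, unlike a bridge). The TTL arithmetic, starting from busybox's typical Linux initial TTL of 64:

```shell
START_TTL=64          # typical Linux initial TTL
SAME_NODE_HOPS=1      # one routed hop: cali veth -> host -> cali veth
CROSS_NODE_HOPS=2     # routed on the source node and again on the destination node

echo "same-node ttl:  $(( START_TTL - SAME_NODE_HOPS ))"   # matches ttl=63 above
echo "cross-node ttl: $(( START_TTL - CROSS_NODE_HOPS ))"  # matches ttl=62 in the cross-node pings
```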
21. Delete the busybox debug Pods
kubectl delete -f busybox.yaml
Switching to IP-in-IP mode
1. Check IPIPMODE
calicoctl get ipPool -o wide
NAME CIDR NAT IPIPMODE VXLANMODE DISABLED SELECTOR
default-ipv4-ippool 10.100.0.0/16 true Never Never false all()
2. Adjust it: export the pool, set spec.ipipMode to Always in the editor, and re-apply
$ calicoctl get ipPool -o yaml > calico_ipPool.yaml
$ vim calico_ipPool.yaml
$ calicoctl apply -f calico_ipPool.yaml
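The vim step above changes a single field, spec.ipipMode, from Never to Always. A scripted equivalent of that edit might look like the following sketch; the manifest here is a minimal stand-in for the real file exported by calicoctl, with field names and values taken from this walkthrough:

```shell
# Minimal stand-in for the manifest exported by `calicoctl get ipPool -o yaml`
# (structure and values as shown in this walkthrough).
cat > calico_ipPool.yaml <<'EOF'
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 10.100.0.0/16
  ipipMode: Never
  natOutgoing: true
  vxlanMode: Never
EOF

# Flip only the ipipMode field to Always, leaving vxlanMode untouched.
sed -i 's/^\(  ipipMode: \)Never/\1Always/' calico_ipPool.yaml
grep ipipMode calico_ipPool.yaml
```

After the edit, `calicoctl apply -f calico_ipPool.yaml` pushes the change, as in step 2 above.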
3. Check IPIPMODE again
$ calicoctl get ipPool -o wide
NAME CIDR NAT IPIPMODE VXLANMODE DISABLED SELECTOR
default-ipv4-ippool 10.100.0.0/16 true Always Never false all()
4. On node-1, a new virtual interface appears
11: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 10.100.84.131/32 scope global tunl0
valid_lft forever preferred_lft forever
5. On node-2, a new virtual interface appears
30: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 10.100.247.22/32 scope global tunl0
valid_lft forever preferred_lft forever
6. Inspect the tunnel on node-1
$ ip -d addr show dev tunl0
11: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0 promiscuity 0
ipip any remote any local any ttl inherit nopmtudisc numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
inet 10.100.84.131/32 scope global tunl0
valid_lft forever preferred_lft forever
7. Inspect the tunnel on node-2
$ ip -d addr show dev tunl0
30: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0 promiscuity 0
ipip any remote any local any ttl inherit nopmtudisc numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
inet 10.100.247.22/32 scope global tunl0
valid_lft forever preferred_lft forever
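The MTU of 1480 on tunl0 is no accident: IPIP encapsulation prepends one extra 20-byte IPv4 header, so Calico shrinks the tunnel (and, as seen below, the Pods' eth0) to fit inside the 1500-byte MTU of ens33:

```shell
PHYS_MTU=1500   # ens33
OUTER_IP=20     # extra IPv4 header added by IPIP encapsulation

echo "tunnel mtu: $(( PHYS_MTU - OUTER_IP ))"   # 1480, as shown on tunl0
```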
IP-in-IP debugging
1. Create the test application
kubectl apply -f busybox.yaml
2. List the created Pods
$ kubectl get pods -o wide --no-headers
test-7999578869-jr2w9 1/1 Running 0 2m4s 10.100.247.24 node-2 <none> <none>
test-7999578869-kx4hp 1/1 Running 0 95s 10.100.84.132 node-1 <none> <none>
test-7999578869-xkhlz 1/1 Running 0 2m4s 10.100.247.23 node-2 <none> <none>
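The Pod IPs are not random: Calico's IPAM carves the 10.100.0.0/16 pool into per-node blocks (/26, i.e. 64 addresses each, by default), which is why node-1 Pods cluster around 10.100.84.x and node-2 Pods around 10.100.247.x. A membership check for one of the Pods above; the block boundary 10.100.84.128/26 is inferred from the addresses seen in this walkthrough, not read from the cluster:

```shell
# Convert a dotted quad to an integer.
ip2int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

pod=$(ip2int 10.100.84.132)    # test-7999578869-kx4hp on node-1
base=$(ip2int 10.100.84.128)   # start of node-1's assumed /26 block

# A /26 covers 64 consecutive addresses.
if [ "$pod" -ge "$base" ] && [ "$pod" -lt $(( base + 64 )) ]; then
  echo "10.100.84.132 is inside 10.100.84.128/26"
fi
```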
3. node-1 interfaces
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:10:a9:6c brd ff:ff:ff:ff:ff:ff
inet 10.1.10.9/24 brd 10.1.10.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe10:a96c/64 scope link
valid_lft forever preferred_lft forever
11: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 10.100.84.131/32 scope global tunl0
valid_lft forever preferred_lft forever
12: cali14de3927136@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
4. node-2 interfaces
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:2f:33:85 brd ff:ff:ff:ff:ff:ff
inet 10.1.10.10/24 brd 10.1.10.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe2f:3385/64 scope link
valid_lft forever preferred_lft forever
30: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 10.100.247.22/32 scope global tunl0
valid_lft forever preferred_lft forever
31: calieb588c579e3@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
32: cali813929a4a7b@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
Cross-node Pod communication
5. node-1 Pod IP
$ kubectl exec -it test-7999578869-kx4hp -- sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1480 qdisc noqueue
link/ether 5e:56:ea:dc:5c:33 brd ff:ff:ff:ff:ff:ff
inet 10.100.84.132/32 brd 10.100.84.132 scope global eth0
valid_lft forever preferred_lft forever
6. node-2 Pod IP
$ kubectl exec -it test-7999578869-jr2w9 -- sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if32: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1480 qdisc noqueue
link/ether 76:22:a9:e4:88:04 brd ff:ff:ff:ff:ff:ff
inet 10.100.247.24/32 brd 10.100.247.24 scope global eth0
valid_lft forever preferred_lft forever
7. On node-1, capture ICMP with tcpdump (on ens33 the filter is "ip proto 4", the IPIP protocol number, since the encapsulated packets no longer match a plain "icmp" filter)
sudo tcpdump -nei ens33 ip proto 4
sudo tcpdump -nei tunl0 icmp
sudo tcpdump -nei cali14de3927136 icmp
8. On node-2, capture ICMP with tcpdump
sudo tcpdump -nei ens33 ip proto 4
sudo tcpdump -nei tunl0 icmp
sudo tcpdump -nei cali813929a4a7b icmp
9. From the node-1 Pod, ping the node-2 Pod
$ kubectl exec -it test-7999578869-kx4hp -- sh
/ # ping 10.100.247.24 -c2
PING 10.100.247.24 (10.100.247.24): 56 data bytes
64 bytes from 10.100.247.24: seq=0 ttl=62 time=1.008 ms
64 bytes from 10.100.247.24: seq=1 ttl=62 time=0.781 ms
10. Inspect the captures from step 7 (node-1)
$ sudo tcpdump -nei cali14de3927136 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on cali14de3927136, link-type EN10MB (Ethernet), capture size 262144 bytes
14:09:11.589241 5e:56:ea:dc:5c:33 > ee:ee:ee:ee:ee:ee, ethertype IPv4 (0x0800), length 98: 10.100.84.132 > 10.100.247.24: ICMP echo request, id 8704, seq 0, length 64
14:09:11.589903 ee:ee:ee:ee:ee:ee > 5e:56:ea:dc:5c:33, ethertype IPv4 (0x0800), length 98: 10.100.247.24 > 10.100.84.132: ICMP echo reply, id 8704, seq 0, length 64
14:09:12.589717 5e:56:ea:dc:5c:33 > ee:ee:ee:ee:ee:ee, ethertype IPv4 (0x0800), length 98: 10.100.84.132 > 10.100.247.24: ICMP echo request, id 8704, seq 1, length 64
14:09:12.590404 ee:ee:ee:ee:ee:ee > 5e:56:ea:dc:5c:33, ethertype IPv4 (0x0800), length 98: 10.100.247.24 > 10.100.84.132: ICMP echo reply, id 8704, seq 1, length 64
$ sudo tcpdump -nei tunl0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tunl0, link-type RAW (Raw IP), capture size 262144 bytes
14:09:11.589275 ip: 10.100.84.132 > 10.100.247.24: ICMP echo request, id 8704, seq 0, length 64
14:09:11.589887 ip: 10.100.247.24 > 10.100.84.132: ICMP echo reply, id 8704, seq 0, length 64
14:09:12.589763 ip: 10.100.84.132 > 10.100.247.24: ICMP echo request, id 8704, seq 1, length 64
14:09:12.590394 ip: 10.100.247.24 > 10.100.84.132: ICMP echo reply, id 8704, seq 1, length 64
$ sudo tcpdump -nei ens33 ip proto 4
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
14:09:11.589287 00:0c:29:10:a9:6c > 00:0c:29:2f:33:85, ethertype IPv4 (0x0800), length 118: 10.1.10.9 > 10.1.10.10: 10.100.84.132 > 10.100.247.24: ICMP echo request, id 8704, seq 0, length 64 (ipip-proto-4)
14:09:11.589829 00:0c:29:2f:33:85 > 00:0c:29:10:a9:6c, ethertype IPv4 (0x0800), length 118: 10.1.10.10 > 10.1.10.9: 10.100.247.24 > 10.100.84.132: ICMP echo reply, id 8704, seq 0, length 64 (ipip-proto-4)
14:09:12.589779 00:0c:29:10:a9:6c > 00:0c:29:2f:33:85, ethertype IPv4 (0x0800), length 118: 10.1.10.9 > 10.1.10.10: 10.100.84.132 > 10.100.247.24: ICMP echo request, id 8704, seq 1, length 64 (ipip-proto-4)
14:09:12.590353 00:0c:29:2f:33:85 > 00:0c:29:10:a9:6c, ethertype IPv4 (0x0800), length 118: 10.1.10.10 > 10.1.10.9: 10.100.247.24 > 10.100.84.132: ICMP echo reply, id 8704, seq 1, length 64 (ipip-proto-4)
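Comparing the captures above shows the encapsulation cost directly: the cali/tunl0 side carries 98-byte frames, while ens33 carries 118-byte frames. The inner Ethernet header (14 bytes) is stripped before encapsulation, then an outer IPv4 header (20 bytes) and a fresh outer Ethernet header (14 bytes) are added:

```shell
INNER_FRAME=98   # frame on the cali/Pod side, as captured above
ETH=14           # Ethernet header: the inner one is stripped, an outer one is added
OUTER_IP=20      # IPv4 header added by IPIP

echo "wire frame: $(( INNER_FRAME - ETH + OUTER_IP + ETH ))"   # 98 - 14 + 20 + 14 = 118
```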
11. Inspect the captures from step 8 (node-2)
$ sudo tcpdump -nei ens33 ip proto 4
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
14:09:11.618270 00:0c:29:10:a9:6c > 00:0c:29:2f:33:85, ethertype IPv4 (0x0800), length 118: 10.1.10.9 > 10.1.10.10: 10.100.84.132 > 10.100.247.24: ICMP echo request, id 8704, seq 0, length 64 (ipip-proto-4)
14:09:11.618541 00:0c:29:2f:33:85 > 00:0c:29:10:a9:6c, ethertype IPv4 (0x0800), length 118: 10.1.10.10 > 10.1.10.9: 10.100.247.24 > 10.100.84.132: ICMP echo reply, id 8704, seq 0, length 64 (ipip-proto-4)
14:09:12.618795 00:0c:29:10:a9:6c > 00:0c:29:2f:33:85, ethertype IPv4 (0x0800), length 118: 10.1.10.9 > 10.1.10.10: 10.100.84.132 > 10.100.247.24: ICMP echo request, id 8704, seq 1, length 64 (ipip-proto-4)
14:09:12.618999 00:0c:29:2f:33:85 > 00:0c:29:10:a9:6c, ethertype IPv4 (0x0800), length 118: 10.1.10.10 > 10.1.10.9: 10.100.247.24 > 10.100.84.132: ICMP echo reply, id 8704, seq 1, length 64 (ipip-proto-4)
$ sudo tcpdump -nei tunl0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tunl0, link-type RAW (Raw IP), capture size 262144 bytes
14:09:11.618327 ip: 10.100.84.132 > 10.100.247.24: ICMP echo request, id 8704, seq 0, length 64
14:09:11.618528 ip: 10.100.247.24 > 10.100.84.132: ICMP echo reply, id 8704, seq 0, length 64
14:09:12.618870 ip: 10.100.84.132 > 10.100.247.24: ICMP echo request, id 8704, seq 1, length 64
14:09:12.618985 ip: 10.100.247.24 > 10.100.84.132: ICMP echo reply, id 8704, seq 1, length 64
$ sudo tcpdump -nei cali813929a4a7b icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on cali813929a4a7b, link-type EN10MB (Ethernet), capture size 262144 bytes
14:09:11.618375 ee:ee:ee:ee:ee:ee > 76:22:a9:e4:88:04, ethertype IPv4 (0x0800), length 98: 10.100.84.132 > 10.100.247.24: ICMP echo request, id 8704, seq 0, length 64
14:09:11.618513 76:22:a9:e4:88:04 > ee:ee:ee:ee:ee:ee, ethertype IPv4 (0x0800), length 98: 10.100.247.24 > 10.100.84.132: ICMP echo reply, id 8704, seq 0, length 64
14:09:12.618891 ee:ee:ee:ee:ee:ee > 76:22:a9:e4:88:04, ethertype IPv4 (0x0800), length 98: 10.100.84.132 > 10.100.247.24: ICMP echo request, id 8704, seq 1, length 64
14:09:12.618973 76:22:a9:e4:88:04 > ee:ee:ee:ee:ee:ee, ethertype IPv4 (0x0800), length 98: 10.100.247.24 > 10.100.84.132: ICMP echo reply, id 8704, seq 1, length 64
12. Cross-node Pod communication path
node-1/cali14de3927136
→ node-1/tunl0
→ node-1/ens33
→ node-2/ens33
→ node-2/tunl0
→ node-2/cali813929a4a7b
Network Policy
TODO
NSX-T