Question:

Hybrid/heterogeneous Kubernetes cluster with nodes in different networks, connected via VPN

龚俭
2023-03-14

My goal is to model a hybrid/heterogeneous Kubernetes cluster with the following setup:

  • Master node running on AWS (cloud) - ip-172-31-28-6
  • Worker node running on my laptop - osboxes
  • Worker node running on a Raspberry Pi - edge-1

Running a Kubernetes cluster with three VMs locally on my laptop is no problem, and it works fine with Weave Net. However, when modeling the Kubernetes cluster described above, there are communication problems (I suspect).

Since Kubernetes is designed to run with all nodes on the same network, I set up an OpenVPN server on AWS and connected my laptop and the Raspberry Pi to it. I hoped this would be enough to run Kubernetes in a heterogeneous setup where the worker nodes sit in different networks. That was, of course, an incorrect assumption.
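For reference, a minimal sketch of the server-side OpenVPN settings such a setup typically needs; the subnet values here are assumptions for illustration (only 10.8.0.0/24 is corroborated by the tun0 output further below), not my actual config:

```
# /etc/openvpn/server.conf (sketch; subnets are assumed for illustration)
dev tun
server 10.8.0.0 255.255.255.0            # VPN subnet; matches tun0 10.8.0.1 below
client-to-client                         # lets the laptop and the Pi reach each other
push "route 172.31.16.0 255.255.240.0"   # assumed route into the AWS VPC subnet
```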

If I run the Kubernetes dashboard on a worker node and try to access it, I get a timeout. If I run it on the master node, everything works as expected.

I set up the cluster on AWS with kubeadm init --apiserver-advertise-address=<PUBLIC_IP> and joined the nodes with kubeadm join.
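As a hedged sketch, the cluster could instead be initialized against the VPN addresses so that master-to-worker traffic stays inside the tunnel; the 10.8.0.1 address is the master's tun0 address from the ifconfig output below, while <token> and <hash> are elided placeholders:

```shell
# On the master: advertise the API server on its VPN address (sketch).
sudo kubeadm init --apiserver-advertise-address=10.8.0.1

# On each worker: join via the same VPN address; token and CA hash
# come from the kubeadm init output and are deliberately elided here.
sudo kubeadm join 10.8.0.1:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```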

$ kubectl get pods --all-namespaces -o wide:

NAMESPACE     NAME                                     READY     STATUS              RESTARTS   AGE       IP              NODE
kube-system   etcd-ip-172-31-28-6                      1/1       Running             0          5m        172.31.28.6     ip-172-31-28-6
kube-system   kube-apiserver-ip-172-31-28-6            1/1       Running             0          5m        172.31.28.6     ip-172-31-28-6
kube-system   kube-controller-manager-ip-172-31-28-6   1/1       Running             0          5m        172.31.28.6     ip-172-31-28-6
kube-system   kube-dns-6f4fd4bdf-w6ctf                 0/3       ContainerCreating   0          15h       <none>          osboxes
kube-system   kube-proxy-2pl2f                         1/1       Running             0          15h       172.31.28.6     ip-172-31-28-6
kube-system   kube-proxy-7b89c                         0/1       CrashLoopBackOff    15         15h       192.168.2.106   edge-1
kube-system   kube-proxy-qg69g                         1/1       Running             1          15h       10.0.2.15       osboxes
kube-system   kube-scheduler-ip-172-31-28-6            1/1       Running             0          5m        172.31.28.6     ip-172-31-28-6
kube-system   weave-net-pqxfp                          1/2       CrashLoopBackOff    189        15h       172.31.28.6     ip-172-31-28-6
kube-system   weave-net-thhzr                          1/2       CrashLoopBackOff    12         36m       192.168.2.106   edge-1
kube-system   weave-net-v69hj                          2/2       Running             7          15h       10.0.2.15       osboxes

$ kubectl -n kube-system logs --v=7 kube-dns-6f4fd4bdf-w6ctf -c kubedns:

...
I0321 09:04:25.620580   23936 round_trippers.go:414] GET https://<PUBLIC_IP>:6443/api/v1/namespaces/kube-system/pods/kube-dns-6f4fd4bdf-w6ctf/log?container=kubedns
I0321 09:04:25.620605   23936 round_trippers.go:421] Request Headers:
I0321 09:04:25.620611   23936 round_trippers.go:424]     Accept: application/json, */*
I0321 09:04:25.620616   23936 round_trippers.go:424]     User-Agent: kubectl/v1.9.4 (linux/amd64) kubernetes/bee2d15
I0321 09:04:25.713821   23936 round_trippers.go:439] Response Status: 400 Bad Request in 93 milliseconds
I0321 09:04:25.714106   23936 helpers.go:201] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "container \"kubedns\" in pod \"kube-dns-6f4fd4bdf-w6ctf\" is waiting to start: ContainerCreating",
  "reason": "BadRequest",
  "code": 400
}]
F0321 09:04:25.714134   23936 helpers.go:119] Error from server (BadRequest): container "kubedns" in pod "kube-dns-6f4fd4bdf-w6ctf" is waiting to start: ContainerCreating

$ kubectl -n kube-system logs --v=7 kube-proxy-7b89c:

...
I0321 09:06:51.803852   24289 round_trippers.go:414] GET https://<PUBLIC_IP>:6443/api/v1/namespaces/kube-system/pods/kube-proxy-7b89c/log
I0321 09:06:51.803879   24289 round_trippers.go:421] Request Headers:
I0321 09:06:51.803891   24289 round_trippers.go:424]     User-Agent: kubectl/v1.9.4 (linux/amd64) kubernetes/bee2d15
I0321 09:06:51.803900   24289 round_trippers.go:424]     Accept: application/json, */*
I0321 09:08:59.110869   24289 round_trippers.go:439] Response Status: 500 Internal Server Error in 127306 milliseconds
I0321 09:08:59.111129   24289 helpers.go:201] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "Get https://192.168.2.106:10250/containerLogs/kube-system/kube-proxy-7b89c/kube-proxy: dial tcp 192.168.2.106:10250: getsockopt: connection timed out",
  "code": 500
}]
F0321 09:08:59.111156   24289 helpers.go:119] Error from server: Get https://192.168.2.106:10250/containerLogs/kube-system/kube-proxy-7b89c/kube-proxy: dial tcp 192.168.2.106:10250: getsockopt: connection timed out

$ kubectl -n kube-system logs --v=7 weave-net-pqxfp -c weave:

...
I0321 09:12:08.047206   24847 round_trippers.go:414] GET https://<PUBLIC_IP>:6443/api/v1/namespaces/kube-system/pods/weave-net-pqxfp/log?container=weave
I0321 09:12:08.047233   24847 round_trippers.go:421] Request Headers:
I0321 09:12:08.047335   24847 round_trippers.go:424]     Accept: application/json, */*
I0321 09:12:08.047347   24847 round_trippers.go:424]     User-Agent: kubectl/v1.9.4 (linux/amd64) kubernetes/bee2d15
I0321 09:12:08.062494   24847 round_trippers.go:439] Response Status: 200 OK in 15 milliseconds
DEBU: 2018/03/21 09:11:26.847013 [kube-peers] Checking peer "fa:10:a4:97:7e:7b" against list &{[{6e:fd:f4:ef:1e:f5 osboxes}]}
Peer not in list; removing persisted data
INFO: 2018/03/21 09:11:26.880946 Command line options: map[expect-npc:true ipalloc-init:consensus=3 db-prefix:/weavedb/weave-net http-addr:127.0.0.1:6784 ipalloc-range:10.32.0.0/12 nickname:ip-172-31-28-6 host-root:/host name:fa:10:a4:97:7e:7b no-dns:true status-addr:0.0.0.0:6782 datapath:datapath docker-api: port:6783 conn-limit:30]
INFO: 2018/03/21 09:11:26.880995 weave  2.2.1
FATA: 2018/03/21 09:11:26.881117 Inconsistent bridge state detected. Please do 'weave reset' and try again
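The FATA line above points at stale Weave state on the master. A possible recovery, as a hedged sketch (the exact state paths and whether the standalone weave script is installed on the host are assumptions):

```shell
# On the affected node: stop kubelet so it doesn't fight the cleanup.
sudo systemctl stop kubelet

# 'weave reset' tears down the weave bridge and datapath devices.
# Assumption: the standalone weave script is available on the host.
sudo weave reset

# Weave's persisted peer data is mounted from the host (the weavedb
# volume in the kubelet log below); clearing it forces a fresh start.
# Assumption: the hostPath is /var/lib/weave, as in the default manifest.
sudo rm -rf /var/lib/weave

sudo systemctl start kubelet
```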

$ kubectl -n kube-system logs --v=7 weave-net-thhzr -c weave:

...
I0321 09:15:13.787905   25113 round_trippers.go:414] GET https://<PUBLIC_IP>:6443/api/v1/namespaces/kube-system/pods/weave-net-thhzr/log?container=weave
I0321 09:15:13.787932   25113 round_trippers.go:421] Request Headers:
I0321 09:15:13.787938   25113 round_trippers.go:424]     Accept: application/json, */*
I0321 09:15:13.787946   25113 round_trippers.go:424]     User-Agent: kubectl/v1.9.4 (linux/amd64) kubernetes/bee2d15
I0321 09:17:21.126863   25113 round_trippers.go:439] Response Status: 500 Internal Server Error in 127338 milliseconds
I0321 09:17:21.127140   25113 helpers.go:201] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "Get https://192.168.2.106:10250/containerLogs/kube-system/weave-net-thhzr/weave: dial tcp 192.168.2.106:10250: getsockopt: connection timed out",
  "code": 500
}]
F0321 09:17:21.127167   25113 helpers.go:119] Error from server: Get https://192.168.2.106:10250/containerLogs/kube-system/weave-net-thhzr/weave: dial tcp 192.168.2.106:10250: getsockopt: connection timed out

$ ifconfig (on the Kubernetes master on AWS):

datapath  Link encap:Ethernet  HWaddr ae:90:9a:b2:7e:d9
          inet6 addr: fe80::ac90:9aff:feb2:7ed9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1376  Metric:1
          RX packets:29 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:1904 (1.9 KB)  TX bytes:1188 (1.1 KB)

docker0   Link encap:Ethernet  HWaddr 02:42:50:39:1f:c7
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 06:a3:d0:8e:19:72
          inet addr:172.31.28.6  Bcast:172.31.31.255  Mask:255.255.240.0
          inet6 addr: fe80::4a3:d0ff:fe8e:1972/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:10323322 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9418208 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3652314289 (3.6 GB)  TX bytes:3117288442 (3.1 GB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:11388236 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11388236 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:2687297929 (2.6 GB)  TX bytes:2687297929 (2.6 GB)

tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:10.8.0.1  P-t-P:10.8.0.2  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:97222 errors:0 dropped:0 overruns:0 frame:0
          TX packets:164607 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:13381022 (13.3 MB)  TX bytes:209129403 (209.1 MB)

vethwe-bridge Link encap:Ethernet  HWaddr 12:59:54:73:0f:91
          inet6 addr: fe80::1059:54ff:fe73:f91/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1376  Metric:1
          RX packets:18 errors:0 dropped:0 overruns:0 frame:0
          TX packets:36 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1476 (1.4 KB)  TX bytes:2940 (2.9 KB)

vethwe-datapath Link encap:Ethernet  HWaddr 8e:75:1c:92:93:0d
          inet6 addr: fe80::8c75:1cff:fe92:930d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1376  Metric:1
          RX packets:36 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2940 (2.9 KB)  TX bytes:1476 (1.4 KB)

vxlan-6784 Link encap:Ethernet  HWaddr a6:02:da:5e:d5:2a
          inet6 addr: fe80::a402:daff:fe5e:d52a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:65485  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:8 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

$ sudo systemctl status kubelet.service (on AWS):

Mar 21 09:34:59 ip-172-31-28-6 kubelet[19676]: W0321 09:34:59.202058   19676 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 21 09:34:59 ip-172-31-28-6 kubelet[19676]: E0321 09:34:59.202452   19676 kubelet.go:2109] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 21 09:35:01 ip-172-31-28-6 kubelet[19676]: I0321 09:35:01.535541   19676 kuberuntime_manager.go:514] Container {Name:weave Image:weaveworks/weave-kube:2.2.1 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI}]} VolumeMounts:[{Name:weavedb ReadOnly:false MountPath:/weavedb SubPath: MountPropagation:<nil>} {Name:cni-bin ReadOnly:false MountPath:/host/opt SubPath: MountPropagation:<nil>} {Name:cni-bin2 ReadOnly:false MountPath:/host/home SubPath: MountPropagation:<nil>} {Name:cni-conf ReadOnly:false MountPath:/host/etc SubPath: MountPropagation:<nil>} {Name:dbus ReadOnly:false MountPath:/host/var/lib/dbus SubPath: MountPropagation:<nil>} {Name:lib-modules ReadOnly:false MountPath:/lib/modules SubPath: MountPropagation:<nil>} {Name:weave-net-token-vn8rh ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/status,Port:6784,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Mar 21 09:35:01 ip-172-31-28-6 kubelet[19676]: I0321 09:35:01.536504   19676 kuberuntime_manager.go:758] checking backoff for container "weave" in pod "weave-net-pqxfp_kube-system(c6450070-2c61-11e8-a50d-06a3d08e1972)"
Mar 21 09:35:01 ip-172-31-28-6 kubelet[19676]: I0321 09:35:01.536636   19676 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=weave pod=weave-net-pqxfp_kube-system(c6450070-2c61-11e8-a50d-06a3d08e1972)
Mar 21 09:35:01 ip-172-31-28-6 kubelet[19676]: E0321 09:35:01.536664   19676 pod_workers.go:186] Error syncing pod c6450070-2c61-11e8-a50d-06a3d08e1972 ("weave-net-pqxfp_kube-system(c6450070-2c61-11e8-a50d-06a3d08e1972)"), skipping: failed to "StartContainer" for "weave" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=weave pod=weave-net-pqxfp_kube-system(c6450070-2c61-11e8-a50d-06a3d08e1972)"

$ sudo systemctl status kubelet.service (on the laptop):

Mar 21 05:47:18 osboxes kubelet[715]: E0321 05:47:18.662670     715 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 21 05:47:18 osboxes kubelet[715]: E0321 05:47:18.663412     715 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 21 05:47:18 osboxes kubelet[715]: E0321 05:47:18.663869     715 kuberuntime_manager.go:647] createPodSandbox for pod "kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 21 05:47:18 osboxes kubelet[715]: E0321 05:47:18.664295     715 pod_workers.go:186] Error syncing pod 11886465-2c61-11e8-a50d-06a3d08e1972 ("kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)"), skipping: failed to "CreatePodSandbox" for "kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-6f4fd4bdf-w6ctf_kube-system(11886465-2c61-11e8-a50d-06a3d08e1972)\" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Mar 21 05:47:20 osboxes kubelet[715]: W0321 05:47:20.536161     715 pod_container_deletor.go:77] Container "bbf490835face43b70c24dbcb67c3f75872e7831b5e2605dc8bb71210910e273" not found in pod's containers

$ sudo systemctl status kubelet.service (on the Raspberry Pi):

Mar 21 09:29:01 edge-1 kubelet[339]: I0321 09:29:01.188199     339 kuberuntime_manager.go:514] Container {Name:kube-proxy Image:gcr.io/google_containers/kube-proxy-amd64:v1.9.5 Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:kube-proxy ReadOnly:false MountPath:/var/lib/kube-proxy SubPath: MountPropagation:<nil>} {Name:xtables-lock ReadOnly:false MountPath:/run/xtables.lock SubPath: MountPropagation:<nil>} {Name:lib-modules ReadOnly:true MountPath:/lib/modules SubPath: MountPropagation:<nil>} {Name:kube-proxy-token-px7dt ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Mar 21 09:29:01 edge-1 kubelet[339]: I0321 09:29:01.189023     339 kuberuntime_manager.go:758] checking backoff for container "kube-proxy" in pod "kube-proxy-7b89c_kube-system(5bebafa1-2c61-11e8-a50d-06a3d08e1972)"
Mar 21 09:29:01 edge-1 kubelet[339]: I0321 09:29:01.190174     339 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-7b89c_kube-system(5bebafa1-2c61-11e8-a50d-06a3d08e1972)
Mar 21 09:29:01 edge-1 kubelet[339]: E0321 09:29:01.190518     339 pod_workers.go:186] Error syncing pod 5bebafa1-2c61-11e8-a50d-06a3d08e1972 ("kube-proxy-7b89c_kube-system(5bebafa1-2c61-11e8-a50d-06a3d08e1972)"), skipping: failed to "StartContainer" for "kube-proxy" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-7b89c_kube-system(5bebafa1-2c61-11e8-a50d-06a3d08e1972)"
Mar 21 09:29:02 edge-1 kubelet[339]: W0321 09:29:02.278342     339 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 21 09:29:02 edge-1 kubelet[339]: E0321 09:29:02.282534     339 kubelet.go:2120] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

1 Answer

松高歌
2023-03-14

You definitely have a network connectivity problem between the Kubernetes master and the nodes.

First of all, though, this is not the best way to build such a hybrid installation. You need a stable network between the master and the nodes, otherwise you will run into many problems, and that is hard to achieve over an Internet connection.

If you want to prepare a hybrid installation, you could use Federation between a Kubernetes cluster in AWS and one on your local hardware.

But regarding your actual problem: I see that Weave Net is failing on both the master and the edge-1 node. It is not clear from the logs exactly what kind of problem you have; try running the Weave containers with the weave_debug=1 environment variable. Without working pod networking, other pods such as kube-dns will not work properly.

Also, how did you set up OpenVPN? You need routing between the subnets on AWS and client-to-client routing, so that all the addresses you used to set up the cluster are routable between all nodes. Check again which addresses the Kubernetes components and Weave bind to, and whether those addresses are reachable from every node.
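A hedged sketch of checks along those lines; the worker VPN addresses (10.8.0.6, 10.8.0.10) are assumptions for illustration, and the kubelet drop-in path varies by distribution and kubeadm version:

```shell
# From the master, verify the VPN path to each worker (addresses assumed):
ping -c 3 10.8.0.6        # laptop's VPN address (assumption)
ping -c 3 10.8.0.10       # Raspberry Pi's VPN address (assumption)

# The 500 errors above show the API server dialing the Pi's LAN address
# (192.168.2.106:10250), which is unreachable from AWS. Pinning each
# kubelet to its VPN address with --node-ip may fix that; the file below
# is the kubeadm default on Debian-style systems (assumption).
echo 'KUBELET_EXTRA_ARGS=--node-ip=10.8.0.10' | sudo tee /etc/default/kubelet
sudo systemctl restart kubelet

# Confirm the node now reports its VPN address in the INTERNAL-IP column:
kubectl get nodes -o wide
```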
