Question:

Kubernetes: unable to access flannel pods

冉伯寅
2023-03-14

I am new to Kubernetes. I have set up 3 Ubuntu 20.04.2 LTS VMs on Oracle VirtualBox Manager.

I installed docker, kubelet, kubeadm and kubectl on all 3 VMs, following this documentation:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

I created the cluster using this guide: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

I set up flannel with the following commands:

$ wget https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
$ kubectl create -f kube-flannel.yml
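As background (my addition, not part of the original question): flannel's stock kube-flannel.yml ships a net-conf.json ConfigMap entry declaring the 10.244.0.0/16 pod network, and this must match the --pod-network-cidr passed to kubeadm init. The pod IPs shown later (10.244.1.x, 10.244.2.x) suggest the two agree in this cluster. A minimal local check, using a reproduced excerpt of that ConfigMap:

```shell
# flannel's manifest embeds its pod network in a ConfigMap entry named
# net-conf.json; the excerpt below reproduces its usual content.
cat > net-conf.json <<'EOF'
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
EOF

# The declared network must match what kubeadm was given at init time,
# e.g.: kubeadm init --pod-network-cidr=10.244.0.0/16
grep '"Network"' net-conf.json
```

If the two CIDRs disagree, pods get addresses outside the overlay and cross-node traffic fails silently, much like the symptom described below.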

Everything looked fine.

root@master-node:~/k8s# kubectl get nodes -o wide
NAME          STATUS   ROLES                  AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
master-node   Ready    control-plane,master   23h   v1.20.5   192.168.108.10   <none>        Ubuntu 20.04.2 LTS   5.4.0-70-generic   docker://19.3.15
node-1        Ready    <none>                 10h   v1.20.5   192.168.108.11   <none>        Ubuntu 20.04.2 LTS   5.4.0-70-generic   docker://19.3.15
node-2        Ready    <none>                 10h   v1.20.5   192.168.108.12   <none>        Ubuntu 20.04.2 LTS   5.4.0-70-generic   docker://19.3.15

I then created an nginx deployment with 3 replicas.

root@master-node:~/k8s# kubectl get po -o wide
NAME                            READY   STATUS    RESTARTS   AGE    IP           NODE     NOMINATED NODE   READINESS GATES
dnsutils                        1/1     Running   2          127m   10.244.2.8   node-2   <none>           <none>
nginx-deploy-7848d4b86f-4nvg7   1/1     Running   0          9m8s   10.244.2.9   node-2   <none>           <none>
nginx-deploy-7848d4b86f-prj7g   1/1     Running   0          9m8s   10.244.1.9   node-1   <none>           <none>
nginx-deploy-7848d4b86f-r95hq   1/1     Running   0          9m8s   10.244.1.8   node-1   <none>           <none>

The problem only shows up when I try to curl an nginx pod. There is no response.

root@master-node:~/k8s# curl 10.244.2.9
^C
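A small diagnostic sketch (my addition, not from the question): give curl a deadline so a silent packet drop surfaces as an explicit timeout, and watch flannel's VXLAN port to see whether overlay traffic leaves the node at all. 192.0.2.1 below is a documentation placeholder standing in for the pod IP, and UDP 8472 is flannel's default VXLAN port.

```shell
# A hanging curl gives no information; --max-time turns the hang into an
# explicit failure (exit code 28 means the operation timed out).
# 192.0.2.1 is a placeholder; substitute the real pod IP, e.g. 10.244.2.9.
curl --max-time 2 --silent http://192.0.2.1/ || echo "curl failed with exit code $?"

# On the node, check whether encapsulated pod traffic is actually sent out
# while the curl runs (flannel's VXLAN backend defaults to UDP port 8472):
#   tcpdump -ni enp0s3 udp port 8472
```

If tcpdump on the destination node sees nothing, the packets are leaving on the wrong interface, which points at the interface problem described in the answer below.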

I then logged into the pod and confirmed that nginx was up.

root@master-node:~/k8s# kubectl exec -it nginx-deploy-7848d4b86f-4nvg7  -- /bin/bash
root@nginx-deploy-7848d4b86f-4nvg7:/# curl 127.0.0.1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@nginx-deploy-7848d4b86f-4nvg7:/# exit
exit

Here is the result of kubectl describe pod on one of the pods:

root@master-node:~/k8s# kubectl describe pod nginx-deploy-7848d4b86f-4nvg7
Name:         nginx-deploy-7848d4b86f-4nvg7
Namespace:    default
Priority:     0
Node:         node-2/192.168.108.12
Start Time:   Sun, 28 Mar 2021 04:49:15 +0000
Labels:       app=nginx
              pod-template-hash=7848d4b86f
Annotations:  <none>
Status:       Running
IP:           10.244.2.9
IPs:
  IP:           10.244.2.9
Controlled By:  ReplicaSet/nginx-deploy-7848d4b86f
Containers:
  nginx:
    Container ID:   docker://f6322e65cb98e54cc220a786ffb7c967bbc07d80fe8d118a19891678109680d8
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:b0ea179ab61c789ce759dbe491cc534e293428ad232d00df83ce44bf86261179
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 28 Mar 2021 04:49:19 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xhkzx (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-xhkzx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xhkzx
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  25m   default-scheduler  Successfully assigned default/nginx-deploy-7848d4b86f-4nvg7 to node-2
  Normal  Pulling    25m   kubelet            Pulling image "nginx"
  Normal  Pulled     25m   kubelet            Successfully pulled image "nginx" in 1.888247052s
  Normal  Created    25m   kubelet            Created container nginx
  Normal  Started    25m   kubelet            Started container nginx

I tried to troubleshoot by following "Debugging Kubernetes networking":

root@master-node:~/k8s# ip link list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:db:6f:21 brd ff:ff:ff:ff:ff:ff
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:90:88:7c brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:1d:21:66:20 brd ff:ff:ff:ff:ff:ff
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default
    link/ether 4a:df:fb:be:7b:0e brd ff:ff:ff:ff:ff:ff
6: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/ether 02:48:db:46:53:60 brd ff:ff:ff:ff:ff:ff
7: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fa:29:13:98:2c:31 brd ff:ff:ff:ff:ff:ff
8: vethc2e0fa86@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default
    link/ether 7a:66:b0:97:db:81 brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: veth3eb514e1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default
    link/ether 3e:3c:9d:20:5c:42 brd ff:ff:ff:ff:ff:ff link-netnsid 1
11: veth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 02:35:f0:fb:e3:b1 brd ff:ff:ff:ff:ff:ff link-netns test1
root@master-node:~/k8s# kubectl create -f nwtool-deployment.yaml
deployment.apps/nwtool-deploy created
root@master-node:~/k8s# kubectl get po
NAME                             READY   STATUS    RESTARTS   AGE
nwtool-deploy-6d8c99644b-fq6gv   1/1     Running   0          14s
nwtool-deploy-6d8c99644b-fwc6d   1/1     Running   0          14s
root@master-node:~/k8s# ^C
root@master-node:~/k8s# kubectl exec -it nwtool-deploy-6d8c99644b-fq6gv -- ip link list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default
    link/ether 2e:02:b6:97:2f:10 brd ff:ff:ff:ff:ff:ff
root@master-node:~/k8s# kubectl exec -it nwtool-deploy-6d8c99644b-fwc6d -- ip link list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default
    link/ether 82:21:fa:aa:34:27 brd ff:ff:ff:ff:ff:ff
root@master-node:~/k8s# ip link list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:db:6f:21 brd ff:ff:ff:ff:ff:ff
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:90:88:7c brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:1d:21:66:20 brd ff:ff:ff:ff:ff:ff
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default
    link/ether 4a:df:fb:be:7b:0e brd ff:ff:ff:ff:ff:ff
6: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/ether 02:48:db:46:53:60 brd ff:ff:ff:ff:ff:ff
7: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fa:29:13:98:2c:31 brd ff:ff:ff:ff:ff:ff
8: vethc2e0fa86@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default
    link/ether 7a:66:b0:97:db:81 brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: veth3eb514e1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default
    link/ether 3e:3c:9d:20:5c:42 brd ff:ff:ff:ff:ff:ff link-netnsid 1
11: veth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 02:35:f0:fb:e3:b1 brd ff:ff:ff:ff:ff:ff link-netns test1
root@master-node:~/k8s#

It looks like no veth pairs were created on the master node for the new pods. Any idea how to fix this? Any help would be greatly appreciated. Thanks!

1 Answer

陆光济
2023-03-14

I have found the problem. Thanks to "Kubernetes with Flannel - Understanding the Networking - Part 1 (Setup a demo)", I am copying below the excerpt that helped resolve my issue:

The VMs are created with 2 interfaces, and when running flannel you need to specify the interface name correctly. Without it, you may see that pods come up and get IP addresses but cannot communicate with each other.

You need to specify the interface name enp0s8 in the flannel manifest file.

vagrant@master:~$ grep -A8 containers kube-flannel.yml
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=enp0s8          ####Add the iface name here.

If you happen to have a different interface to match, you can match it with a regex pattern. Say the worker nodes may be configured with either enp0s8 or enp0s9; the flannel argument would then be --iface-regex=[enp0s8|enp0s9].
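The edit above can also be applied mechanically. A sketch of one way to do it (the sed expression and the app=flannel label are my assumptions about the stock manifest; verify against your copy), shown here against a reproduced excerpt of the args section:

```shell
# Reproduce just the container args section of kube-flannel.yml for illustration:
cat > flannel-args-excerpt.yml <<'EOF'
        args:
        - --ip-masq
        - --kube-subnet-mgr
EOF

# Insert the --iface flag immediately after --kube-subnet-mgr:
sed -i 's/- --kube-subnet-mgr/- --kube-subnet-mgr\n        - --iface=enp0s8/' flannel-args-excerpt.yml
cat flannel-args-excerpt.yml

# On the real manifest, re-apply it and recreate the flannel pods so the
# DaemonSet picks up the new argument (the label may differ in your manifest):
#   kubectl apply -f kube-flannel.yml
#   kubectl -n kube-system delete pod -l app=flannel
```

After the flannel pods restart with the new --iface argument, cross-node pod traffic should go over enp0s8 (the host-only network carrying the node IPs) instead of the NAT interface.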
