Question:

Requests time out when accessing a Kubernetes ClusterIP service

邹野
2023-03-14

I'm looking for help figuring out why this basic setup doesn't work:

Three nodes installed with kubeadm, on VirtualBox VMs running on a MacBook:

sudo kubectl get nodes
NAME                STATUS    ROLES     AGE       VERSION
kubernetes-master   Ready     master    4h        v1.10.2
kubernetes-node1    Ready     <none>    4h        v1.10.2
kubernetes-node2    Ready     <none>    34m       v1.10.2

Each VirtualBox VM has two adapters: 1) host-only, 2) NAT. The node IPs, as seen from the guest machines, are:

kubernetes-master (192.168.56.3)
kubernetes-node1  (192.168.56.4)
kubernetes-node2  (192.168.56.5)
The master was initialized with:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.3

Two nginx replicas are running, one on each worker node:

sudo kubectl get pods -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-deployment-64ff85b579-sk5zs   1/1       Running   0          14m       10.244.2.2   kubernetes-node2
nginx-deployment-64ff85b579-sqjgb   1/1       Running   0          14m       10.244.1.2   kubernetes-node1
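The deployment itself isn't shown in the question; with kubectl of this era it could have been created with something like the following (a hypothetical command, not taken from the question):

# Hypothetical: create an nginx deployment with two replicas, which the
# scheduler spreads across the two worker nodes.
sudo kubectl run nginx-deployment --image=nginx --replicas=2 --port=80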

I exposed them as a ClusterIP service:

sudo kubectl get services 
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP   22m
nginx-deployment   ClusterIP   10.98.206.211   <none>        80/TCP    14m
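The expose step isn't shown either; a ClusterIP service like the one above is typically created with kubectl expose (a sketch, not necessarily the questioner's actual command):

# Expose the deployment on port 80; ClusterIP is the default service type.
sudo kubectl expose deployment nginx-deployment --port=80 --target-port=80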

Now the problem:

I ssh into kubernetes-node1 and curl the service through the cluster IP:

ssh 192.168.56.4
---
curl 10.98.206.211

If I ssh into kubernetes-node2, every curl succeeds. From kubernetes-node1, every request times out.
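A useful way to narrow this down (not part of the original question) is to take the service out of the picture and curl the pod IPs directly from kubernetes-node1; if the cross-node pod also times out, the problem is in the flannel overlay rather than in kube-proxy:

# Pod IPs taken from the listing above.
curl --max-time 5 http://10.244.1.2   # pod on node1: local, should answer
curl --max-time 5 http://10.244.2.2   # pod on node2: crosses the flannel VXLAN tunnel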

Update 2:

kubernetes-master ifconfig

cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::20a0:c7ff:fe6f:8271  prefixlen 64  scopeid 0x20<link>
        ether 0a:58:0a:f4:00:01  txqueuelen 1000  (Ethernet)
        RX packets 10478  bytes 2415081 (2.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11523  bytes 2630866 (2.6 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:cd:ce:84:a9  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.3  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::a00:27ff:fe2d:298f  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:2d:29:8f  txqueuelen 1000  (Ethernet)
        RX packets 20784  bytes 2149991 (2.1 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 26567  bytes 26397855 (26.3 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.3.15  netmask 255.255.255.0  broadcast 10.0.3.255
        inet6 fe80::a00:27ff:fe09:f08a  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:09:f0:8a  txqueuelen 1000  (Ethernet)
        RX packets 12662  bytes 12491693 (12.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4507  bytes 297572 (297.5 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::c078:65ff:feb9:e4ed  prefixlen 64  scopeid 0x20<link>
        ether c2:78:65:b9:e4:ed  txqueuelen 0  (Ethernet)
        RX packets 6  bytes 444 (444.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6  bytes 444 (444.0 B)
        TX errors 0  dropped 15 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 464615  bytes 130013389 (130.0 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 464615  bytes 130013389 (130.0 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1440
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethb1098eb3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet6 fe80::d8a3:a2ff:fedf:4d1d  prefixlen 64  scopeid 0x20<link>
        ether da:a3:a2:df:4d:1d  txqueuelen 0  (Ethernet)
        RX packets 10478  bytes 2561773 (2.5 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11538  bytes 2631964 (2.6 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

kubernetes-node1 ifconfig

cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.1.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::5cab:32ff:fe04:5b89  prefixlen 64  scopeid 0x20<link>
        ether 0a:58:0a:f4:01:01  txqueuelen 1000  (Ethernet)
        RX packets 199  bytes 41004 (41.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 331  bytes 56438 (56.4 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:0f:02:bb:ff  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.4  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::a00:27ff:fe36:741a  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:36:74:1a  txqueuelen 1000  (Ethernet)
        RX packets 12834  bytes 9685221 (9.6 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9114  bytes 1014758 (1.0 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.3.15  netmask 255.255.255.0  broadcast 10.0.3.255
        inet6 fe80::a00:27ff:feb2:23a3  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:b2:23:a3  txqueuelen 1000  (Ethernet)
        RX packets 13263  bytes 12557808 (12.5 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5065  bytes 341321 (341.3 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.1.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::7815:efff:fed6:1423  prefixlen 64  scopeid 0x20<link>
        ether 7a:15:ef:d6:14:23  txqueuelen 0  (Ethernet)
        RX packets 483  bytes 37506 (37.5 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 483  bytes 37506 (37.5 KB)
        TX errors 0  dropped 15 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 3072  bytes 269588 (269.5 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3072  bytes 269588 (269.5 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth153293ec: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet6 fe80::70b6:beff:fe94:9942  prefixlen 64  scopeid 0x20<link>
        ether 72:b6:be:94:99:42  txqueuelen 0  (Ethernet)
        RX packets 81  bytes 19066 (19.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 129  bytes 10066 (10.0 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

kubernetes-node2 ifconfig

cni0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.244.2.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::4428:f5ff:fe8b:a76b  prefixlen 64  scopeid 0x20<link>
        ether 0a:58:0a:f4:02:01  txqueuelen 1000  (Ethernet)
        RX packets 184  bytes 36782 (36.7 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 284  bytes 36940 (36.9 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:7f:e9:79:cd  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.5  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::a00:27ff:feb7:ff54  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:b7:ff:54  txqueuelen 1000  (Ethernet)
        RX packets 12634  bytes 9466460 (9.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8961  bytes 979807 (979.8 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.3.15  netmask 255.255.255.0  broadcast 10.0.3.255
        inet6 fe80::a00:27ff:fed8:9210  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:d8:92:10  txqueuelen 1000  (Ethernet)
        RX packets 12658  bytes 12491919 (12.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4544  bytes 297215 (297.2 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.2.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::c832:e4ff:fe3e:f616  prefixlen 64  scopeid 0x20<link>
        ether ca:32:e4:3e:f6:16  txqueuelen 0  (Ethernet)
        RX packets 111  bytes 8466 (8.4 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 111  bytes 8466 (8.4 KB)
        TX errors 0  dropped 15 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2940  bytes 258968 (258.9 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2940  bytes 258968 (258.9 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

kubernetes-node2 kubelet logs

IP routes

Master:

kubernetes-master:~$ ip route
default via 10.0.3.2 dev enp0s8 proto dhcp src 10.0.3.15 metric 100 
10.0.3.0/24 dev enp0s8 proto kernel scope link src 10.0.3.15 
10.0.3.2 dev enp0s8 proto dhcp scope link src 10.0.3.15 metric 100 
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.56.0/24 dev enp0s3 proto kernel scope link src 192.168.56.3 

Node 1:

kubernetes-node1:~$ ip route
default via 10.0.3.2 dev enp0s8 proto dhcp src 10.0.3.15 metric 100 
10.0.3.0/24 dev enp0s8 proto kernel scope link src 10.0.3.15 
10.0.3.2 dev enp0s8 proto dhcp scope link src 10.0.3.15 metric 100 
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.56.0/24 dev enp0s3 proto kernel scope link src 192.168.56.4 

Node 2:

kubernetes-node2:~$ ip route
default via 10.0.3.2 dev enp0s8 proto dhcp src 10.0.3.15 metric 100 
10.0.3.0/24 dev enp0s8 proto kernel scope link src 10.0.3.15 
10.0.3.2 dev enp0s8 proto dhcp scope link src 10.0.3.15 metric 100 
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.56.0/24 dev enp0s3 proto kernel scope link src 192.168.56.5
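
One detail worth noting in the output above: enp0s8 (the NAT adapter) has the same address, 10.0.3.15, on all three nodes, so overlay endpoints registered on that NIC cannot distinguish the nodes. A quick way to check which node IPs flannel registered as VXLAN endpoints (a diagnostic sketch; output format varies with the iproute2 version):

# The "dst" addresses are the remote node IPs flannel uses as tunnel
# endpoints; here they should be 192.168.56.x, not 10.0.3.15.
bridge fdb show dev flannel.1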

1 Answer

商正浩
2023-03-14

I ran into a similar issue with a K8s cluster that used flannel. I had set up the VMs with a NAT NIC for internet access and a host-only NIC for node-to-node communication. By default, flannel picks the NAT NIC for inter-node communication, which obviously doesn't work in this scenario.
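You can usually confirm which interface flannel picked from its startup logs and from the VXLAN device itself (a sketch; the pod name below is a placeholder and the log wording varies by flannel version):

# Find the flannel pod on a node, then check which interface it registered.
sudo kubectl -n kube-system get pods -o wide | grep flannel
sudo kubectl -n kube-system logs kube-flannel-ds-xxxxx | grep -i interface   # replace with the actual pod name

# The "local" address on the VXLAN device should be the host-only NIC's IP.
ip -d link show flannel.1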

Before deploying, I modified the flannel manifest so that the --iface argument selects the host-only NIC (enp0s8 in my case). In your case it looks like enp0s3 would be the right NIC. Node-to-node communication worked fine after that.
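The cleanest approach is to edit kube-flannel.yml before applying it, but an already-deployed DaemonSet can also be patched in place. A sketch, assuming the stock manifest's names (DaemonSet kube-flannel-ds, flannel as the first container, pods labeled app=flannel) and the questioner's host-only NIC enp0s3:

# Append --iface=enp0s3 so flannel binds to the host-only network
# instead of the NAT network, whose IP is identical on every node.
sudo kubectl -n kube-system patch daemonset kube-flannel-ds --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--iface=enp0s3"}]'

# Recreate the flannel pods so they re-register with the new interface.
sudo kubectl -n kube-system delete pods -l app=flannel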

I forgot to mention that I had also modified the kube-proxy manifest to include --cluster-cidr=10.244.0.0/16 and --proxy-mode=iptables, which also seemed to be required.
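On a kubeadm cluster, kube-proxy usually reads those settings from the kube-proxy ConfigMap rather than from command-line flags, so the equivalent change would look roughly like this (a sketch; the exact ConfigMap layout depends on the Kubernetes version):

# In config.conf, set clusterCIDR: "10.244.0.0/16" and mode: "iptables".
sudo kubectl -n kube-system edit configmap kube-proxy

# Recreate the kube-proxy pods so they pick up the new configuration.
sudo kubectl -n kube-system delete pods -l k8s-app=kube-proxy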
