I'm running vanilla EKS Kubernetes at version 1.12.
I have CNI Genie installed to allow a custom choice of the CNI that pods use when they launch, and I have the standard Calico CNI setup installed.
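For reference, CNI Genie picks the CNI per pod through a pod annotation; a minimal sketch of how a pod gets pinned to Calico (the pod name and image below are illustrative, not my actual deployment):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hello-node1-test            # illustrative name only
  annotations:
    cni: "calico"                   # CNI Genie: hand this pod's networking to Calico
spec:
  containers:
  - name: hello-node
    image: k8s.gcr.io/echoserver:1.4   # placeholder test image that listens on 8080
    ports:
    - containerPort: 8080
EOF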
However, if a pod lands on a different worker node, the networking between the pods does not work inside the cluster.
I've checked the routing tables that Calico auto-configures on the worker nodes, and they look logical to me.
Here is my wide pod listing across all namespaces:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
default hello-node1-865588ccd7-64p5x 1/1 Running 0 31m 192.168.106.129 ip-10-0-2-31.eu-west-2.compute.internal <none>
default hello-node2-dc7bbcb74-gqpwq 1/1 Running 0 17m 192.168.25.193 ip-10-0-3-222.eu-west-2.compute.internal <none>
kube-system aws-node-cm2dp 1/1 Running 0 26m 10.0.3.222 ip-10-0-3-222.eu-west-2.compute.internal <none>
kube-system aws-node-vvvww 1/1 Running 0 31m 10.0.2.31 ip-10-0-2-31.eu-west-2.compute.internal <none>
kube-system calico-kube-controllers-56bfccb786-fc2j4 1/1 Running 0 30m 10.0.2.41 ip-10-0-2-31.eu-west-2.compute.internal <none>
kube-system calico-node-flmnl 1/1 Running 0 31m 10.0.2.31 ip-10-0-2-31.eu-west-2.compute.internal <none>
kube-system calico-node-hcmqd 1/1 Running 0 26m 10.0.3.222 ip-10-0-3-222.eu-west-2.compute.internal <none>
kube-system coredns-6c64c9f456-g2h9k 1/1 Running 0 30m 10.0.2.204 ip-10-0-2-31.eu-west-2.compute.internal <none>
kube-system coredns-6c64c9f456-g5lhl 1/1 Running 0 30m 10.0.2.200 ip-10-0-2-31.eu-west-2.compute.internal <none>
kube-system genie-plugin-hspts 1/1 Running 0 26m 10.0.3.222 ip-10-0-3-222.eu-west-2.compute.internal <none>
kube-system genie-plugin-vqd2d 1/1 Running 0 31m 10.0.2.31 ip-10-0-2-31.eu-west-2.compute.internal <none>
kube-system kube-proxy-jm7f7 1/1 Running 0 26m 10.0.3.222 ip-10-0-3-222.eu-west-2.compute.internal <none>
kube-system kube-proxy-nnp76 1/1 Running 0 31m 10.0.2.31 ip-10-0-2-31.eu-west-2.compute.internal <none>
And my services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node1 ClusterIP 172.20.90.83 <none> 8081/TCP 43m
hello-node2 ClusterIP 172.20.242.22 <none> 8082/TCP 43m
Things stop working when I use the Calico CNI as above.
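A quick way to reproduce it, as a sketch (the pod name and IPs are taken from the listing above, and it assumes the image ships curl):

# from inside the hello-node1 pod on ip-10-0-2-31, try to reach hello-node2 on ip-10-0-3-222
kubectl exec -it hello-node1-865588ccd7-64p5x -- curl -m 5 http://hello-node2:8082      # via the ClusterIP service
kubectl exec -it hello-node1-865588ccd7-64p5x -- curl -m 5 http://192.168.25.193:8080   # straight to the pod IP
# both hang and time out whenever the two pods sit on different workers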
I only have two EKS worker hosts in this test cluster. Here are the routes on each:
K8s worker 1 routes
[ec2-user@ip-10-0-3-222 ~]$ ip route
default via 10.0.3.1 dev eth0
10.0.3.0/24 dev eth0 proto kernel scope link src 10.0.3.222
169.254.169.254 dev eth0
blackhole 192.168.25.192/26 proto bird
192.168.25.193 dev calia0da7d91dc2 scope link
192.168.106.128/26 via 10.0.2.31 dev tunl0 proto bird onlink
K8s worker 2 routes
[ec2-user@ip-10-0-2-31 ~]$ ip route
default via 10.0.2.1 dev eth0
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.31
10.0.2.41 dev enif4cf9019f11 scope link
10.0.2.200 dev eni412af1a0e55 scope link
10.0.2.204 dev eni04260ebbbe1 scope link
169.254.169.254 dev eth0
192.168.25.192/26 via 10.0.3.222 dev tunl0 proto bird onlink
blackhole 192.168.106.128/26 proto bird
192.168.106.129 dev cali19da7817849 scope link
curl http://hello-node2:8082
(or a ping to the calico-assigned IP address of the hello-node2 pod) fails between the nodes.
EDIT:
To test this further, I ran tcpdump on the host running the hello-node2 pod, capturing on port 8080 (the container listens on this port):
[ec2-user@ip-10-0-3-222 ~]$ sudo tcpdump -vv -x -X -i tunl0 'port 8080'
tcpdump: listening on tunl0, link-type RAW (Raw IP), capture size 262144 bytes
14:32:42.859238 IP (tos 0x0, ttl 254, id 63813, offset 0, flags [DF], proto TCP (6), length 60)
10.0.2.31.29192 > 192.168.25.193.webcache: Flags [S], cksum 0xf932 (correct), seq 3206263598, win 28000, options [mss 1400,sackOK,TS val 2836614698 ecr 0,nop,wscale 7], length 0
0x0000: 4500 003c f945 4000 fe06 9ced 0a00 021f E..<.E@.........
0x0010: c0a8 19c1 7208 1f90 bf1b b32e 0000 0000 ....r...........
0x0020: a002 6d60 f932 0000 0204 0578 0402 080a ..m`.2.....x....
0x0030: a913 4e2a 0000 0000 0103 0307 ..N*........
14:32:43.870168 IP (tos 0x0, ttl 254, id 63814, offset 0, flags [DF], proto TCP (6), length 60)
10.0.2.31.29192 > 192.168.25.193.webcache: Flags [S], cksum 0xf53f (correct), seq 3206263598, win 28000, options [mss 1400,sackOK,TS val 2836615709 ecr 0,nop,wscale 7], length 0
0x0000: 4500 003c f946 4000 fe06 9cec 0a00 021f E..<.F@.........
0x0010: c0a8 19c1 7208 1f90 bf1b b32e 0000 0000 ....r...........
0x0020: a002 6d60 f53f 0000 0204 0578 0402 080a ..m`.?.....x....
0x0030: a913 521d 0000 0000 0103 0307 ..R.........
^C
2 packets captured
2 packets received by filter
0 packets dropped by kernel
[ec2-user@ip-10-0-3-222 ~]$ ifconfig
calia0da7d91dc2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1440
inet6 fe80::ecee:eeff:feee:eeee prefixlen 64 scopeid 0x20<link>
ether ee:ee:ee:ee:ee:ee txqueuelen 0 (Ethernet)
RX packets 84 bytes 5088 (4.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
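The SYNs show up on tunl0, yet the host side of the pod's veth reports TX 0 packets, which suggests nothing is being forwarded on to the pod. A couple of follow-up checks that could narrow this down (interface name taken from the output above):

# does the SYN ever make it onto the pod's veth?
sudo tcpdump -ni calia0da7d91dc2 'port 8080'
# is reverse-path filtering or the FORWARD chain dropping it between tunl0 and the veth?
sysctl net.ipv4.conf.tunl0.rp_filter net.ipv4.conf.all.rp_filter
sudo iptables -L FORWARD -n -v | head -20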
What is preventing the networking between the hosts from working? Am I missing something obvious?
EDIT 2 - info for Arjun Pandey - parjun8840
Here is some more info about my Calico configuration:
NOTE: the workloadendpoints appear to be non-existent...
me@mine ~ aws-vault exec my-vault-entry -- kubectl get IPPool --all-namespaces
NAME AGE
default-ipv4-ippool 1d
me@mine ~ aws-vault exec my-vault-entry -- kubectl get IPPool default-ipv4-ippool -o yaml
apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
annotations:
projectcalico.org/metadata: '{"uid":"41bd2c82-d576-11e9-b1ef-121f3d7b4d4e","creationTimestamp":"2019-09-12T15:59:09Z"}'
creationTimestamp: "2019-09-12T15:59:09Z"
generation: 1
name: default-ipv4-ippool
resourceVersion: "500448"
selfLink: /apis/crd.projectcalico.org/v1/ippools/default-ipv4-ippool
uid: 41bd2c82-d576-11e9-b1ef-121f3d7b4d4e
spec:
blockSize: 26
cidr: 192.168.0.0/16
ipipMode: CrossSubnet
natOutgoing: true
nodeSelector: all()
vxlanMode: Never
me@mine ~ aws-vault exec my-vault-entry -- calicoctl get nodes
NAME
ip-10-254-109-184.ec2.internal
ip-10-254-109-237.ec2.internal
ip-10-254-111-147.ec2.internal
me@mine ~ aws-vault exec my-vault-entry -- calicoctl get workloadendpoints
WORKLOAD NODE NETWORKS INTERFACE
me@mine ~
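calicoctl only reports what it finds in whichever datastore it is configured for, so the empty workloadendpoints list could also just mean calicoctl is not pointed at the datastore the CNI plugin writes to. A sketch of ruling that out, assuming the Kubernetes API datastore:

DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl get workloadendpoints --all-namespaces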
Host ip a:
[ec2-user@ip-10-254-109-184 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 02:1b:79:d1:c5:bc brd ff:ff:ff:ff:ff:ff
inet 10.254.109.184/26 brd 10.254.109.191 scope global dynamic eth0
valid_lft 2881sec preferred_lft 2881sec
inet6 fe80::1b:79ff:fed1:c5bc/64 scope link
valid_lft forever preferred_lft forever
3: eni808caba7453@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether c2:be:80:d4:6a:f3 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::c0be:80ff:fed4:6af3/64 scope link
valid_lft forever preferred_lft forever
5: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 192.168.29.128/32 brd 192.168.29.128 scope global tunl0
valid_lft forever preferred_lft forever
6: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 02:12:58:bb:c6:1a brd ff:ff:ff:ff:ff:ff
inet 10.254.109.137/26 brd 10.254.109.191 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::12:58ff:febb:c61a/64 scope link
valid_lft forever preferred_lft forever
7: enia6f1918d9e2@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 96:f5:36:53:e9:55 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::94f5:36ff:fe53:e955/64 scope link
valid_lft forever preferred_lft forever
8: enia32d23ac2d1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 36:5e:34:a7:82:30 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::345e:34ff:fea7:8230/64 scope link
valid_lft forever preferred_lft forever
9: cali5e7dde1e39e@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
[ec2-user@ip-10-254-109-184 ~]$
nsenter on the test container pid to get ip a info:
[ec2-user@ip-10-254-109-184 ~]$ sudo nsenter -t 15715 -n ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
link/ether 9a:6d:db:06:74:cb brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.29.129/32 scope global eth0
valid_lft forever preferred_lft forever
I'm not sure about the exact solution right now (I have not tested Calico on AWS myself; normally I use amazon-vpc-cni-k8s on AWS and Calico on physical clusters), but below are some quick things we can look at.
Calico AWS requirements - https://docs.projectcalico.org/v2.3/reference/public-cloud/AWS
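On AWS the usual suspects for broken pod-to-pod traffic over IPIP are the EC2 source/destination check and security groups that do not allow IP-in-IP (protocol 4) between the workers; an illustrative sketch of addressing both (the instance and security-group IDs are placeholders):

# disable the source/destination check on each worker instance
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check
# allow IP-in-IP (protocol number 4) between members of the workers' security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol 4 --source-group sg-0123456789abcdef0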
kubectl get IPPool --all-namespaces
NAME AGE
default-ipv4-ippool 15d
kubectl get IPPool default-ipv4-ippool -o yaml
~ calicoctl get nodes
NAME
node1
node2
node3
node4
~ calicoctl get workloadendpoints
NODE ORCHESTRATOR WORKLOAD NAME
node2 k8s default.myapp-569c54f85-xtktk eth0
node1 k8s kube-system.calico-kube-controllers-5cbcccc885-b9x8s eth0
node1 k8s kube-system.coredns-fb8b8dcde-2zpw8 eth0
node1 k8s kube-system.coredns-fb8b8dcfg-hc6zv eth0
Also, if we can get the details of the container network: nsenter -t pid -n ip a
And the same for the host: ip a
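For completeness, a sketch of getting that pid, assuming a Docker container runtime (the container ID is a placeholder, taken from docker ps on the worker):

PID=$(docker inspect -f '{{.State.Pid}}' <container-id>)   # placeholder container ID
sudo nsenter -t "$PID" -n ip a
sudo nsenter -t "$PID" -n ip route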