Question:

Kubernetes DNS works on the node where the kube-dns pod is running, but stops working everywhere once the kube-dns pod is scaled up

孙宏扬
2023-03-14
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"87d9d8d7bc5aa35041a8ddfe3d4b367381112f89", GitTreeState:"clean", BuildDate:"2016-12-12T21:10:52Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"87d9d8d7bc5aa35041a8ddfe3d4b367381112f89", GitTreeState:"clean", BuildDate:"2016-12-12T21:10:52Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

AWS, using VPC, all master and 2 nodes under same subnet
RHEL 7.2
Kernel (e.g. uname -a): Linux master.example.com 3.10.0-514.6.2.el7.x86_64 #1 SMP Fri Feb 17 19:21:31 EST 2017 x86_64 x86_64 x86_64 GNU/Linux
Install tools: Kubernetes installed per the Red Hat guideline, using the flannel network
flannel-config.json
{
  "Network": "10.20.0.0/16",
  "SubnetLen": 24,
  "Backend": {
    "Type": "vxlan",
    "VNI": 1
  }
}
Kubernetes Cluster Network : 10.254.0.0/16
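
flanneld reads this network configuration from etcd rather than from the JSON file directly. A sketch of how it is typically loaded, run on the master where etcd listens on localhost:2379 (assuming the etcd v2 etcdctl that ships with this setup):

    # store the flannel network config under the key flanneld reads (FLANNEL_ETCD_KEY)
    etcdctl set /coreos.com/network/config "$(cat flannel-config.json)"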

What happened: we have a Kubernetes cluster with the following setup:

Master: ip-10-52-2-56.ap-northeast-2.compute.internal
Node1: ip-10-52-2-59.ap-northeast-2.compute.internal
Node2: ip-10-52-2-54.ap-northeast-2.compute.internal

Master config details:

[root@master ~]# egrep -v '^#|^$' /etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
[root@master ~]# egrep -v '^#|^$' /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://ip-10-52-2-56.ap-northeast-2.compute.internal:8080"
[root@master ~]# egrep -v '^#|^$' /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd_servers=http://ip-10-52-2-56.ap-northeast-2.compute.internal:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS="--service_account_key_file=/serviceaccount.key"
[root@master ~]# egrep -v '^#|^$' /etc/sysconfig/flanneld
FLANNEL_ETCD="http://ip-10-52-2-56.ap-northeast-2.compute.internal:2379"
FLANNEL_ETCD_KEY="/coreos.com/network"
FLANNEL_OPTIONS="eth0"

Node1 and Node2 config details are the same, as follows:
[root@ip-10-52-2-59 ec2-user]# egrep -v '^$|^#' /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://ip-10-52-2-56.ap-northeast-2.compute.internal:8080"
[root@ip-10-52-2-59 ec2-user]# egrep -v '^#|^$' /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=ip-10-52-2-59.ap-northeast-2.compute.internal"
KUBELET_API_SERVER="--api-servers=http://ip-10-52-2-56.ap-northeast-2.compute.internal:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS="--cluster-dns=10.254.0.2 --cluster-domain=cluster.local"
[root@ip-10-52-2-59 ec2-user]# grep KUBE_PROXY_ARGS /etc/kubernetes/proxy
KUBE_PROXY_ARGS=""
[root@ip-10-52-2-59 ec2-user]# egrep -v '^#|^$' /etc/sysconfig/flanneld
FLANNEL_ETCD="http://ip-10-52-2-56.ap-northeast-2.compute.internal:2379"
FLANNEL_ETCD_KEY="/coreos.com/network"
FLANNEL_OPTIONS="eth0"

kube-dns is deployed with the following configuration:

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    version: v20
  name: kube-dns-v20
  namespace: kube-system
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v20
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        version: v20
    spec:
      containers:
      - args:
        - "--domain=cluster.local"
        - "--kube-master-url=http://ip-10-52-2-56.ap-northeast-2.compute.internal:8080"
        - "--dns-port=10053"
        image: "gcr.io/google_containers/kubedns-amd64:1.9"
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          successThreshold: 1
          timeoutSeconds: 5
        name: kubedns
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        resources:
          limits:
            cpu: 100m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 500Mi
      - args:
        - "--cache-size=1000"
        - "--no-resolv"
        - "--server=127.0.0.1#10053"
        image: "gcr.io/google_containers/kube-dnsmasq-amd64:1.4"
        name: dnsmasq
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - args:
        - "-cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null"
        - "-port=8080"
        - "-quiet"
        image: "gcr.io/google_containers/exechealthz-amd64:1.2"
        name: healthz
        ports:
        - containerPort: 8080
          protocol: TCP
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
      dnsPolicy: Default
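
For reference, a few kubectl checks (not part of the original report) that can confirm the Service, its endpoints, and resolution through the cluster DNS VIP; the busybox pod is assumed to exist in the default namespace:

    kubectl get svc kube-dns --namespace=kube-system -o wide
    kubectl get ep kube-dns --namespace=kube-system
    kubectl get pods --namespace=kube-system -o wide | grep kube-dns
    # resolve the kubernetes service directly through the cluster DNS VIP from a pod
    kubectl exec busybox -- nslookup kubernetes.default.svc.cluster.local 10.254.0.2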

What happened: Kubernetes DNS resolves only on the node where the kube-dns pod is running; once the kube-dns pod is scaled, DNS stops working on every node.

In the capture below, a single DNS pod is running on Node1: the busybox pod on Node1 gets a response, but nslookup from the busybox pod on Node2 gets none.

[Image 1]

Now with two DNS pods running, one on Node1 and one on Node2, you can see that neither node's busybox pod gets a response.

[Image 2]

Some other observations:

The DNS pods mostly get IPs in the 172.17.x range; if I scale beyond 4 replicas, the DNS pods that land on Node2 get IPs in the 10.20.x range.

The interesting part: pods on Node2 start with 10.20.x IPs, while pods on Node1 start with 172.17.x IPs.
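
This symptom usually means docker on Node1 is still using its default 172.17.0.0/16 bridge rather than the subnet flanneld leased from 10.20.0.0/16. A few checks that may help confirm this (not from the original post; file paths assume the stock RHEL 7 flannel/docker packaging):

    cat /run/flannel/subnet.env             # per-node lease, e.g. FLANNEL_SUBNET=10.20.48.1/24
    ip -4 addr show docker0                 # should fall inside FLANNEL_SUBNET, not 172.17.0.0/16
    systemctl cat docker | grep -i flannel  # drop-in injecting the flannel bridge options, if the packaging provides one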

iptables-save output from both nodes:

[root@ip-10-52-2-54 ec2-user]# iptables-save | grep DNAT
-A KUBE-SEP-3M72SO5X7J6X6TX6 -p tcp -m comment --comment "default/prometheus:prometheus" -m tcp -j DNAT --to-destination 172.17.0.8:9090
-A KUBE-SEP-7SLC3EUJVX23N2X4 -p tcp -m comment --comment "default/zookeeper:" -m tcp -j DNAT --to-destination 172.17.0.4:2181
-A KUBE-SEP-D4NTKJJ3YXXGJARZ -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.10:53
-A KUBE-SEP-EN24FH2N7PLAR6AW -p tcp -m comment --comment "default/kafkacluster:" -m tcp -j DNAT --to-destination 172.17.0.2:9092
-A KUBE-SEP-LCDAFU4UXQHVDQT6 -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-LCDAFU4UXQHVDQT6 --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.52.2.56:6443
-A KUBE-SEP-MX63IHIHS5ZB4347 -p tcp -m comment --comment "default/nodejs4promethus-scraping:" -m tcp -j DNAT --to-destination 172.17.0.6:3000
-A KUBE-SEP-NOI5B75N7ZJAIPJR -p tcp -m comment --comment "default/mongodb-prometheus-exporter:" -m tcp -j DNAT --to-destination 172.17.0.12:9001
-A KUBE-SEP-O6UDQQL3MHGYTSH5 -p tcp -m comment --comment "default/producer:" -m tcp -j DNAT --to-destination 172.17.0.3:8125
-A KUBE-SEP-QO4SWWCV7NMMGPBN -p tcp -m comment --comment "default/kafka-prometheus-jmx:" -m tcp -j DNAT --to-destination 172.17.0.2:7071
-A KUBE-SEP-SVCEI2UVU246H7MW -p tcp -m comment --comment "default/mongodb:" -m tcp -j DNAT --to-destination 172.17.0.12:27017
-A KUBE-SEP-Y4XH6F2KQCY7WQBG -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.10:53
-A KUBE-SEP-ZXXWX3EF7T3W7UNY -p tcp -m comment --comment "default/grafana:" -m tcp -j DNAT --to-destination 172.17.0.9:3000

[root@ip-10-52-2-54 ec2-user]# iptables-save | grep 53
-A KUBE-SEP-D4NTKJJ3YXXGJARZ -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.10:53
-A KUBE-SEP-Y4XH6F2KQCY7WQBG -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.10:53
-A KUBE-SERVICES -d 10.254.0.2/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.254.0.2/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4

---------

[root@ip-10-52-2-59 ec2-user]# iptables-save | grep DNAT
-A KUBE-SEP-3M72SO5X7J6X6TX6 -p tcp -m comment --comment "default/prometheus:prometheus" -m tcp -j DNAT --to-destination 172.17.0.8:9090
-A KUBE-SEP-7SLC3EUJVX23N2X4 -p tcp -m comment --comment "default/zookeeper:" -m tcp -j DNAT --to-destination 172.17.0.4:2181
-A KUBE-SEP-D4NTKJJ3YXXGJARZ -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.10:53
-A KUBE-SEP-EN24FH2N7PLAR6AW -p tcp -m comment --comment "default/kafkacluster:" -m tcp -j DNAT --to-destination 172.17.0.2:9092
-A KUBE-SEP-LCDAFU4UXQHVDQT6 -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-LCDAFU4UXQHVDQT6 --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.52.2.56:6443
-A KUBE-SEP-MX63IHIHS5ZB4347 -p tcp -m comment --comment "default/nodejs4promethus-scraping:" -m tcp -j DNAT --to-destination 172.17.0.6:3000
-A KUBE-SEP-NOI5B75N7ZJAIPJR -p tcp -m comment --comment "default/mongodb-prometheus-exporter:" -m tcp -j DNAT --to-destination 172.17.0.12:9001
-A KUBE-SEP-O6UDQQL3MHGYTSH5 -p tcp -m comment --comment "default/producer:" -m tcp -j DNAT --to-destination 172.17.0.3:8125
-A KUBE-SEP-QO4SWWCV7NMMGPBN -p tcp -m comment --comment "default/kafka-prometheus-jmx:" -m tcp -j DNAT --to-destination 172.17.0.2:7071
-A KUBE-SEP-SVCEI2UVU246H7MW -p tcp -m comment --comment "default/mongodb:" -m tcp -j DNAT --to-destination 172.17.0.12:27017
-A KUBE-SEP-Y4XH6F2KQCY7WQBG -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.10:53
-A KUBE-SEP-ZXXWX3EF7T3W7UNY -p tcp -m comment --comment "default/grafana:" -m tcp -j DNAT --to-destination 172.17.0.9:3000

[root@ip-10-52-2-59 ec2-user]# iptables-save | grep 53
-A KUBE-SEP-D4NTKJJ3YXXGJARZ -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.10:53
-A KUBE-SEP-Y4XH6F2KQCY7WQBG -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.10:53
-A KUBE-SERVICES -d 10.254.0.2/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.254.0.2/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
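
One way to narrow this down further (a suggestion, not something the original post did) is to query the cluster DNS from each node, both through the service VIP and directly against the pod IP that the DNAT rules above point at, assuming nslookup (bind-utils) is installed on the nodes:

    nslookup kubernetes.default.svc.cluster.local 10.254.0.2     # via the service VIP / iptables DNAT
    nslookup kubernetes.default.svc.cluster.local 172.17.0.10    # directly against the kube-dns pod IP
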
Restarted the services below on both nodes:

    for SERVICES in flanneld docker kube-proxy.service kubelet.service; do
    systemctl stop $SERVICES
    systemctl start $SERVICES
    done
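
For what it's worth, the restart order in this loop matters: flanneld needs to be up (and to have written its subnet lease) before docker starts, or docker keeps the default 172.17.0.1 bridge. A quick check after the restart (hypothetical, not from the original post):

    ip -4 addr show flannel.1    # should exist on every node, with an address from 10.20.0.0/16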

Node1: ifconfig

    [root@ip-10-52-2-59 ec2-user]# ifconfig
    docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
            inet6 fe80::42:2dff:fe01:c0b0  prefixlen 64  scopeid 0x20<link>
            ether 02:42:2d:01:c0:b0  txqueuelen 0  (Ethernet)
            RX packets 1718522  bytes 154898857 (147.7 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 1704874  bytes 2186333188 (2.0 GiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001
            inet 10.52.2.59  netmask 255.255.255.224  broadcast 10.52.2.63
            inet6 fe80::91:9aff:fe7e:20a7  prefixlen 64  scopeid 0x20<link>
            ether 02:91:9a:7e:20:a7  txqueuelen 1000  (Ethernet)
            RX packets 2604083  bytes 2208387383 (2.0 GiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 1974861  bytes 593497458 (566.0 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 1  (Local Loopback)
            RX packets 80  bytes 7140 (6.9 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 80  bytes 7140 (6.9 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    veth01225a6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::1034:a8ff:fe79:aba3  prefixlen 64  scopeid 0x20<link>
            ether 12:34:a8:79:ab:a3  txqueuelen 0  (Ethernet)
            RX packets 1017  bytes 100422 (98.0 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 1869  bytes 145519 (142.1 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    veth3079eb6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::90c2:62ff:fe84:fb53  prefixlen 64  scopeid 0x20<link>
            ether 92:c2:62:84:fb:53  txqueuelen 0  (Ethernet)
            RX packets 4891  bytes 714845 (698.0 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 5127  bytes 829516 (810.0 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    veth3be8c1f: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::c8a5:64ff:fe15:be95  prefixlen 64  scopeid 0x20<link>
            ether ca:a5:64:15:be:95  txqueuelen 0  (Ethernet)
            RX packets 210  bytes 27750 (27.0 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 307  bytes 35118 (34.2 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    veth559a1ab: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::100b:23ff:fe60:3752  prefixlen 64  scopeid 0x20<link>
            ether 12:0b:23:60:37:52  txqueuelen 0  (Ethernet)
            RX packets 14926  bytes 1931413 (1.8 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 14375  bytes 19695295 (18.7 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    veth5c05729: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::cca1:4ff:fe5d:14cd  prefixlen 64  scopeid 0x20<link>
            ether ce:a1:04:5d:14:cd  txqueuelen 0  (Ethernet)
            RX packets 455  bytes 797963 (779.2 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 681  bytes 83904 (81.9 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    veth85ba9a9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::74ca:90ff:feae:6f4d  prefixlen 64  scopeid 0x20<link>
            ether 76:ca:90:ae:6f:4d  txqueuelen 0  (Ethernet)
            RX packets 19  bytes 1404 (1.3 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 66  bytes 4568 (4.4 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    vetha069d16: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::accd:eeff:fe21:6eda  prefixlen 64  scopeid 0x20<link>
            ether ae:cd:ee:21:6e:da  txqueuelen 0  (Ethernet)
            RX packets 3566  bytes 7353788 (7.0 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 2560  bytes 278400 (271.8 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    vetha58e4af: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::6cd2:16ff:fee2:aa59  prefixlen 64  scopeid 0x20<link>
            ether 6e:d2:16:e2:aa:59  txqueuelen 0  (Ethernet)
            RX packets 779  bytes 62585 (61.1 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 1014  bytes 109417 (106.8 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    vethb7bbef5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::5ce6:6fff:fe31:c3e  prefixlen 64  scopeid 0x20<link>
            ether 5e:e6:6f:31:0c:3e  txqueuelen 0  (Ethernet)
            RX packets 589  bytes 55654 (54.3 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 573  bytes 74014 (72.2 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    vethbda3e0a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::9c0a:f2ff:fea5:23a2  prefixlen 64  scopeid 0x20<link>
            ether 9e:0a:f2:a5:23:a2  txqueuelen 0  (Ethernet)
            RX packets 490  bytes 47064 (45.9 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 645  bytes 77464 (75.6 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    vethfc65cc3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::b854:dcff:feb4:f4ba  prefixlen 64  scopeid 0x20<link>
            ether ba:54:dc:b4:f4:ba  txqueuelen 0  (Ethernet)
            RX packets 503  bytes 508251 (496.3 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 565  bytes 73145 (71.4 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


Node2: ifconfig

    [root@ip-10-52-2-54 ec2-user]# ifconfig
    docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8951
            inet 10.20.48.1  netmask 255.255.255.0  broadcast 0.0.0.0
            inet6 fe80::42:87ff:fe39:2ef0  prefixlen 64  scopeid 0x20<link>
            ether 02:42:87:39:2e:f0  txqueuelen 0  (Ethernet)
            RX packets 269123  bytes 22165441 (21.1 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 419870  bytes 149980299 (143.0 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001
            inet 10.52.2.54  netmask 255.255.255.224  broadcast 10.52.2.63
            inet6 fe80::9a:d8ff:fed3:4cf5  prefixlen 64  scopeid 0x20<link>
            ether 02:9a:d8:d3:4c:f5  txqueuelen 1000  (Ethernet)
            RX packets 1517512  bytes 938147149 (894.6 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 1425156  bytes 1265738472 (1.1 GiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8951
            inet 10.20.48.0  netmask 255.255.0.0  broadcast 0.0.0.0
            ether 06:69:bf:c6:8a:12  txqueuelen 0  (Ethernet)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 0  bytes 0 (0.0 B)
            TX errors 0  dropped 1 overruns 0  carrier 0  collisions 0

    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 1  (Local Loopback)
            RX packets 106  bytes 8792 (8.5 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 106  bytes 8792 (8.5 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    veth9f05785: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8951
            inet6 fe80::d81e:d3ff:fe5e:bade  prefixlen 64  scopeid 0x20<link>
            ether da:1e:d3:5e:ba:de  txqueuelen 0  (Ethernet)
            RX packets 31  bytes 2458 (2.4 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 37  bytes 4454 (4.3 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

I am a bit confused by the difference between the two ifconfig outputs: Node1 has no flannel.1 interface and its docker0 sits on 172.17.0.1, while Node2 has flannel.1 and its docker0 sits on 10.20.48.1.

2 answers

施文彬
2023-03-14

It looks like flannel is not running correctly on node2. You should check its logs and configuration, as Pawan already pointed out.

Also, you seem to be using an old Kubernetes release. The current version is 1.5, and I would recommend moving to it.

Bare-metal setup guides found on the web tend to go stale very quickly, even the official Kubernetes ones.

Instead of following those guides, I suggest using a (semi-)automated deployment solution such as kargo (Ansible-based) or kops (AWS-only, written in Go). If you do not want an automated solution, you can try kubeadm; it is currently in alpha, but it may already be good enough for your needs.
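
For context, the kubeadm flow the answer refers to looks roughly like this (kubeadm was alpha at the time and its flags have changed between releases, so treat this as a sketch and check kubeadm init --help for your version; <token> and <master-ip> are placeholders):

    kubeadm init                               # on the master; prints a join token
    kubeadm join --token <token> <master-ip>   # on each node, using the printed token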

孟浩然
2023-03-14

Check the flanneld process on node-1; the flannel.1 interface is missing on node-1. Check /var/log/messages and compare the flannel configuration file /etc/sysconfig/flanneld on both nodes.
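
A possible set of commands for the checks suggested above, run on node-1 (log locations assume the RHEL 7 packages; adjust if yours differ):

    systemctl status flanneld --no-pager
    journalctl -u flanneld --no-pager -n 50
    grep -i flannel /var/log/messages | tail -n 20
    cat /etc/sysconfig/flanneld    # compare with node-2's copy shown earlier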
