Question:

Kubernetes NodePort only works on the node hosting the Pod

宗沛
2023-03-14

I have just started building my own Kubernetes cluster out of a few Raspberry Pi boards, following Alex Ellis's guide. The problem: my NodePort only works on the node that actually runs the pod's container. There is no redirection from the nodes that are not running a container.

Service & Deployment

apiVersion: v1
kind: Service
metadata:
  name: markdownrender
  labels:
    app: markdownrender
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
      nodePort: 31118
  selector:
    app: markdownrender
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: markdownrender
  labels:
    app: markdownrender
spec:
  replicas: 2
  selector:
    matchLabels:
      app: markdownrender
  template:
    metadata:
      labels:
        app: markdownrender
    spec:
      containers:
      - name: markdownrender
        image: functions/markdownrender:latest-armhf
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
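For reference, `externalTrafficPolicy: Cluster` (the Kubernetes default, and what the Service above sets explicitly) is precisely the mode in which every node is supposed to forward NodePort traffic to a ready endpoint anywhere in the cluster; only with `Local` would nodes that host no pod refuse the connection. A minimal fragment showing the two choices:

```yaml
spec:
  type: NodePort
  # Cluster (default): any node forwards to any ready endpoint; the client
  # source address is SNAT'ed on the way.
  # Local: only nodes hosting a pod answer; client IP is preserved.
  externalTrafficPolicy: Cluster
```

So the manifest itself already asks for cluster-wide forwarding, which points the suspicion at the network layer rather than the Service spec.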

kubectl get services, deployments and pods (-o wide)

kubernetes       ClusterIP   10.96.0.1     <none>        443/TCP          111m
markdownrender   NodePort    10.104.5.83   <none>        8080:31118/TCP   102m
markdownrender   2/2     2            2           101m
markdownrender-f9744b577-pcplc   1/1     Running   1          90m   10.244.1.2   kube-node233   <none>           <none>
markdownrender-f9744b577-x4j4k   1/1     Running   1          90m   10.244.3.2   kube-node232   <none>           <none>

`curl http://127.0.0.1:31118 -d "# test" --max-time 1` always returns a connection timeout on any node other than the pod hosts kube-node233 and kube-node232.
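A minimal sketch of the probe matrix described above (the node IPs other than the master's 192.168.2.230 are assumptions inferred from that address; substitute your own). It only prints the curl commands so the matrix is easy to review before running on each host:

```shell
# Node IPs are assumptions based on the master's 192.168.2.230; adjust to your cluster.
NODES="192.168.2.230 192.168.2.231 192.168.2.232 192.168.2.233"
NODE_PORT=31118
for node in $NODES; do
  # Print each probe rather than running it; paste the lines on any host.
  echo "curl --max-time 1 -d '# test' http://${node}:${NODE_PORT}/"
done
```

With `externalTrafficPolicy: Cluster`, every cell of this matrix should answer, regardless of which node hosts a pod.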

sudo iptables-save (on the master node, 230)

# Generated by xtables-save v1.8.2 on Sun Jan 19 16:05:19 2020
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-TPAZEM2ZI6GIP4H4 - [0:0]
:KUBE-SVC-QXMBXH4RFEQTDMUZ - [0:0]
:KUBE-SEP-7S77XOJGOAF6ON4P - [0:0]
:KUBE-SEP-GE6BLW5CUF74UDN2 - [0:0]
:KUBE-SEP-IRMT6RY5EEEBXDAY - [0:0]
:KUBE-SEP-232DQYSHL5HNRYWJ - [0:0]
:KUBE-SEP-2Z3537XSN3RJRU3M - [0:0]
:KUBE-SEP-A4UL7OUXQPUR7Y7Q - [0:0]
:KUBE-SEP-275NWNNANOEIGYHG - [0:0]
:KUBE-SEP-CPH3WXMLRJ2BZFXW - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE --random-fully
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.104.5.83/32 -p tcp -m comment --comment "default/markdownrender: cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.104.5.83/32 -p tcp -m comment --comment "default/markdownrender: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-QXMBXH4RFEQTDMUZ
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/markdownrender:" -m tcp --dport 31118 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/markdownrender:" -m tcp --dport 31118 -j KUBE-SVC-QXMBXH4RFEQTDMUZ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-IRMT6RY5EEEBXDAY
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-232DQYSHL5HNRYWJ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-2Z3537XSN3RJRU3M
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-A4UL7OUXQPUR7Y7Q
-A KUBE-SVC-JD5MR3NA4I4DYORP -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-275NWNNANOEIGYHG
-A KUBE-SVC-JD5MR3NA4I4DYORP -j KUBE-SEP-CPH3WXMLRJ2BZFXW
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-TPAZEM2ZI6GIP4H4
-A KUBE-SEP-TPAZEM2ZI6GIP4H4 -s 192.168.2.230/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-TPAZEM2ZI6GIP4H4 -p tcp -m tcp -j DNAT --to-destination 192.168.2.230:6443
-A KUBE-SVC-QXMBXH4RFEQTDMUZ -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-7S77XOJGOAF6ON4P
-A KUBE-SVC-QXMBXH4RFEQTDMUZ -j KUBE-SEP-GE6BLW5CUF74UDN2
-A KUBE-SEP-7S77XOJGOAF6ON4P -s 10.244.1.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-7S77XOJGOAF6ON4P -p tcp -m tcp -j DNAT --to-destination 10.244.1.3:8080
-A KUBE-SEP-GE6BLW5CUF74UDN2 -s 10.244.3.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-GE6BLW5CUF74UDN2 -p tcp -m tcp -j DNAT --to-destination 10.244.3.3:8080
-A KUBE-SEP-IRMT6RY5EEEBXDAY -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-IRMT6RY5EEEBXDAY -p udp -m udp -j DNAT --to-destination 10.244.0.6:53
-A KUBE-SEP-232DQYSHL5HNRYWJ -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-232DQYSHL5HNRYWJ -p udp -m udp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-2Z3537XSN3RJRU3M -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-2Z3537XSN3RJRU3M -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:53
-A KUBE-SEP-A4UL7OUXQPUR7Y7Q -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-A4UL7OUXQPUR7Y7Q -p tcp -m tcp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-275NWNNANOEIGYHG -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-275NWNNANOEIGYHG -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:9153
-A KUBE-SEP-CPH3WXMLRJ2BZFXW -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-CPH3WXMLRJ2BZFXW -p tcp -m tcp -j DNAT --to-destination 10.244.0.7:9153
COMMIT
# Completed on Sun Jan 19 16:05:19 2020
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:05:19 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-FORWARD - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Sun Jan 19 16:05:20 2020
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:05:20 2020
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Sun Jan 19 16:05:20 2020
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them
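One way to read a dump like the one above is to check that kube-proxy actually programmed the NodePort chain and the per-endpoint DNAT rules; if they are present on a node that still times out, the iptables side is fine and the problem is more likely in forwarding between nodes. A sketch against an excerpt of the dump (on a live node you would pipe `sudo iptables-save -t nat` into the greps instead of the here-doc):

```shell
# Excerpt of the nat-table rules from the dump above.
rules=$(cat <<'EOF'
-A KUBE-NODEPORTS -p tcp -m tcp --dport 31118 -j KUBE-SVC-QXMBXH4RFEQTDMUZ
-A KUBE-SEP-7S77XOJGOAF6ON4P -p tcp -m tcp -j DNAT --to-destination 10.244.1.3:8080
-A KUBE-SEP-GE6BLW5CUF74UDN2 -p tcp -m tcp -j DNAT --to-destination 10.244.3.3:8080
EOF
)
# Count the pod endpoints the service chain can DNAT to.
echo "$rules" | grep -c 'DNAT --to-destination 10.244'     # → 2
# Confirm the NodePort entry rule exists ('--' ends grep's own options).
echo "$rules" | grep -q -- '--dport 31118' && echo "nodeport rule present"
```

Both checks pass against the dumps from all three nodes here, so kube-proxy's rules look complete everywhere.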

sudo iptables-save (node 231, which runs no container)

# Generated by xtables-save v1.8.2 on Sun Jan 19 16:08:01 2020
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-TPAZEM2ZI6GIP4H4 - [0:0]
:KUBE-SVC-QXMBXH4RFEQTDMUZ - [0:0]
:KUBE-SEP-7S77XOJGOAF6ON4P - [0:0]
:KUBE-SEP-GE6BLW5CUF74UDN2 - [0:0]
:KUBE-SEP-IRMT6RY5EEEBXDAY - [0:0]
:KUBE-SEP-232DQYSHL5HNRYWJ - [0:0]
:KUBE-SEP-2Z3537XSN3RJRU3M - [0:0]
:KUBE-SEP-A4UL7OUXQPUR7Y7Q - [0:0]
:KUBE-SEP-275NWNNANOEIGYHG - [0:0]
:KUBE-SEP-CPH3WXMLRJ2BZFXW - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE --random-fully
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.104.5.83/32 -p tcp -m comment --comment "default/markdownrender: cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.104.5.83/32 -p tcp -m comment --comment "default/markdownrender: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-QXMBXH4RFEQTDMUZ
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/markdownrender:" -m tcp --dport 31118 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/markdownrender:" -m tcp --dport 31118 -j KUBE-SVC-QXMBXH4RFEQTDMUZ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-IRMT6RY5EEEBXDAY
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-232DQYSHL5HNRYWJ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-2Z3537XSN3RJRU3M
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-A4UL7OUXQPUR7Y7Q
-A KUBE-SVC-JD5MR3NA4I4DYORP -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-275NWNNANOEIGYHG
-A KUBE-SVC-JD5MR3NA4I4DYORP -j KUBE-SEP-CPH3WXMLRJ2BZFXW
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-TPAZEM2ZI6GIP4H4
-A KUBE-SEP-TPAZEM2ZI6GIP4H4 -s 192.168.2.230/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-TPAZEM2ZI6GIP4H4 -p tcp -m tcp -j DNAT --to-destination 192.168.2.230:6443
-A KUBE-SVC-QXMBXH4RFEQTDMUZ -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-7S77XOJGOAF6ON4P
-A KUBE-SVC-QXMBXH4RFEQTDMUZ -j KUBE-SEP-GE6BLW5CUF74UDN2
-A KUBE-SEP-7S77XOJGOAF6ON4P -s 10.244.1.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-7S77XOJGOAF6ON4P -p tcp -m tcp -j DNAT --to-destination 10.244.1.3:8080
-A KUBE-SEP-GE6BLW5CUF74UDN2 -s 10.244.3.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-GE6BLW5CUF74UDN2 -p tcp -m tcp -j DNAT --to-destination 10.244.3.3:8080
-A KUBE-SEP-IRMT6RY5EEEBXDAY -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-IRMT6RY5EEEBXDAY -p udp -m udp -j DNAT --to-destination 10.244.0.6:53
-A KUBE-SEP-232DQYSHL5HNRYWJ -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-232DQYSHL5HNRYWJ -p udp -m udp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-2Z3537XSN3RJRU3M -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-2Z3537XSN3RJRU3M -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:53
-A KUBE-SEP-A4UL7OUXQPUR7Y7Q -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-A4UL7OUXQPUR7Y7Q -p tcp -m tcp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-275NWNNANOEIGYHG -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-275NWNNANOEIGYHG -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:9153
-A KUBE-SEP-CPH3WXMLRJ2BZFXW -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-CPH3WXMLRJ2BZFXW -p tcp -m tcp -j DNAT --to-destination 10.244.0.7:9153
COMMIT
# Completed on Sun Jan 19 16:08:01 2020
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:08:01 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-FORWARD - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Sun Jan 19 16:08:01 2020
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:08:01 2020
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Sun Jan 19 16:08:01 2020
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them

sudo iptables-save (node 232, which runs one of the pods)

# Generated by xtables-save v1.8.2 on Sun Jan 19 16:11:44 2020
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-TPAZEM2ZI6GIP4H4 - [0:0]
:KUBE-SVC-QXMBXH4RFEQTDMUZ - [0:0]
:KUBE-SEP-7S77XOJGOAF6ON4P - [0:0]
:KUBE-SEP-GE6BLW5CUF74UDN2 - [0:0]
:KUBE-SEP-IRMT6RY5EEEBXDAY - [0:0]
:KUBE-SEP-232DQYSHL5HNRYWJ - [0:0]
:KUBE-SEP-2Z3537XSN3RJRU3M - [0:0]
:KUBE-SEP-A4UL7OUXQPUR7Y7Q - [0:0]
:KUBE-SEP-275NWNNANOEIGYHG - [0:0]
:KUBE-SEP-CPH3WXMLRJ2BZFXW - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE --random-fully
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.104.5.83/32 -p tcp -m comment --comment "default/markdownrender: cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.104.5.83/32 -p tcp -m comment --comment "default/markdownrender: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-QXMBXH4RFEQTDMUZ
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/markdownrender:" -m tcp --dport 31118 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/markdownrender:" -m tcp --dport 31118 -j KUBE-SVC-QXMBXH4RFEQTDMUZ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-IRMT6RY5EEEBXDAY
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-232DQYSHL5HNRYWJ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-2Z3537XSN3RJRU3M
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-A4UL7OUXQPUR7Y7Q
-A KUBE-SVC-JD5MR3NA4I4DYORP -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-275NWNNANOEIGYHG
-A KUBE-SVC-JD5MR3NA4I4DYORP -j KUBE-SEP-CPH3WXMLRJ2BZFXW
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-TPAZEM2ZI6GIP4H4
-A KUBE-SEP-TPAZEM2ZI6GIP4H4 -s 192.168.2.230/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-TPAZEM2ZI6GIP4H4 -p tcp -m tcp -j DNAT --to-destination 192.168.2.230:6443
-A KUBE-SVC-QXMBXH4RFEQTDMUZ -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-7S77XOJGOAF6ON4P
-A KUBE-SVC-QXMBXH4RFEQTDMUZ -j KUBE-SEP-GE6BLW5CUF74UDN2
-A KUBE-SEP-7S77XOJGOAF6ON4P -s 10.244.1.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-7S77XOJGOAF6ON4P -p tcp -m tcp -j DNAT --to-destination 10.244.1.3:8080
-A KUBE-SEP-GE6BLW5CUF74UDN2 -s 10.244.3.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-GE6BLW5CUF74UDN2 -p tcp -m tcp -j DNAT --to-destination 10.244.3.3:8080
-A KUBE-SEP-IRMT6RY5EEEBXDAY -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-IRMT6RY5EEEBXDAY -p udp -m udp -j DNAT --to-destination 10.244.0.6:53
-A KUBE-SEP-232DQYSHL5HNRYWJ -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-232DQYSHL5HNRYWJ -p udp -m udp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-2Z3537XSN3RJRU3M -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-2Z3537XSN3RJRU3M -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:53
-A KUBE-SEP-A4UL7OUXQPUR7Y7Q -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-A4UL7OUXQPUR7Y7Q -p tcp -m tcp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-275NWNNANOEIGYHG -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-275NWNNANOEIGYHG -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:9153
-A KUBE-SEP-CPH3WXMLRJ2BZFXW -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-CPH3WXMLRJ2BZFXW -p tcp -m tcp -j DNAT --to-destination 10.244.0.7:9153
COMMIT
# Completed on Sun Jan 19 16:11:44 2020
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:11:44 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-FORWARD - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Sun Jan 19 16:11:44 2020
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:11:44 2020
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Sun Jan 19 16:11:44 2020
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them

I also went through "NodePort only works on the Pod host" and "NodePort only responding on node where pod is running", but still had no luck.

1 Answer

公良征
2023-03-14

If you are running on a cloud provider, you likely need to open a firewall rule for the NodePort listed in your post, 31118.

If the problem persists, it is probably an issue with the pod network. Without access to the cluster it is hard to pin down the root cause, but the threads below may help.

https://github.com/kubernetes/kubernetes/issues/58908
https://github.com/kubernetes/kubernetes/issues/70222
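A sketch of the pod-network checks those issues point at. flannel and its vxlan port 8472/udp are assumptions here (Alex Ellis's guide deploys flannel); the pod and node IPs are taken from the dumps above. The commands are printed rather than executed so you can run them node by node:

```shell
# Printed checklist; run each line on a node that cannot reach the NodePort.
cat <<'EOF'
# 1. Can this node reach a pod on another node at all?
ping -c1 10.244.3.2
# 2. Is flannel's vxlan port (8472/udp) open between nodes?
sudo nc -u -z -w1 192.168.2.232 8472
# 3. Is bridged pod traffic being passed through iptables?
sysctl net.bridge.bridge-nf-call-iptables
EOF
```

If the pod-IP ping already fails from a non-hosting node, the NodePort timeout is just a symptom of broken inter-node pod networking.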
