I set up a Kubernetes cluster on 3 bare-metal CentOS 7 servers, with one master and two worker nodes. I used kubeadm for this, following this guide: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/, and used Weave Net for the pod network.
For testing, I set up two default-http-backend Deployments, each with a Service to expose its port:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
spec:
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend-2
  labels:
    k8s-app: default-http-backend-2
spec:
  template:
    metadata:
      labels:
        k8s-app: default-http-backend-2
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend-2
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    k8s-app: default-http-backend
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend-2
  labels:
    k8s-app: default-http-backend-2
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    k8s-app: default-http-backend-2
$~ kubectl get svc
NAME                     CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
default-http-backend     10.111.59.235   <none>        80/TCP    34m
default-http-backend-2   10.106.29.17    <none>        80/TCP    34m
$~ kubectl get po -o wide
NAME                                     READY     STATUS    RESTARTS   AGE       IP          NODE
default-http-backend-2-990549169-dd29z   1/1       Running   0          35m       10.44.0.1   vm0059
default-http-backend-726995137-9994z     1/1       Running   0          35m       10.36.0.1   vm0058
$~ kubectl exec -it default-http-backend-726995137-9994z sh
/ # wget 10.111.59.235:80
Connecting to 10.111.59.235:80 (10.111.59.235:80)
wget: server returned error: HTTP/1.1 404 Not Found
/ # wget 10.106.29.17:80
Connecting to 10.106.29.17:80 (10.106.29.17:80)
wget: can't connect to remote host (10.106.29.17): No route to host
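To narrow down where the "No route to host" comes from, it helps to check that the second Service actually has endpoints registered, and to try the backing pod's IP directly, bypassing the Service (the pod name and IP below are the ones from the listings above):

```shell
# Does default-http-backend-2 have endpoints behind its cluster IP?
kubectl get endpoints default-http-backend-2

# Hit the pod on the other node directly, bypassing kube-proxy;
# if this also fails, the problem is pod-to-pod (overlay) networking,
# not the Service/kube-proxy layer
kubectl exec -it default-http-backend-726995137-9994z -- wget -qO- 10.44.0.1:8080
```

In this case the direct pod-to-pod request crosses nodes over the Weave overlay, so a failure here points at the overlay or the host firewall rather than the iptables service rules.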
$~ docker version
Client:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-32.git88a4867.el7.centos.x86_64
 Go version:      go1.7.4
 Git commit:      88a4867/1.12.6
 Built:           Mon Jul 3 16:02:02 2017
 OS/Arch:         linux/amd64

Server:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-32.git88a4867.el7.centos.x86_64
 Go version:      go1.7.4
 Git commit:      88a4867/1.12.6
 Built:           Mon Jul 3 16:02:02 2017
 OS/Arch:         linux/amd64
$~ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T08:56:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
$~ iptables-save
*nat
:PREROUTING ACCEPT [7:420]
:INPUT ACCEPT [7:420]
:OUTPUT ACCEPT [17:1020]
:POSTROUTING ACCEPT [21:1314]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-3N4EFB5KN7DZON3G - [0:0]
:KUBE-SEP-5LXBJFBNQIVWZQ4R - [0:0]
:KUBE-SEP-5WQPOVEQM6CWLFNI - [0:0]
:KUBE-SEP-64ZDVBFDSQK7XP5M - [0:0]
:KUBE-SEP-6VF4APMJ4DYGM3KR - [0:0]
:KUBE-SEP-TPSZNIDDKODT2QF2 - [0:0]
:KUBE-SEP-TR5ETKVRYPRDASMW - [0:0]
:KUBE-SEP-VMZRVJ7XGG63C7Q7 - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-2BEQYC4GXBICFPF4 - [0:0]
:KUBE-SVC-2J3GLVYDXZLHJ7TU - [0:0]
:KUBE-SVC-2QFLXPI3464HMUTA - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-OWOER5CC7DL5WRNU - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-V76ZVCWXDRE26OHU - [0:0]
:WEAVE - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.30.38.0/24 ! -o docker0 -j MASQUERADE
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/driveme-service:" -m tcp --dport 31305 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/driveme-service:" -m tcp --dport 31305 -j KUBE-SVC-2BEQYC4GXBICFPF4
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/registry-server:" -m tcp --dport 31048 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/registry-server:" -m tcp --dport 31048 -j KUBE-SVC-2J3GLVYDXZLHJ7TU
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/auth-service:" -m tcp --dport 31722 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/auth-service:" -m tcp --dport 31722 -j KUBE-SVC-V76ZVCWXDRE26OHU
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/api-gateway:" -m tcp --dport 32139 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/api-gateway:" -m tcp --dport 32139 -j KUBE-SVC-OWOER5CC7DL5WRNU
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-3N4EFB5KN7DZON3G -s 10.32.0.15/32 -m comment --comment "default/api-gateway:" -j KUBE-MARK-MASQ
-A KUBE-SEP-3N4EFB5KN7DZON3G -p tcp -m comment --comment "default/api-gateway:" -m tcp -j DNAT --to-destination 10.32.0.15:8080
-A KUBE-SEP-5LXBJFBNQIVWZQ4R -s 10.32.0.13/32 -m comment --comment "default/registry-server:" -j KUBE-MARK-MASQ
-A KUBE-SEP-5LXBJFBNQIVWZQ4R -p tcp -m comment --comment "default/registry-server:" -m tcp -j DNAT --to-destination 10.32.0.13:8888
-A KUBE-SEP-5WQPOVEQM6CWLFNI -s 172.16.16.102/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-5WQPOVEQM6CWLFNI -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-5WQPOVEQM6CWLFNI --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 172.16.16.102:6443
-A KUBE-SEP-64ZDVBFDSQK7XP5M -s 10.32.0.12/32 -m comment --comment "default/driveme-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-64ZDVBFDSQK7XP5M -p tcp -m comment --comment "default/driveme-service:" -m tcp -j DNAT --to-destination 10.32.0.12:9595
-A KUBE-SEP-6VF4APMJ4DYGM3KR -s 10.32.0.11/32 -m comment --comment "kube-system/default-http-backend:" -j KUBE-MARK-MASQ
-A KUBE-SEP-6VF4APMJ4DYGM3KR -p tcp -m comment --comment "kube-system/default-http-backend:" -m tcp -j DNAT --to-destination 10.32.0.11:8080
-A KUBE-SEP-TPSZNIDDKODT2QF2 -s 10.32.0.14/32 -m comment --comment "default/auth-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-TPSZNIDDKODT2QF2 -p tcp -m comment --comment "default/auth-service:" -m tcp -j DNAT --to-destination 10.32.0.14:9090
-A KUBE-SEP-TR5ETKVRYPRDASMW -s 10.32.0.10/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-TR5ETKVRYPRDASMW -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.32.0.10:53
-A KUBE-SEP-VMZRVJ7XGG63C7Q7 -s 10.32.0.10/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-VMZRVJ7XGG63C7Q7 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.32.0.10:53
-A KUBE-SERVICES -d 10.104.131.183/32 -p tcp -m comment --comment "kube-system/default-http-backend: cluster IP" -m tcp --dport 80 -j KUBE-SVC-2QFLXPI3464HMUTA
-A KUBE-SERVICES -d 10.96.244.116/32 -p tcp -m comment --comment "default/driveme-service: cluster IP" -m tcp --dport 9595 -j KUBE-SVC-2BEQYC4GXBICFPF4
-A KUBE-SERVICES -d 10.108.120.94/32 -p tcp -m comment --comment "default/registry-server: cluster IP" -m tcp --dport 8888 -j KUBE-SVC-2J3GLVYDXZLHJ7TU
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.96.104.233/32 -p tcp -m comment --comment "default/auth-service: cluster IP" -m tcp --dport 9090 -j KUBE-SVC-V76ZVCWXDRE26OHU
-A KUBE-SERVICES -d 10.98.19.144/32 -p tcp -m comment --comment "default/api-gateway: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-OWOER5CC7DL5WRNU
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-2BEQYC4GXBICFPF4 -m comment --comment "default/driveme-service:" -j KUBE-SEP-64ZDVBFDSQK7XP5M
-A KUBE-SVC-2J3GLVYDXZLHJ7TU -m comment --comment "default/registry-server:" -j KUBE-SEP-5LXBJFBNQIVWZQ4R
-A KUBE-SVC-2QFLXPI3464HMUTA -m comment --comment "kube-system/default-http-backend:" -j KUBE-SEP-6VF4APMJ4DYGM3KR
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-TR5ETKVRYPRDASMW
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-5WQPOVEQM6CWLFNI --mask 255.255.255.255 --rsource -j KUBE-SEP-5WQPOVEQM6CWLFNI
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-5WQPOVEQM6CWLFNI
-A KUBE-SVC-OWOER5CC7DL5WRNU -m comment --comment "default/api-gateway:" -j KUBE-SEP-3N4EFB5KN7DZON3G
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-VMZRVJ7XGG63C7Q7
-A KUBE-SVC-V76ZVCWXDRE26OHU -m comment --comment "default/auth-service:" -j KUBE-SEP-TPSZNIDDKODT2QF2
-A WEAVE -s 10.32.0.0/12 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE
-A WEAVE -s 10.32.0.0/12 ! -d 10.32.0.0/12 -j MASQUERADE
COMMIT
# Completed on Wed Sep 13 09:29:35 2017
# Generated by iptables-save v1.4.21 on Wed Sep 13 09:29:35 2017
*filter
:INPUT ACCEPT [1386:436876]
:FORWARD ACCEPT [67:11075]
:OUTPUT ACCEPT [1379:439138]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC -m set ! --match-set weave-local-pods dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-iuZcey(5DeXbzgRFs8Szo]+@p dst -m comment --comment "DefaultAllow isolation for namespace: kube-system" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-4vtqMI+kx/2]jD%_c0S%thO%V dst -m comment --comment "DefaultAllow isolation for namespace: kube-public" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-k?Z;25^M}|1s7P3|H9i;*;MhG dst -m comment --comment "DefaultAllow isolation for namespace: default" -j ACCEPT
COMMIT
# Completed on Wed Sep 13 09:29:35 2017
Any idea where this problem might be coming from?
EDIT: added examples and additional information.
So, I solved my problem. For anyone who finds this post with the same issue: in my case, all UDP traffic between the nodes was blocked and only TCP was allowed. But the overlay (like DNS) is handled over UDP, so this had to be allowed as well.
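For reference, Weave Net needs TCP 6783 plus UDP 6783-6784 open between all nodes for its control and data planes. On CentOS 7 with firewalld, a sketch of the fix might look like the following (run on every node; adjust if you manage iptables directly instead of via firewalld):

```shell
# Allow Weave Net's control (TCP) and data-plane (UDP) traffic between nodes
firewall-cmd --permanent --add-port=6783/tcp
firewall-cmd --permanent --add-port=6783-6784/udp
firewall-cmd --reload
```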