Question:

Kubernetes pods cannot ping each other via ClusterIP

慕健
2023-03-14
NAMESPACE       NAME                                      READY   STATUS    RESTARTS   AGE     IP             NODE          NOMINATED NODE
default         pod/busybox                               1/1     Running   62         2d14h   192.168.1.37   kubenode      <none>
default         pod/dnstools                              1/1     Running   0          2d13h   192.168.1.45   kubenode      <none>
default         pod/nginx-deploy-7c45b84548-ckqzb         1/1     Running   0          6d11h   192.168.1.5    kubenode      <none>
default         pod/nginx-deploy-7c45b84548-vl4kh         1/1     Running   0          6d11h   192.168.1.4    kubenode      <none>
dmi             pod/elastic-deploy-5d7c85b8c-btptq        1/1     Running   0          2d14h   192.168.1.39   kubenode      <none>
kube-system     pod/calico-node-68lc7                     2/2     Running   0          6d11h   10.62.194.5    kubenode      <none>
kube-system     pod/calico-node-9c2jz                     2/2     Running   0          6d12h   10.62.194.4    kubemaster    <none>
kube-system     pod/coredns-5c98db65d4-5nprd              1/1     Running   0          6d12h   192.168.0.2    kubemaster    <none>
kube-system     pod/coredns-5c98db65d4-5vw95              1/1     Running   0          6d12h   192.168.0.3    kubemaster    <none>
kube-system     pod/etcd-kubemaster                       1/1     Running   0          6d12h   10.62.194.4    kubemaster    <none>
kube-system     pod/kube-apiserver-kubemaster             1/1     Running   0          6d12h   10.62.194.4    kubemaster    <none>
kube-system     pod/kube-controller-manager-kubemaster    1/1     Running   1          6d12h   10.62.194.4    kubemaster    <none>
kube-system     pod/kube-proxy-9hcgv                      1/1     Running   0          6d11h   10.62.194.5    kubenode      <none>
kube-system     pod/kube-proxy-bxw9s                      1/1     Running   0          6d12h   10.62.194.4    kubemaster    <none>
kube-system     pod/kube-scheduler-kubemaster             1/1     Running   1          6d12h   10.62.194.4    kubemaster    <none>
kube-system     pod/tiller-deploy-767d9b9584-5k95j        1/1     Running   0          3d9h    192.168.1.8    kubenode      <none>
nginx-ingress   pod/nginx-ingress-66wts                   1/1     Running   0          5d17h   192.168.1.6    kubenode      <none>
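The pod listing above covers both nodes (kubemaster and kubenode). Output in this shape, with the pod/ prefix on names, is what a combined resource listing prints; a sketch of the likely command (assuming a kubeconfig pointing at this cluster):

kubectl get all --all-namespaces -o wide

The kubeadm ClusterConfiguration the cluster was built with follows.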
apiServer:
  certSANs:
  - 10.62.194.4
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: dev-cluster
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
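Note that podSubnet (192.168.0.0/16) matches the pod IPs in the listing above, and serviceSubnet (10.96.0.0/12) covers the ClusterIPs below. If the original file is lost, the same ClusterConfiguration can be read back from the live cluster, since kubeadm persists it in a ConfigMap (the kubeadm-config name in kube-system is standard for kubeadm-built clusters):

kubectl -n kube-system get configmap kubeadm-config -o yaml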
NAMESPACE     NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
default       service/kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP                  6d12h   <none>
default       service/nginx-deploy    ClusterIP   10.97.5.194     <none>        80/TCP                   5d17h   run=nginx
dmi           service/elasticsearch   ClusterIP   10.107.84.159   <none>        9200/TCP,9300/TCP        2d14h   app=dmi,component=elasticse
dmi           service/metric-server   ClusterIP   10.106.117.2    <none>        8098/TCP                 2d14h   app=dmi,component=metric-se
kube-system   service/calico-typha    ClusterIP   10.97.201.232   <none>        5473/TCP                 6d12h   k8s-app=calico-typha
kube-system   service/kube-dns        ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   6d12h   k8s-app=kube-dns
kube-system   service/tiller-deploy   ClusterIP   10.98.133.94    <none>        44134/TCP                3d9h    app=helm,name=tiller
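One caveat about the title itself: a ClusterIP is a virtual IP. kube-proxy only programs iptables/IPVS rules for the ports a Service declares, so ICMP ping against a ClusterIP times out even on a healthy cluster. A TCP probe against the service port is the meaningful test; a minimal sketch from inside a pod (assuming the dnstools image's busybox-style wget, where -T sets the timeout in seconds):

kubectl exec -it dnstools -- wget -qO- -T 2 http://10.97.5.194:80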
;; connection timed out; no servers could be reached

command terminated with exit code 1
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local reddog.microsoft.com
options ndots:5
NAME       ENDPOINTS                                                  AGE
kube-dns   192.168.0.2:53,192.168.0.3:53,192.168.0.2:53 + 3 more...   6d13h
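So the lookup times out against the kube-dns ClusterIP (10.96.0.10 from resolv.conf above), even though the kube-dns endpoints are populated with the coredns pod IPs. Querying a CoreDNS pod directly separates the service VIP path from CoreDNS itself; a sketch using dig from the dnstools pod (192.168.0.2 is one of the coredns pods on kubemaster):

kubectl exec -it dnstools -- dig @192.168.0.2 kubernetes.default.svc.cluster.local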
/ # ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2): 56 data bytes
^C
--- 192.168.0.2 ping statistics ---
24 packets transmitted, 0 packets received, 100% packet loss
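That ping, presumably run from the busybox pod on kubenode, targets a coredns pod on kubemaster, so plain cross-node pod-to-pod traffic is failing, not only service VIPs. With Calico as the CNI, a hedged first check is the BGP peering status on each host and the calico-node logs (assumptions: calicoctl is installed on the nodes, and the main container is named calico-node, the DaemonSet default):

sudo calicoctl node status
kubectl -n kube-system logs calico-node-68lc7 -c calico-node | tail -n 50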

On the node: I ran the kubeadm join command, using the one printed by kubeadm token create --print-join-command on the master.
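For reference, the command printed by kubeadm token create --print-join-command has this shape (placeholders only; the real token and CA hash come from the master, here 10.62.194.4):

kubeadm join 10.62.194.4:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>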

No answers yet.
