Question:

Unable to resolve monitoring-influxdb on Kubernetes with heapster and kube-dns

仲孙兴旺
2023-03-14
Name:           kube-dns-v20-z2dd2
Namespace:      kube-system
Node:           172.31.48.201/172.31.48.201
Start Time:     Mon, 22 Jan 2018 09:21:49 +0000
Labels:         k8s-app=kube-dns
                version=v20
Annotations:    scheduler.alpha.kubernetes.io/critical-pod=
                scheduler.alpha.kubernetes.io/tolerations=[{"key":"CriticalAddonsOnly", "operator":"Exists"}]
Status:         Running
IP:             172.17.29.4
Controlled By:  ReplicationController/kube-dns-v20
Containers:
  kubedns:
    Container ID:  docker://13f95bdf8dee273ca18a2eee1b99fe00e5fff41279776cdef5d7e567472a39dc
    Image:         gcr.io/google_containers/kubedns-amd64:1.8
    Image ID:      docker-pullable://gcr.io/google_containers/kubedns-amd64@sha256:39264fd3c998798acdf4fe91c556a6b44f281b6c5797f464f92c3b561c8c808c
    Ports:         10053/UDP, 10053/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
    State:          Running
      Started:      Mon, 22 Jan 2018 09:22:05 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/healthz-kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9zxzd (ro)
  dnsmasq:
    Container ID:  docker://576ebc30e8f7aae13000a2d06541c165a3302376ad04c604b12803463380d9b5
    Image:         gcr.io/google_containers/kube-dnsmasq-amd64:1.4
    Image ID:      docker-pullable://gcr.io/google_containers/kube-dnsmasq-amd64@sha256:a722df15c0cf87779aad8ba2468cf072dd208cb5d7cfcaedd90e66b3da9ea9d2
    Ports:         53/UDP, 53/TCP
    Args:
      --cache-size=1000
      --no-resolv
      --server=127.0.0.1#10053
      --log-facility=-
    State:          Running
      Started:      Mon, 22 Jan 2018 09:22:20 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz-dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9zxzd (ro)
  healthz:
    Container ID:  docker://3367d05fb0e13c892243a4c86c74a170b0a9a2042387a70f6690ed946afda4d2
    Image:         gcr.io/google_containers/exechealthz-amd64:1.2
    Image ID:      docker-pullable://gcr.io/google_containers/exechealthz-amd64@sha256:503e158c3f65ed7399f54010571c7c977ade7fe59010695f48d9650d83488c0a
    Port:          8080/TCP
    Args:
      --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
      --url=/healthz-dnsmasq
      --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
      --url=/healthz-kubedns
      --port=8080
      --quiet
    State:          Running
      Started:      Mon, 22 Jan 2018 09:22:32 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  50Mi
    Requests:
      cpu:        10m
      memory:     50Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9zxzd (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  default-token-9zxzd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9zxzd
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     <none>
Events:
  Type    Reason                 Age   From                    Message
  ----    ------                 ----  ----                    -------
  Normal  Scheduled              43m   default-scheduler       Successfully assigned kube-dns-v20-z2dd2 to 172.31.48.201
  Normal  SuccessfulMountVolume  43m   kubelet, 172.31.48.201  MountVolume.SetUp succeeded for volume "default-token-9zxzd"
  Normal  Pulling                43m   kubelet, 172.31.48.201  pulling image "gcr.io/google_containers/kubedns-amd64:1.8"
  Normal  Pulled                 43m   kubelet, 172.31.48.201  Successfully pulled image "gcr.io/google_containers/kubedns-amd64:1.8"
  Normal  Created                43m   kubelet, 172.31.48.201  Created container
  Normal  Started                43m   kubelet, 172.31.48.201  Started container
  Normal  Pulling                43m   kubelet, 172.31.48.201  pulling image "gcr.io/google_containers/kube-dnsmasq-amd64:1.4"
  Normal  Pulled                 42m   kubelet, 172.31.48.201  Successfully pulled image "gcr.io/google_containers/kube-dnsmasq-amd64:1.4"
  Normal  Created                42m   kubelet, 172.31.48.201  Created container
  Normal  Started                42m   kubelet, 172.31.48.201  Started container
  Normal  Pulling                42m   kubelet, 172.31.48.201  pulling image "gcr.io/google_containers/exechealthz-amd64:1.2"
  Normal  Pulled                 42m   kubelet, 172.31.48.201  Successfully pulled image "gcr.io/google_containers/exechealthz-amd64:1.2"
  Normal  Created                42m   kubelet, 172.31.48.201  Created container
  Normal  Started                42m   kubelet, 172.31.48.201  Started container
Name:              kube-dns
Namespace:         kube-system
Labels:            k8s-app=kube-dns
                   kubernetes.io/cluster-service=true
                   kubernetes.io/name=KubeDNS
Annotations:       <none>
Selector:          k8s-app=kube-dns
Type:              ClusterIP
IP:                10.254.0.2
Port:              dns  53/UDP
TargetPort:        53/UDP
Endpoints:         172.17.29.4:53
Port:              dns-tcp  53/TCP
TargetPort:        53/TCP
Endpoints:         172.17.29.4:53
Session Affinity:  None
Events:            <none>
Name:         kube-dns
Namespace:    kube-system
Labels:       k8s-app=kube-dns
              kubernetes.io/cluster-service=true
              kubernetes.io/name=KubeDNS
Annotations:  <none>
Subsets:
  Addresses:          172.17.29.4
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    dns      53    UDP
    dns-tcp  53    TCP

Events:  <none>
Server:    10.254.0.2
Address 1: 10.254.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.254.0.1 kubernetes.default.svc.cluster.local
Server:    (null)
Address 1: 127.0.0.1 localhost
Address 2: ::1 localhost

nslookup: can't resolve 'http://monitoring-influxdb': Try again
command terminated with exit code 1
nameserver 10.254.0.2
search kube-system.svc.cluster.local svc.cluster.local cluster.local eu-central-1.compute.internal
options ndots:5
Server:    10.254.0.2
Address 1: 10.254.0.2 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'http://monitoring-influxdb'
command terminated with exit code 1
nameserver 10.254.0.2
search default.svc.cluster.local svc.cluster.local cluster.local eu-central-1.compute.internal
options ndots:5
E0122 09:22:46.966896       1 influxdb.go:217] issues while creating an InfluxDB sink: failed to ping InfluxDB server at "monitoring-influxdb:8086" - Get http://monitoring-influxdb:8086/ping: dial tcp: lookup monitoring-influxdb on 10.254.0.2:53: server misbehaving, will retry on use
Server:    (null)
Address 1: 127.0.0.1 localhost
Address 2: ::1 localhost

nslookup: can't resolve 'monitoring-influxdb.kube-system': Name does not resolve
command terminated with exit code 1

Yet, for whatever reason, busybox is able to resolve the server:

Server:    10.254.0.2
Address 1: 10.254.0.2 kube-dns.kube-system.svc.cluster.local

Name:      monitoring-influxdb.kube-system
Address 1: 10.254.48.109 monitoring-influxdb.kube-system.svc.cluster.local
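The busybox check above can be reproduced with a throwaway pod. This is a sketch assuming a cluster of this vintage, where the busybox:1.28 image still ships a working nslookup; the pod name `dns-test` is arbitrary:

```shell
# Launch a one-off busybox pod in the default namespace and query
# kube-dns for the service; --rm deletes the pod when nslookup exits.
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never \
  -- nslookup monitoring-influxdb.kube-system
```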
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
heapster               ClusterIP   10.254.193.208   <none>        80/TCP              1h
kube-dns               ClusterIP   10.254.0.2       <none>        53/UDP,53/TCP       1h
kubernetes-dashboard   NodePort    10.254.89.241    <none>        80:32431/TCP        1h
monitoring-grafana     ClusterIP   10.254.176.96    <none>        80/TCP              1h
monitoring-influxdb    ClusterIP   10.254.48.109    <none>        8083/TCP,8086/TCP   1h
NAME                      ENDPOINTS                           AGE
heapster                  172.17.29.7:8082                    1h
kube-controller-manager   <none>                              1h
kube-dns                  172.17.29.6:53,172.17.29.6:53       1h
kubernetes-dashboard      172.17.29.5:9090                    1h
monitoring-grafana        172.17.29.3:3000                    1h
monitoring-influxdb       172.17.29.3:8086,172.17.29.3:8083   1h

1 answer

姜聪
2023-03-14

In Kubernetes, you can resolve a service by its name alone, but only from within the same namespace.

A service can also be reached through a DNS name of the form:

<service name>.<namespace>

It is not clear from your question which namespace you deployed InfluxDB into, but give the suggestion above a try.
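To illustrate why the namespace matters, here is a small sketch (not the real resolver) of how the `search` list and `options ndots:5` from a pod's /etc/resolv.conf expand a short name into the candidate FQDNs that get tried in order. The search domains below are copied from the resolv.conf of the pod in the `default` namespace shown in the question:

```python
def candidate_names(name, search_domains, ndots=5):
    """Return lookup candidates per glibc-style resolv.conf rules."""
    if name.endswith("."):            # already fully qualified: try as-is only
        return [name]
    candidates = []
    if name.count(".") >= ndots:      # "enough" dots: try the bare name first
        candidates.append(name)
    candidates += [f"{name}.{d}" for d in search_domains]
    if name.count(".") < ndots:       # otherwise the bare name is tried last
        candidates.append(name)
    return candidates

# Search list of a pod in the "default" namespace, as in the question:
search = ["default.svc.cluster.local", "svc.cluster.local",
          "cluster.local", "eu-central-1.compute.internal"]

# "monitoring-influxdb.kube-system" has only 1 dot (< ndots), so every
# search domain is appended first; the second candidate,
# monitoring-influxdb.kube-system.svc.cluster.local, is the one that
# resolved in the busybox output above.
for c in candidate_names("monitoring-influxdb.kube-system", search):
    print(c)
```

This also shows why the bare name `monitoring-influxdb` can only resolve from a pod whose search list starts with `kube-system.svc.cluster.local`, i.e. a pod in the `kube-system` namespace.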
