I have a problem trying to exec into a container:
kubectl exec -it busybox-68654f944b-hj672 -- nslookup kubernetes
Error from server: error dialing backend: dial tcp: lookup worker2 on 127.0.0.53:53: server misbehaving
Or when fetching logs from a container:
kubectl -n kube-system logs kube-dns-598d7bf7d4-p99qr kubedns
Error from server: Get https://worker3:10250/containerLogs/kube-system/kube-dns-598d7bf7d4-p99qr/kubedns: dial tcp: lookup worker3 on 127.0.0.53:53: server misbehaving
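Both errors die at the same step: resolving the worker hostname through systemd-resolved (127.0.0.53). A minimal check that reproduces it (a sketch; worker2 is one of my node names and won't resolve outside this network):

```shell
# Ask the local resolver for the node name the apiserver dials for
# exec/logs. When the name is not resolvable, getent exits non-zero,
# which is exactly what "lookup worker2 ... server misbehaving" hits.
getent hosts worker2 || echo "worker2 does not resolve"
```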
I'm running out of ideas... I've been trying to learn Kubernetes and installed it on DigitalOcean, using Flannel for pod networking (I'm also using the DigitalOcean cloud manager, which seems to work fine).
Also, kube-proxy seems to be working correctly: everything in its logs looks good, and the iptables configuration looks fine (to me / a noob).
kube-dns log:
E0522 12:22:32 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.32.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.32.0.1:443: getsockopt: no route to host
E0522 12:22:32 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.32.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.32.0.1:443: getsockopt: no route to host
I0522 12:22:32 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0522 12:22:33 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0522 12:22:33 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
F0522 12:22:34 dns.go:167] Timeout waiting for initialization
kube-proxy log:
I0522 12:36:37 flags.go:27] FLAG: --alsologtostderr="false"
I0522 12:36:37 flags.go:27] FLAG: --bind-address="0.0.0.0"
I0522 12:36:37 flags.go:27] FLAG: --cleanup="false"
I0522 12:36:37 flags.go:27] FLAG: --cleanup-iptables="false"
I0522 12:36:37 flags.go:27] FLAG: --cleanup-ipvs="true"
I0522 12:36:37 flags.go:27] FLAG: --cluster-cidr=""
I0522 12:36:37 flags.go:27] FLAG: --config="/var/lib/kube-proxy/kube-proxy-config.yaml"
I0522 12:36:37 flags.go:27] FLAG: --config-sync-period="15m0s"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-max="0"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-max-per-core="32768"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-min="131072"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-tcp-timeout-close-wait="1h0m0s"
I0522 12:36:37 flags.go:27] FLAG: --conntrack-tcp-timeout-established="24h0m0s"
I0522 12:36:37 flags.go:27] FLAG: --feature-gates=""
I0522 12:36:37 flags.go:27] FLAG: --healthz-bind-address="0.0.0.0:10256"
I0522 12:36:37 flags.go:27] FLAG: --healthz-port="10256"
I0522 12:36:37 flags.go:27] FLAG: --help="false"
I0522 12:36:37 flags.go:27] FLAG: --hostname-override=""
I0522 12:36:37 flags.go:27] FLAG: --iptables-masquerade-bit="14"
I0522 12:36:37 flags.go:27] FLAG: --iptables-min-sync-period="0s"
I0522 12:36:37 flags.go:27] FLAG: --iptables-sync-period="30s"
I0522 12:36:37 flags.go:27] FLAG: --ipvs-min-sync-period="0s"
I0522 12:36:37 flags.go:27] FLAG: --ipvs-scheduler=""
I0522 12:36:37 flags.go:27] FLAG: --ipvs-sync-period="30s"
I0522 12:36:37 flags.go:27] FLAG: --kube-api-burst="10"
I0522 12:36:37 flags.go:27] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0522 12:36:37 flags.go:27] FLAG: --kube-api-qps="5"
I0522 12:36:37 flags.go:27] FLAG: --kubeconfig=""
I0522 12:36:37 flags.go:27] FLAG: --log-backtrace-at=":0"
I0522 12:36:37 flags.go:27] FLAG: --log-dir=""
I0522 12:36:37 flags.go:27] FLAG: --log-flush-frequency="5s"
I0522 12:36:37 flags.go:27] FLAG: --logtostderr="true"
I0522 12:36:37 flags.go:27] FLAG: --masquerade-all="false"
I0522 12:36:37 flags.go:27] FLAG: --master=""
I0522 12:36:37 flags.go:27] FLAG: --metrics-bind-address="127.0.0.1:10249"
I0522 12:36:37 flags.go:27] FLAG: --nodeport-addresses="[]"
I0522 12:36:37 flags.go:27] FLAG: --oom-score-adj="-999"
I0522 12:36:37 flags.go:27] FLAG: --profiling="false"
I0522 12:36:37 flags.go:27] FLAG: --proxy-mode=""
I0522 12:36:37 flags.go:27] FLAG: --proxy-port-range=""
I0522 12:36:37 flags.go:27] FLAG: --resource-container="/kube-proxy"
I0522 12:36:37 flags.go:27] FLAG: --stderrthreshold="2"
I0522 12:36:37 flags.go:27] FLAG: --udp-timeout="250ms"
I0522 12:36:37 flags.go:27] FLAG: --v="4"
I0522 12:36:37 flags.go:27] FLAG: --version="false"
I0522 12:36:37 flags.go:27] FLAG: --vmodule=""
I0522 12:36:37 flags.go:27] FLAG: --write-config-to=""
I0522 12:36:37 feature_gate.go:226] feature gates: &{{} map[]}
I0522 12:36:37 iptables.go:589] couldn't get iptables-restore version; assuming it doesn't support --wait
I0522 12:36:37 server_others.go:140] Using iptables Proxier.
I0522 12:36:37 proxier.go:346] minSyncPeriod: 0s, syncPeriod: 30s, burstSyncs: 2
I0522 12:36:37 server_others.go:174] Tearing down inactive rules.
I0522 12:36:37 server.go:444] Version: v1.10.2
I0522 12:36:37 oom_linux.go:65] attempting to set "/proc/self/oom_score_adj" to "-999"
I0522 12:36:37 server.go:470] Running in resource-only container "/kube-proxy"
I0522 12:36:37 healthcheck.go:309] Starting goroutine for healthz on 0.0.0.0:10256
I0522 12:36:37 server.go:591] getConntrackMax: using conntrack-min
I0522 12:36:37 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0522 12:36:37 conntrack.go:52] Setting nf_conntrack_max to 131072
I0522 12:36:37 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0522 12:36:37 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0522 12:36:37 bounded_frequency_runner.go:170] sync-runner Loop running
I0522 12:36:37 config.go:102] Starting endpoints config controller
I0522 12:36:37 config.go:202] Starting service config controller
I0522 12:36:37 controller_utils.go:1019] Waiting for caches to sync for service config controller
I0522 12:36:37 reflector.go:202] Starting reflector *core.Endpoints (15m0s) from k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86
I0522 12:36:37 reflector.go:240] Listing and watching *core.Endpoints from k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86
I0522 12:36:37 reflector.go:202] Starting reflector *core.Service (15m0s) from k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86
I0522 12:36:37 reflector.go:240] Listing and watching *core.Service from k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 endpoints.go:234] Setting endpoints for "kube-system/kubernetes-dashboard:" to [10.244.0.2:8443]
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 endpoints.go:234] Setting endpoints for "default/hostnames:" to [10.244.0.3:9376 10.244.0.4:9376 10.244.0.4:9376]
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 endpoints.go:234] Setting endpoints for "default/kubernetes:https" to [10.133.52.77:6443 10.133.55.62:6443 10.133.55.73:6443]
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 endpoints.go:234] Setting endpoints for "kube-system/kube-dns:dns" to []
I0522 12:36:37 endpoints.go:234] Setting endpoints for "kube-system/kube-dns:dns-tcp" to []
I0522 12:36:37 config.go:124] Calling handler.OnEndpointsAdd
I0522 12:36:37 config.go:224] Calling handler.OnServiceAdd
I0522 12:36:37 config.go:224] Calling handler.OnServiceAdd
I0522 12:36:37 config.go:224] Calling handler.OnServiceAdd
I0522 12:36:37 config.go:224] Calling handler.OnServiceAdd
I0522 12:36:37 controller_utils.go:1019] Waiting for caches to sync for endpoints config controller
I0522 12:36:37 shared_informer.go:123] caches populated
I0522 12:36:37 controller_utils.go:1026] Caches are synced for service config controller
I0522 12:36:37 config.go:210] Calling handler.OnServiceSynced()
I0522 12:36:37 proxier.go:623] Not syncing iptables until Services and Endpoints have been received from master
I0522 12:36:37 proxier.go:619] syncProxyRules took 38.306µs
I0522 12:36:37 shared_informer.go:123] caches populated
I0522 12:36:37 controller_utils.go:1026] Caches are synced for endpoints config controller
I0522 12:36:37 config.go:110] Calling handler.OnEndpointsSynced()
I0522 12:36:37 service.go:310] Adding new service port "default/kubernetes:https" at 10.32.0.1:443/TCP
I0522 12:36:37 service.go:310] Adding new service port "kube-system/kube-dns:dns" at 10.32.0.10:53/UDP
I0522 12:36:37 service.go:310] Adding new service port "kube-system/kube-dns:dns-tcp" at 10.32.0.10:53/TCP
I0522 12:36:37 service.go:310] Adding new service port "kube-system/kubernetes-dashboard:" at 10.32.0.175:443/TCP
I0522 12:36:37 service.go:310] Adding new service port "default/hostnames:" at 10.32.0.16:80/TCP
I0522 12:36:37 proxier.go:642] Syncing iptables rules
I0522 12:36:37 iptables.go:321] running iptables-save [-t filter]
I0522 12:36:37 iptables.go:321] running iptables-save [-t nat]
I0522 12:36:37 iptables.go:381] running iptables-restore [--noflush --counters]
I0522 12:36:37 healthcheck.go:235] Not saving endpoints for unknown healthcheck "default/hostnames"
I0522 12:36:37 proxier.go:619] syncProxyRules took 62.713913ms
I0522 12:36:38 config.go:141] Calling handler.OnEndpointsUpdate
I0522 12:36:38 config.go:141] Calling handler.OnEndpointsUpdate
I0522 12:36:40 config.go:141] Calling handler.OnEndpointsUpdate
I0522 12:36:40 config.go:141] Calling handler.OnEndpointsUpdate
iptables (nat table):
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
DOCKER all -- anywhere !localhost/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
KUBE-POSTROUTING all -- anywhere anywhere /* kubernetes postrouting rules */
MASQUERADE all -- 172.17.0.0/16 anywhere
RETURN all -- 10.244.0.0/16 10.244.0.0/16
MASQUERADE all -- 10.244.0.0/16 !base-address.mcast.net/4
RETURN all -- !10.244.0.0/16 worker3/24
MASQUERADE all -- !10.244.0.0/16 10.244.0.0/16
CNI-9f557b5f70a3ef9b57012dc9 all -- 10.244.0.0/16 anywhere /* name: "bridge" id: "0d9b7e94498291d71ff1952655da822ab1a1f7c4e080d119ff0ca84a506f05f5" */
CNI-3f77e9111033967f6fe3038c all -- 10.244.0.0/16 anywhere /* name: "bridge" id: "3b535dda0868b2d75046fc76de3279de2874652b6731a87815908ecf40dd1924" */
Chain CNI-3f77e9111033967f6fe3038c (1 references)
target prot opt source destination
ACCEPT all -- anywhere 10.244.0.0/16 /* name: "bridge" id: "3b535dda0868b2d75046fc76de3279de2874652b6731a87815908ecf40dd1924" */
MASQUERADE all -- anywhere !base-address.mcast.net/4 /* name: "bridge" id: "3b535dda0868b2d75046fc76de3279de2874652b6731a87815908ecf40dd1924" */
Chain CNI-9f557b5f70a3ef9b57012dc9 (1 references)
target prot opt source destination
ACCEPT all -- anywhere 10.244.0.0/16 /* name: "bridge" id: "0d9b7e94498291d71ff1952655da822ab1a1f7c4e080d119ff0ca84a506f05f5" */
MASQUERADE all -- anywhere !base-address.mcast.net/4 /* name: "bridge" id: "0d9b7e94498291d71ff1952655da822ab1a1f7c4e080d119ff0ca84a506f05f5" */
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Chain KUBE-MARK-DROP (0 references)
target prot opt source destination
MARK all -- anywhere anywhere MARK or 0x8000
Chain KUBE-MARK-MASQ (10 references)
target prot opt source destination
MARK all -- anywhere anywhere MARK or 0x4000
Chain KUBE-NODEPORTS (1 references)
target prot opt source destination
Chain KUBE-POSTROUTING (1 references)
target prot opt source destination
MASQUERADE all -- anywhere anywhere /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
Chain KUBE-SEP-372W2QPHULAJK7KN (2 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.133.52.77 anywhere /* default/kubernetes:https */
DNAT tcp -- anywhere anywhere /* default/kubernetes:https */ recent: SET name: KUBE-SEP-372W2QPHULAJK7KN side: source mask: 255.255.255.255 tcp to:10.133.52.77:6443
Chain KUBE-SEP-F5C5FPCVD73UOO2K (2 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.133.55.73 anywhere /* default/kubernetes:https */
DNAT tcp -- anywhere anywhere /* default/kubernetes:https */ recent: SET name: KUBE-SEP-F5C5FPCVD73UOO2K side: source mask: 255.255.255.255 tcp to:10.133.55.73:6443
Chain KUBE-SEP-LFOBDGSNKNVH4XYX (2 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.133.55.62 anywhere /* default/kubernetes:https */
DNAT tcp -- anywhere anywhere /* default/kubernetes:https */ recent: SET name: KUBE-SEP-LFOBDGSNKNVH4XYX side: source mask: 255.255.255.255 tcp to:10.133.55.62:6443
Chain KUBE-SEP-NBPTKIZVPOJSUO47 (2 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.244.0.4 anywhere /* default/hostnames: */
DNAT tcp -- anywhere anywhere /* default/hostnames: */ tcp to:10.244.0.4:9376
KUBE-MARK-MASQ all -- 10.244.0.4 anywhere /* default/hostnames: */
DNAT tcp -- anywhere anywhere /* default/hostnames: */ tcp to:10.244.0.4:9376
Chain KUBE-SEP-OT5RYZRAA2AMYTNV (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.244.0.2 anywhere /* kube-system/kubernetes-dashboard: */
DNAT tcp -- anywhere anywhere /* kube-system/kubernetes-dashboard: */ tcp to:10.244.0.2:8443
Chain KUBE-SEP-XDZOTYYMKVEAAZHH (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.244.0.3 anywhere /* default/hostnames: */
DNAT tcp -- anywhere anywhere /* default/hostnames: */ tcp to:10.244.0.3:9376
Chain KUBE-SERVICES (2 references)
target prot opt source destination
KUBE-MARK-MASQ tcp -- !10.244.0.0/16 10.32.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:https
KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- anywhere 10.32.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:https
KUBE-MARK-MASQ tcp -- !10.244.0.0/16 10.32.0.175 /* kube-system/kubernetes-dashboard: cluster IP */ tcp dpt:https
KUBE-SVC-XGLOHA7QRQ3V22RZ tcp -- anywhere 10.32.0.175 /* kube-system/kubernetes-dashboard: cluster IP */ tcp dpt:https
KUBE-MARK-MASQ tcp -- !10.244.0.0/16 10.32.0.16 /* default/hostnames: cluster IP */ tcp dpt:http
KUBE-SVC-NWV5X2332I4OT4T3 tcp -- anywhere 10.32.0.16 /* default/hostnames: cluster IP */ tcp dpt:http
KUBE-NODEPORTS all -- anywhere anywhere /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
target prot opt source destination
KUBE-SEP-372W2QPHULAJK7KN all -- anywhere anywhere /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-372W2QPHULAJK7KN side: source mask: 255.255.255.255
KUBE-SEP-LFOBDGSNKNVH4XYX all -- anywhere anywhere /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-LFOBDGSNKNVH4XYX side: source mask: 255.255.255.255
KUBE-SEP-F5C5FPCVD73UOO2K all -- anywhere anywhere /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-F5C5FPCVD73UOO2K side: source mask: 255.255.255.255
KUBE-SEP-372W2QPHULAJK7KN all -- anywhere anywhere /* default/kubernetes:https */ statistic mode random probability 0.33332999982
KUBE-SEP-LFOBDGSNKNVH4XYX all -- anywhere anywhere /* default/kubernetes:https */ statistic mode random probability 0.50000000000
KUBE-SEP-F5C5FPCVD73UOO2K all -- anywhere anywhere /* default/kubernetes:https */
Chain KUBE-SVC-NWV5X2332I4OT4T3 (1 references)
target prot opt source destination
KUBE-SEP-XDZOTYYMKVEAAZHH all -- anywhere anywhere /* default/hostnames: */ statistic mode random probability 0.33332999982
KUBE-SEP-NBPTKIZVPOJSUO47 all -- anywhere anywhere /* default/hostnames: */ statistic mode random probability 0.50000000000
KUBE-SEP-NBPTKIZVPOJSUO47 all -- anywhere anywhere /* default/hostnames: */
Chain KUBE-SVC-XGLOHA7QRQ3V22RZ (1 references)
target prot opt source destination
KUBE-SEP-OT5RYZRAA2AMYTNV all -- anywhere anywhere /* kube-system/kubernetes-dashboard: */
kubelet log:
W12:43:36 prober.go:103] No ref for container "containerd://6405ae121704b15554e019beb622fbcf991e0d3c75b20eab606e147dc1e6966f" (kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns)
I12:43:36 prober.go:111] Readiness probe for "kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns" failed (failure): Get http://10.244.0.2:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W12:43:46 prober.go:103] No ref for container "containerd://6405ae121704b15554e019beb622fbcf991e0d3c75b20eab606e147dc1e6966f" (kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns)
I12:43:46 prober.go:111] Readiness probe for "kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns" failed (failure): Get http://10.244.0.2:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W12:43:56 prober.go:103] No ref for container "containerd://6405ae121704b15554e019beb622fbcf991e0d3c75b20eab606e147dc1e6966f" (kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns)
I12:43:56 prober.go:111] Readiness probe for "kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns" failed (failure): Get http://10.244.0.2:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W12:44:06 prober.go:103] No ref for container "containerd://6405ae121704b15554e019beb622fbcf991e0d3c75b20eab606e147dc1e6966f" (kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns)
I12:44:06 prober.go:111] Readiness probe for "kube-dns-598d7bf7d4-p99qr_kube-system(46cf8d8f-5d11-11e8-b2be-eefd92698760):kubedns" failed (failure): Get http://10.244.0.2:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Systemd service:
/usr/local/bin/kubelet \
--config=/var/lib/kubelet/kubelet-config.yaml \
--container-runtime=remote \
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \
--image-pull-progress-deadline=2m \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--network-plugin=cni \
--register-node=true \
--v=2 \
--cloud-provider=external \
--allow-privileged=true
kubelet-config.yaml:
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "10.244.0.0/16"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/worker3.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/worker3-key.pem"
Systemd service:
ExecStart=/usr/local/bin/kube-proxy \
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml \
  --v=4
kube-proxy-config.yaml:
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.244.0.0/16"
kubeconfig:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ASLDJL...ALKJDS=
    server: https://206.x.x.7:6443
  name: kubernetes-the-hard-way
contexts:
- context:
    cluster: kubernetes-the-hard-way
    user: system:kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: system:kube-proxy
  user:
    client-certificate-data: ASDLJAL ... ALDJS
    client-key-data: LS0tLS1CRUdJ...ASDJ
kube-apiserver service:
ExecStart=/usr/local/bin/kube-apiserver \
--advertise-address=10.133.55.62 \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/audit.log \
--authorization-mode=Node,RBAC \
--bind-address=0.0.0.0 \
--client-ca-file=/var/lib/kubernetes/ca.pem \
--enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--enable-swagger-ui=true \
--etcd-cafile=/var/lib/kubernetes/ca.pem \
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \
--etcd-servers=https://10.133.55.73:2379,https://10.133.52.77:2379,https://10.133.55.62:2379 \
--event-ttl=1h \
--experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \
--kubelet-https=true \
--runtime-config=api/all \
--service-account-key-file=/var/lib/kubernetes/service-account.pem \
--service-cluster-ip-range=10.32.0.0/24 \
--service-node-port-range=30000-32767 \
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
--v=2
kube-controller-manager service:
ExecStart=/usr/local/bin/kube-controller-manager \
--address=0.0.0.0 \
--cluster-cidr=10.244.0.0/16 \
--allocate-node-cidrs=true \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
--leader-elect=true \
--root-ca-file=/var/lib/kubernetes/ca.pem \
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \
--service-cluster-ip-range=10.32.0.0/24 \
--use-service-account-credentials=true \
--v=2
https://pastebin.com/hah0uSFX (because the post got too long!)
route:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default _gateway 0.0.0.0 UG 0 0 0 eth0
10.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
10.133.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1
10.244.0.0 10.244.0.0 255.255.255.0 UG 0 0 0 flannel.1
10.244.0.0 0.0.0.0 255.255.0.0 U 0 0 0 cnio0
10.244.1.0 10.244.1.0 255.255.255.0 UG 0 0 0 flannel.1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
206.189.96.0 0.0.0.0 255.255.240.0 U 0 0 0 eth0
ip route get 10.32.0.1:
10.32.0.1 via 206.189.96.1 dev eth0 src 206.189.96.121 uid 0
curl -k https://10.32.0.1:443/version
{
"major": "1",
"minor": "10",
"gitVersion": "v1.10.2",
"gitCommit": "81753b10df112992bf51bbc2c2f85208aad78335",
"gitTreeState": "clean",
"buildDate": "2018-04-27T09:10:24Z",
"goVersion": "go1.9.3",
"compiler": "gc",
"platform": "linux/amd64"
}
Rebooting brings up all the workers and pods, including kube-dns, so they no longer crash, but I still have some problems when trying to exec or run:
kubectl run test --image=ubuntu -it --rm bash
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: error dialing backend: dial tcp: lookup worker3 on 127.0.0.53:53: server misbehaving
Error from server: Get https://worker3:10250/containerLogs/default/test-6954947c4f-6gkdl/test: dial tcp: lookup worker3 on 127.0.0.53:53: server misbehaving
So I'm still stuck on exec'ing into containers.
As you can see, Kubernetes is trying to connect to your nodes using names like worker1, which are not resolvable in your network.
You have 2 ways to fix it:
1. Make names like worker1 resolvable for the Kubernetes components, perhaps with a custom DNS server or records in /etc/hosts.
2. Register the nodes under their IP addresses instead of hostnames (for example via the kubelet's --hostname-override flag).
Update: from @Richard87, for future reference: a third way is to use the option --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname.
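For reference, option 1 boils down to entries like these in /etc/hosts on every machine that must reach the kubelets, in particular the controllers running kube-apiserver. The IPs below are placeholders; use your droplets' actual private addresses:

```text
# /etc/hosts -- placeholder IPs, substitute the droplets' private addresses
10.133.0.11  worker1
10.133.0.12  worker2
10.133.0.13  worker3
```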