Question:

Kubernetes on Google Cloud: pods stuck in ContainerCreating status

戚俊健
2023-03-14

I'm having an issue with my GKE cluster: all of the pods are stuck in the ContainerCreating state. When I run kubectl get events, I see the following error:

Failed create pod sandbox: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

Does anyone know what is actually going on? I can't find a solution for this anywhere.
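
For reference, the node and pod details below were gathered with commands along these lines (a sketch; the node and pod names match the output that follows):

# Cluster-wide events and pod status
kubectl get events --all-namespaces
kubectl get pods --all-namespaces

# Inspect the nodes and one of the stuck pods
kubectl describe nodes
kubectl describe pod aditum-payment-7d966c494c-wpk2t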

Name:               gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-bgb6
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/fluentd-ds-ready=true
                    beta.kubernetes.io/instance-type=n1-standard-1
                    beta.kubernetes.io/os=linux
                    cloud.google.com/gke-nodepool=pool-nodes-dev
                    failure-domain.beta.kubernetes.io/region=southamerica-east1
                    failure-domain.beta.kubernetes.io/zone=southamerica-east1-a
                    kubernetes.io/hostname=gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-bgb6
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 27 Sep 2018 20:27:47 -0300
Taints:             <none>
Unschedulable:      false
Conditions:
  Type                          Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                          ------  -----------------                 ------------------                ------                       -------
  KernelDeadlock                False   Fri, 28 Sep 2018 09:58:58 -0300   Thu, 27 Sep 2018 20:27:16 -0300   KernelHasNoDeadlock          kernel has no deadlock
  FrequentUnregisterNetDevice   False   Fri, 28 Sep 2018 09:58:58 -0300   Thu, 27 Sep 2018 20:32:18 -0300   UnregisterNetDevice          node is functioning properly
  NetworkUnavailable            False   Thu, 27 Sep 2018 20:27:48 -0300   Thu, 27 Sep 2018 20:27:48 -0300   RouteCreated                 NodeController create implicit route
  OutOfDisk                     False   Fri, 28 Sep 2018 09:59:03 -0300   Thu, 27 Sep 2018 20:27:47 -0300   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure                False   Fri, 28 Sep 2018 09:59:03 -0300   Thu, 27 Sep 2018 20:27:47 -0300   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure                  False   Fri, 28 Sep 2018 09:59:03 -0300   Thu, 27 Sep 2018 20:27:47 -0300   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure                   False   Fri, 28 Sep 2018 09:59:03 -0300   Thu, 27 Sep 2018 20:27:47 -0300   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                         True    Fri, 28 Sep 2018 09:59:03 -0300   Thu, 27 Sep 2018 20:28:07 -0300   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.0.0.2
  ExternalIP:
  Hostname:    gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-bgb6
Capacity:
 cpu:                1
 ephemeral-storage:  98868448Ki
 hugepages-2Mi:      0
 memory:             3787608Ki
 pods:               110
Allocatable:
 cpu:                940m
 ephemeral-storage:  47093746742
 hugepages-2Mi:      0
 memory:             2702168Ki
 pods:               110
System Info:
 Machine ID:                 1e8e0ecad8f5cc7fb5851bc64513d40c
 System UUID:                1E8E0ECA-D8F5-CC7F-B585-1BC64513D40C
 Boot ID:                    971e5088-6bc1-4151-94bf-b66c6c7ee9a3
 Kernel Version:             4.14.56+
 OS Image:                   Container-Optimized OS from Google
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.3.2
 Kubelet Version:            v1.10.7-gke.2
 Kube-Proxy Version:         v1.10.7-gke.2
PodCIDR:                     10.0.32.0/24
ProviderID:                  gce://aditumpay/southamerica-east1-a/gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-bgb6
Non-terminated Pods:         (11 in total)
  Namespace                  Name                                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                                              ------------  ----------  ---------------  -------------
  kube-system                event-exporter-v0.2.1-5f5b89fcc8-xsvmg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                fluentd-gcp-scaler-7c5db745fc-vttc9                               0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                fluentd-gcp-v3.1.0-sz8r8                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                heapster-v1.5.3-75486b456f-sj7k8                                  138m (14%)    138m (14%)  301856Ki (11%)   301856Ki (11%)
  kube-system                kube-dns-788979dc8f-99xvh                                         260m (27%)    0 (0%)      110Mi (4%)       170Mi (6%)
  kube-system                kube-dns-788979dc8f-9sz2b                                         260m (27%)    0 (0%)      110Mi (4%)       170Mi (6%)
  kube-system                kube-dns-autoscaler-79b4b844b9-6s8x2                              20m (2%)      0 (0%)      10Mi (0%)        0 (0%)
  kube-system                kube-proxy-gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-bgb6    100m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kubernetes-dashboard-598d75cb96-6nhcd                             50m (5%)      100m (10%)  100Mi (3%)       300Mi (11%)
  kube-system                l7-default-backend-5d5b9874d5-8wk6h                               10m (1%)      10m (1%)    20Mi (0%)        20Mi (0%)
  kube-system                metrics-server-v0.2.1-7486f5bd67-fvddz                            53m (5%)      148m (15%)  154Mi (5%)       404Mi (15%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests        Limits
  --------  --------        ------
  cpu       891m (94%)      396m (42%)
  memory    817952Ki (30%)  1391392Ki (51%)
Events:     <none>
Name:               gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-m7bz
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/fluentd-ds-ready=true
                    beta.kubernetes.io/instance-type=n1-standard-1
                    beta.kubernetes.io/os=linux
                    cloud.google.com/gke-nodepool=pool-nodes-dev
                    failure-domain.beta.kubernetes.io/region=southamerica-east1
                    failure-domain.beta.kubernetes.io/zone=southamerica-east1-a
                    kubernetes.io/hostname=gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-m7bz
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 27 Sep 2018 20:30:05 -0300
Taints:             <none>
Unschedulable:      false
Conditions:
  Type                          Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                          ------  -----------------                 ------------------                ------                       -------
  KernelDeadlock                False   Fri, 28 Sep 2018 10:11:03 -0300   Thu, 27 Sep 2018 20:29:34 -0300   KernelHasNoDeadlock          kernel has no deadlock
  FrequentUnregisterNetDevice   False   Fri, 28 Sep 2018 10:11:03 -0300   Thu, 27 Sep 2018 20:34:36 -0300   UnregisterNetDevice          node is functioning properly
  NetworkUnavailable            False   Thu, 27 Sep 2018 20:30:06 -0300   Thu, 27 Sep 2018 20:30:06 -0300   RouteCreated                 NodeController create implicit route
  OutOfDisk                     False   Fri, 28 Sep 2018 10:11:49 -0300   Thu, 27 Sep 2018 20:30:05 -0300   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure                False   Fri, 28 Sep 2018 10:11:49 -0300   Thu, 27 Sep 2018 20:30:05 -0300   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure                  False   Fri, 28 Sep 2018 10:11:49 -0300   Thu, 27 Sep 2018 20:30:05 -0300   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure                   False   Fri, 28 Sep 2018 10:11:49 -0300   Thu, 27 Sep 2018 20:30:05 -0300   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                         True    Fri, 28 Sep 2018 10:11:49 -0300   Thu, 27 Sep 2018 20:30:25 -0300   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.0.0.3
  ExternalIP:
  Hostname:    gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-m7bz
Capacity:
 cpu:                1
 ephemeral-storage:  98868448Ki
 hugepages-2Mi:      0
 memory:             3787608Ki
 pods:               110
Allocatable:
 cpu:                940m
 ephemeral-storage:  47093746742
 hugepages-2Mi:      0
 memory:             2702168Ki
 pods:               110
System Info:
 Machine ID:                 f1d5cf2a0b2c5472cf6509778a7941a7
 System UUID:                F1D5CF2A-0B2C-5472-CF65-09778A7941A7
 Boot ID:                    f35bebb8-acd7-4a2f-95d6-76604638aef9
 Kernel Version:             4.14.56+
 OS Image:                   Container-Optimized OS from Google
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.3.2
 Kubelet Version:            v1.10.7-gke.2
 Kube-Proxy Version:         v1.10.7-gke.2
PodCIDR:                     10.0.33.0/24
ProviderID:                  gce://aditumpay/southamerica-east1-a/gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-m7bz
Non-terminated Pods:         (7 in total)
  Namespace                  Name                                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                                              ------------  ----------  ---------------  -------------
  default                    aditum-payment-7d966c494c-wpk2t                                   100m (10%)    0 (0%)      0 (0%)           0 (0%)
  default                    aditum-portal-dev-5c69d76bb6-n5d5b                                100m (10%)    0 (0%)      0 (0%)           0 (0%)
  default                    aditum-vtexapi-5c758fcfb7-rhvsn                                   100m (10%)    0 (0%)      0 (0%)           0 (0%)
  default                    admin-mongo-dev-7d9f7f7d46-rrj42                                  100m (10%)    0 (0%)      0 (0%)           0 (0%)
  default                    mongod-0                                                          200m (21%)    0 (0%)      200Mi (7%)       0 (0%)
  kube-system                fluentd-gcp-v3.1.0-pgwfx                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-proxy-gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-m7bz    100m (10%)    0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests    Limits
  --------  --------    ------
  cpu       700m (74%)  0 (0%)
  memory    200Mi (7%)  0 (0%)
Events:     <none>
NAMESPACE     NAME                                                             READY     STATUS              RESTARTS   AGE
default       aditum-payment-7d966c494c-wpk2t                                  0/1       ContainerCreating   0          13h
default       aditum-portal-dev-5c69d76bb6-n5d5b                               0/1       ContainerCreating   0          13h
default       aditum-vtexapi-5c758fcfb7-rhvsn                                  0/1       ContainerCreating   0          13h
default       admin-mongo-dev-7d9f7f7d46-rrj42                                 0/1       ContainerCreating   0          13h
default       mongod-0                                                         0/1       ContainerCreating   0          13h
kube-system   event-exporter-v0.2.1-5f5b89fcc8-xsvmg                           0/2       ContainerCreating   0          13h
kube-system   fluentd-gcp-scaler-7c5db745fc-vttc9                              0/1       ContainerCreating   0          13h
kube-system   fluentd-gcp-v3.1.0-pgwfx                                         0/2       ContainerCreating   0          16h
kube-system   fluentd-gcp-v3.1.0-sz8r8                                         0/2       ContainerCreating   0          16h
kube-system   heapster-v1.5.3-75486b456f-sj7k8                                 0/3       ContainerCreating   0          13h
kube-system   kube-dns-788979dc8f-99xvh                                        0/4       ContainerCreating   0          13h
kube-system   kube-dns-788979dc8f-9sz2b                                        0/4       ContainerCreating   0          13h
kube-system   kube-dns-autoscaler-79b4b844b9-6s8x2                             0/1       ContainerCreating   0          13h
kube-system   kube-proxy-gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-bgb6   0/1       ContainerCreating   0          13h
kube-system   kube-proxy-gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-m7bz   0/1       ContainerCreating   0          13h
kube-system   kubernetes-dashboard-598d75cb96-6nhcd                            0/1       ContainerCreating   0          13h
kube-system   l7-default-backend-5d5b9874d5-8wk6h                              0/1       ContainerCreating   0          13h
kube-system   metrics-server-v0.2.1-7486f5bd67-fvddz                           0/2       ContainerCreating   0          13h
Name:           aditum-payment-7d966c494c-wpk2t
Namespace:      default
Node:           gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-m7bz/10.0.0.3
Start Time:     Thu, 27 Sep 2018 20:30:47 -0300
Labels:         io.kompose.service=aditum-payment
                pod-template-hash=3852270507
Annotations:    kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container aditum-payment
Status:         Pending
IP:
Controlled By:  ReplicaSet/aditum-payment-7d966c494c
Containers:
  aditum-payment:
    Container ID:
    Image:          gcr.io/aditumpay/aditumpaymentwebapi:latest
    Image ID:
    Port:           5000/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:  100m
    Environment:
      CONNECTIONSTRING:  <set to the key 'CONNECTIONSTRING' of config map 'aditum-payment-config'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qsc9k (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  default-token-qsc9k:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qsc9k
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                  From                                                          Message
  ----     ------                  ----                 ----                                                          -------
  Warning  FailedCreatePodSandBox  3m (x1737 over 13h)  kubelet, gke-aditum-k8scluster--pool-nodes-dev-500ebc8b-m7bz  Failed create pod sandbox: rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

1 Answer

阎弘
2023-03-14

Sorry it took so long to respond. It turned out to be a very silly problem. After getting through to Google Cloud support, I noticed that my NAT instance was not working properly. The Private Google Access route was passing through my NAT. Thanks everyone for the help.
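
For anyone who hits the same error: a quick check is whether the nodes' subnet has Private Google Access enabled, and whether a custom route is sending traffic for Google endpoints such as k8s.gcr.io through a NAT instance. A minimal sketch with gcloud (SUBNET_NAME and NETWORK_NAME are placeholders, substitute your own; the region matches the cluster above):

# Check whether Private Google Access is enabled on the nodes' subnet
gcloud compute networks subnets describe SUBNET_NAME --region=southamerica-east1 --format="get(privateIpGoogleAccess)"

# Enable it if it is off
gcloud compute networks subnets update SUBNET_NAME --region=southamerica-east1 --enable-private-ip-google-access

# Look for custom routes (for example a 0.0.0.0/0 route to a NAT instance) that could capture registry traffic
gcloud compute routes list --filter="network:NETWORK_NAME"

With Private Google Access enabled, nodes without external IPs can reach Google-hosted registries like k8s.gcr.io directly, without depending on the NAT instance.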
