Question:

kubeadm init fails: x509: certificate signed by unknown authority

包谭三
2023-03-14

Following https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/, I am trying to set up Kubernetes with Vagrant on a Mac. At the Ansible playbook step:

  - name: Initialize the Kubernetes cluster using kubeadm
    command: kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --node-name k8s-master --pod-network-cidr=192.168.0.0/16

I receive the error:

fatal: [k8s-master]: FAILED! =>

So I tried running the kubeadm init command manually:

kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10"  --node-name k8s-master --pod-network-cidr=192.168.0.0/16  --ignore-preflight-errors all
I0422 08:51:06.815553    6537 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: x509: certificate signed by unknown authority
I0422 08:51:06.815587    6537 version.go:97] falling back to the local client version: v1.14.1

I tried the same command with --ignore-preflight-errors all:

kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10"  --node-name k8s-master --pod-network-cidr=192.168.0.0/16  --ignore-preflight-errors all
I0422 08:51:35.741958    6809 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: x509: certificate signed by unknown authority
I0422 08:51:35.742030    6809 version.go:97] falling back to the local client version: v1.14.1
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.14.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.14.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.14.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.14.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.50.10 192.168.50.10]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

For reference, the Docker setup part of the master-playbook.yml I am using (from the guide) is:

---
- hosts: all
  become: true
  tasks:
  - name: Install packages that allow apt to be used over HTTPS
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg-agent
      - software-properties-common

  - name: Add an apt signing key for Docker
    apt_key:
      url: https://download.docker.com/linux/ubuntu/gpg
      state: present

  - name: Add apt repository for stable version
    apt_repository:
      repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
      state: present

  - name: Install docker and its dependecies
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - docker-ce
      - docker-ce-cli
      - containerd.io
    notify:
      - docker status

  - name: Add vagrant user to docker group
    user:
      name: vagrant
      group: docker

Following the valuable suggestions, I tried the command:

kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --pod-network-cidr=192.168.0.0/16  --kubernetes-version="v1.14.1" --ignore-preflight-errors all --cert-dir=/etc/ssl/cert

but got this error response:

[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
    [WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
    [WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
    [WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
    [WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Port-10250]: Port 10250 is in use
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.14.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.14.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.14.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.14.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
    [WARNING ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: x509: certificate signed by unknown authority
, error: exit status 1
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/ssl/cert"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.50.10 192.168.50.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
error execution phase kubeconfig/admin: kubeconfig file "/etc/kubernetes/admin.conf" already exists but has got the wrong CA cert
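
The "already exists" warnings and the final wrong-CA error come from the manifests, certificates and kubeconfig files left behind by the earlier attempt, so a clean retry needs that state removed first. A minimal playbook task sketch for that (assuming kubeadm reset's --force flag, which skips the interactive confirmation):

  - name: Clean up state left over from the previous kubeadm init attempt
    command: kubeadm reset --force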

The command:

kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --pod-network-cidr=192.168.0.0/16  --kubernetes-version="v1.14.1" --ignore-preflight-errors all --cert-dir=/etc/kubernetes/pki

Error trace:

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

Additionally:

root@k8s-master:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2019-04-24 00:13:07 UTC; 9min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 9746 (kubelet)
    Tasks: 16
   Memory: 27.7M
      CPU: 9.026s
   CGroup: /system.slice/kubelet.service
           └─9746 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1

Apr 24 00:22:19 k8s-master kubelet[9746]: E0424 00:22:19.652197    9746 kubelet.go:2244] node "k8s-master" not found
Apr 24 00:22:19 k8s-master kubelet[9746]: E0424 00:22:19.711938    9746 controller.go:115] failed to ensure node lease exists, will retry in 7s, error: Get https://192.168.50.10:6443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/k8s-master?timeout=10s: dial tcp 192.168.50.10:6443: connect: connection refused
Apr 24 00:22:19 k8s-master kubelet[9746]: E0424 00:22:19.752613    9746 kubelet.go:2244] node "k8s-master" not found
Apr 24 00:22:19 k8s-master kubelet[9746]: E0424 00:22:19.818002    9746 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Get https://192.168.50.10:6443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.50.10:6443: connect: connection refused
Apr 24 00:22:19 k8s-master kubelet[9746]: E0424 00:22:19.859028    9746 kubelet.go:2244] node "k8s-master" not found
Apr 24 00:22:19 k8s-master kubelet[9746]: E0424 00:22:19.960182    9746 kubelet.go:2244] node "k8s-master" not found
Apr 24 00:22:20 k8s-master kubelet[9746]: E0424 00:22:20.018188    9746 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://192.168.50.10:6443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.50.10:6443: connect: connection refused
Apr 24 00:22:20 k8s-master kubelet[9746]: E0424 00:22:20.061118    9746 kubelet.go:2244] node "k8s-master" not found
Apr 24 00:22:20 k8s-master kubelet[9746]: E0424 00:22:20.169412    9746 kubelet.go:2244] node "k8s-master" not found
Apr 24 00:22:20 k8s-master kubelet[9746]: E0424 00:22:20.250762    9746 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://192.168.50.10:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.50.10:6443: connect: connection refused
root@k8s-master:~#

Listing all docker containers:

root@k8s-master:~# docker ps -a
CONTAINER ID        IMAGE                                COMMAND                  CREATED             STATUS                       PORTS               NAMES
a22812e3c702        20a2d7035165                         "/usr/local/bin/kube…"   4 minutes ago       Up 4 minutes                                     k8s_kube-proxy_kube-proxy-t7nq9_kube-system_20f8d57d-6628-11e9-b099-080027ee87c4_0
b2a89f8418bb        k8s.gcr.io/pause:3.1                 "/pause"                 4 minutes ago       Up 4 minutes                                     k8s_POD_kube-proxy-t7nq9_kube-system_20f8d57d-6628-11e9-b099-080027ee87c4_0
6c327b9d36f2        cfaa4ad74c37                         "kube-apiserver --ad…"   5 minutes ago       Up 5 minutes                                     k8s_kube-apiserver_kube-apiserver-k8s-master_kube-system_0260f2060ab76fc71c634c4499054fe6_1
a1f1b3396810        k8s.gcr.io/etcd                      "etcd --advertise-cl…"   5 minutes ago       Up 5 minutes                                     k8s_etcd_etcd-k8s-master_kube-system_64388d0f4801f9b4aa01c8b7505258c9_0
0a3619df6a61        k8s.gcr.io/kube-controller-manager   "kube-controller-man…"   5 minutes ago       Up 5 minutes                                     k8s_kube-controller-manager_kube-controller-manager-k8s-master_kube-system_07bbd1f39b3ac969cc18015bbdce8871_0
ffb435b6adfe        k8s.gcr.io/kube-apiserver            "kube-apiserver --ad…"   5 minutes ago       Exited (255) 5 minutes ago                       k8s_kube-apiserver_kube-apiserver-k8s-master_kube-system_0260f2060ab76fc71c634c4499054fe6_0
ffb463d4cbc6        k8s.gcr.io/pause:3.1                 "/pause"                 5 minutes ago       Up 5 minutes                                     k8s_POD_etcd-k8s-master_kube-system_64388d0f4801f9b4aa01c8b7505258c9_0
a9672f233952        k8s.gcr.io/kube-scheduler            "kube-scheduler --bi…"   5 minutes ago       Up 5 minutes                                     k8s_kube-scheduler_kube-scheduler-k8s-master_kube-system_f44110a0ca540009109bfc32a7eb0baa_0
2bc0ab68870b        k8s.gcr.io/pause:3.1                 "/pause"                 5 minutes ago       Up 5 minutes                                     k8s_POD_kube-controller-manager-k8s-master_kube-system_07bbd1f39b3ac969cc18015bbdce8871_0
667ae6988f2b        k8s.gcr.io/pause:3.1                 "/pause"                 5 minutes ago       Up 5 minutes                                     k8s_POD_kube-apiserver-k8s-master_kube-system_0260f2060ab76fc71c634c4499054fe6_0
b4e6c37f5300        k8s.gcr.io/pause:3.1                 "/pause"                 5 minutes ago       Up 5 minutes                                     k8s_POD_kube-scheduler-k8s-master_kube-system_f44110a0ca540009109bfc32a7eb0baa_0

3 Answers

巩子实
2023-03-14

So, regarding "kubeadm init fails: x509: certificate signed by unknown authority": although I really appreciate all the valuable help, which helped a lot, the x509 certificate issue itself was solved by adding the following to the Ansible playbook "kubernetes-setup/master-playbook.yml":

  - name: copy pem file
    copy: src=BCPSG.pem dest=/etc/ssl/certs

  - name: Update cert index
    shell: /usr/sbin/update-ca-certificates 

Here BCPSG.pem is the certificate I copied into the directory containing the Vagrantfile, i.e. the kubernetes-setup directory. Referring back to https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/
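
One extra point worth noting: the Docker daemon loads the system CA bundle when it starts, so if image pulls from k8s.gcr.io still fail with the x509 error after update-ca-certificates, restarting Docker in the same playbook should make it pick up the new certificate. A minimal sketch (assuming the systemd service is named docker):

  - name: Restart docker so it trusts the newly added CA certificate
    service:
      name: docker
      state: restarted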

伏砚
2023-03-14

Try removing the $HOME/.kube directory and, after kubeadm init, issue the following commands again:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
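
If you are provisioning with the Ansible playbook from the guide, the same steps can be expressed as tasks instead of shell commands; a sketch assuming the vagrant user and the default admin.conf path:

  - name: Create the .kube directory for the vagrant user
    file:
      path: /home/vagrant/.kube
      state: directory
      owner: vagrant
      group: vagrant

  - name: Copy admin.conf to the vagrant user's kubeconfig
    copy:
      src: /etc/kubernetes/admin.conf
      dest: /home/vagrant/.kube/config
      remote_src: yes
      owner: vagrant
      group: vagrant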

彭星津
2023-03-14

Remove the following parameter from the init command:

--node-name k8s-master

Include the following parameter to deploy the required kubernetes version:

--kubernetes-version v1.14.1
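
With both changes applied, the init task in master-playbook.yml would look roughly like this (a sketch reusing the addresses from the question):

  - name: Initialize the Kubernetes cluster using kubeadm
    command: kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --pod-network-cidr=192.168.0.0/16 --kubernetes-version v1.14.1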