Question:

Where is the flanneld configuration used by Kubernetes (installed by kubeadm)?

晋弘义
2023-03-14

Flanneld on the Kubernetes worker node has a configuration file, /etc/sysconfig/flanneld, which points to etcd on the worker node's localhost, when it should point to the master node's etcd URL.

Does this mean the pod network is not configured correctly, or does flannel use a configuration file different from the one shown here? If so, which configuration does flanneld use?

Also, please suggest any good references/resources on how Kubernetes interacts with CNI.

On the worker node, the configuration points to itself, not to the master IP.

$ cat /etc/sysconfig/flanneld  

# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

The worker nodes joined successfully.

$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    25m       v1.8.5
node01    Ready     <none>    25m       v1.8.5
node02    Ready     <none>    25m       v1.8.5

The flannel.1 interface on the worker nodes is configured with the same CIDR range as the master, even though the configuration does not point to the master where flannel was configured.

$ ip addr
...
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:0d:f8:34 brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.12/24 brd 192.168.99.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::6839:cd66:9352:2280/64 scope link 
       valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:2c:56:b8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:2c:56:b8 brd ff:ff:ff:ff:ff:ff
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:67:48:ae:ef brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
7: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether 56:20:a1:4d:f0:d2 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::5420:a1ff:fe4d:f0d2/64 scope link 
       valid_lft forever preferred_lft forever
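The 10.244.1.0/32 on flannel.1 above comes from the node's flannel lease, not from /etc/sysconfig/flanneld. On a node, flanneld typically records its lease in /run/flannel/subnet.env (path and values assumed here to match the interface output above, not read from this cluster); a minimal sketch of how the interface address relates to it:

```shell
# Hypothetical contents of /run/flannel/subnet.env on the worker
# (assumed values; note the MTU matches the flannel.1 output above):
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450

# flannel.1 carries the network address of the node's subnet as a /32:
node_gw="${FLANNEL_SUBNET%/*}"        # 10.244.1.1
flannel_if_addr="${node_gw%.*}.0/32"  # 10.244.1.0/32
echo "$flannel_if_addr"
```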

The steps executed on the workers (apart from sudo yum install kubelet kubeadm flannel) were kubeadm join, which appears to have succeeded (despite a few error messages).

changed: [192.168.99.12] => {...
  "[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.",
  "[preflight] Running pre-flight checks",
  "[preflight] Starting the kubelet service",
  "[discovery] Trying to connect to API Server \"192.168.99.10:6443\"",
  "[discovery] Created cluster-info discovery client, requesting info from \"https://192.168.99.10:6443\"",
  "[discovery] Failed to connect to API Server \"192.168.99.10:6443\": there is no JWS signed token in the cluster-info ConfigMap. This token id \"7ae0ed\" is invalid for this cluster, can't connect",
  "[discovery] Trying to connect to API Server \"192.168.99.10:6443\"",
  "[discovery] Created cluster-info discovery client, requesting info from \"https://192.168.99.10:6443\"",
  "[discovery] Failed to connect to API Server \"192.168.99.10:6443\": there is no JWS signed token in the cluster-info ConfigMap. This token id \"7ae0ed\" is invalid for this cluster, can't connect",
  "[discovery] Trying to connect to API Server \"192.168.99.10:6443\"",
  "[discovery] Created cluster-info discovery client, requesting info from \"https://192.168.99.10:6443\"",
  "[discovery] Requesting info from \"https://192.168.99.10:6443\" again to validate TLS against the pinned public key",
  "[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"192.168.99.10:6443\"",
  "[discovery] Successfully established connection with API Server \"192.168.99.10:6443\"",
  "[bootstrap] Detected server version: v1.8.5",
  "[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)",
  "",
  "Node join complete:",
  "* Certificate signing request sent to master and response",
  "  received.",
  "* Kubelet informed of new secure connection details.",
  "",
  "Run 'kubectl get nodes' on the master to see this machine join."

Kubernetes 1.8.5 was installed on CentOS 7 in VirtualBox following "Using kubeadm to Create a Cluster".


1 Answer

唐昊焜
2023-03-14

The flannel configuration is stored in etcd. The FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379" parameter defines the location of etcd, and FLANNEL_ETCD_PREFIX="/atomic.io/network" defines where the data is stored within etcd.
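Note that the etcd-backed configuration applies to a flanneld installed as a system service, as in this question. When flannel is instead deployed as a DaemonSet from the upstream kube-flannel manifest (common with kubeadm), the equivalent settings live in a ConfigMap as net-conf.json rather than in /etc/sysconfig/flanneld. A sketch of its typical contents (ConfigMap name, namespace, and CIDR assumed from the upstream manifest, not taken from this cluster):

```shell
# Typical net-conf.json as found in the kube-flannel-cfg ConfigMap
# (assumed values from the upstream kube-flannel manifest):
net_conf='{"Network":"10.244.0.0/16","Backend":{"Type":"vxlan"}}'
echo "$net_conf"
# To read it from a live cluster (requires kubectl access; shown commented):
# kubectl -n kube-system get configmap kube-flannel-cfg -o yaml
```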

So, to get the flannel configuration that exactly matches your situation, we need to fetch this information from etcd:

etcdctl --endpoint=127.0.0.1:2379 get /atomic.io/network/config
{"Network":"10.2.0.0/16","Backend":{"Type":"vxlan"}}
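The same key can also be written to change the cluster-wide flannel configuration (for example, the backend type), using etcdctl's v2 `set` on the key queried above. A hedged sketch; the actual write needs a running etcd, so that line is left commented:

```shell
# Candidate config to write back (same shape as the value read above):
config='{"Network":"10.2.0.0/16","Backend":{"Type":"vxlan"}}'
echo "$config"
# Write it to etcd (requires a live etcd; flanneld picks up the network
# config when it starts, so a restart would be needed afterwards):
# etcdctl --endpoint=127.0.0.1:2379 set /atomic.io/network/config "$config"
```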

Furthermore, we can find out how many subnets are in use in the cluster:

etcdctl --endpoint=127.0.0.1:2379 ls /atomic.io/network/subnets
/atomic.io/network/subnets/10.2.41.0-24
/atomic.io/network/subnets/10.2.86.0-24
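Each entry under /atomic.io/network/subnets is a per-node lease, and the last path element encodes that node's /24 with a '-' in place of the '/'. A small sketch of the mapping:

```shell
# A lease key from the listing above:
key="/atomic.io/network/subnets/10.2.41.0-24"
lease="${key##*/}"                             # strip the prefix -> 10.2.41.0-24
subnet="$(printf '%s' "$lease" | tr '-' '/')"  # decode -> 10.2.41.0/24
echo "$subnet"
```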

And check the details of any one of them:

etcdctl --endpoint=127.0.0.1:2379 get /atomic.io/network/subnets/10.2.41.0-24
{"PublicIP":"10.0.0.16","BackendType":"vxlan","BackendData":{"VtepMAC":"45:e7:76:d5:1c:49"}}
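In the lease value, PublicIP is the node's reachable address (used as the VXLAN tunnel endpoint) and VtepMAC is the MAC address of that node's flannel.1 interface. A crude shell extraction, just to illustrate the fields (the `ip link` cross-check needs the live node, so it is commented):

```shell
# The lease value as returned above:
lease='{"PublicIP":"10.0.0.16","BackendType":"vxlan","BackendData":{"VtepMAC":"45:e7:76:d5:1c:49"}}'
# Crude field extraction with parameter expansion (illustration only;
# a JSON parser would be the robust choice):
pat='"VtepMAC":"'
vtep_mac="${lease##*$pat}"
vtep_mac="${vtep_mac%%\"*}"
echo "$vtep_mac"
# On the node that owns this lease, the MAC should match flannel.1:
# ip -o link show flannel.1   # link/ether should equal VtepMAC
```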