
Deploying k8s with juju+maas on Ubuntu 20.04, Part 3: Handling "Missing flannel resource"

万俟超
2023-12-01

References:
Kubernetes documentation

Charmed Kubernetes #679

Monitoring a Kubernetes cluster with Graylog and Prometheus

CNI with flannel

Flannel #558

Continuing from the previous section:

juju status --relations
Model  Controller       Cloud/Region    Version  SLA          Timestamp
k8s    maas-controller  mymaas/default  2.8.10   unsupported  17:19:05+08:00

App                    Version   Status   Scale  Charm                  Store  Channel  Rev  OS      Message
containerd             go1.13.8  active       5  containerd             local             0  ubuntu  Container runtime available
easyrsa                3.0.1     active       1  easyrsa                local             0  ubuntu  Certificate Authority connected.
etcd                   3.4.5     active       3  etcd                   local             0  ubuntu  Healthy with 3 known peers
flannel                          blocked      5  flannel                local             0  ubuntu  Missing flannel resource.

Notice that flannel's status is wrong: it reports "Missing flannel resource." and no version number is returned.
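
Before digging into the charm, a quick way to confirm the diagnosis is to list the resources the application actually holds (juju resources is a standard Juju 2.x subcommand):

# an absent or zero revision for flannel-amd64 here would match the blocked message
juju resources flannel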

At first I assumed the flannel source was outside the gateway (i.e. unreachable from the deployment network), so I started reading through the related documentation.

Flannel #558

Description
Flannel is a generic overlay network that can be used as a simple alternative to existing software-defined networking solutions.

Flannel charm
Flannel is a virtual network that gives each host a subnet for use with container runtimes.

This charm deploys flannel as a background service and configures CNI to work with flannel on any principal charm that implements the kubernetes-cni interface.

This charm is maintained along with the components of Charmed Kubernetes. For full details, see the official Charmed Kubernetes documentation.

In its flannel.py:

# Excerpt from the charm's reactive handler. The imports below are implied by
# the excerpt (arch() is a helper defined elsewhere in the same file):
import os

from charms.layer import status
from charms.reactive import when_not
from charmhelpers.core.hookenv import log, resource_get


@when_not('flannel.binaries.installed')
def install_flannel_binaries():
    ''' Unpack the Flannel binaries. '''
    try:
        # the resource is named per-architecture, e.g. flannel-amd64
        resource_name = 'flannel-{}'.format(arch())
        archive = resource_get(resource_name)
    except Exception:
        message = 'Error fetching the flannel resource.'
        log(message)
        status.blocked(message)
        return
    if not archive:
        # resource_get returned a falsy path: nothing was ever attached
        message = 'Missing flannel resource.'
        log(message)
        status.blocked(message)
        return
    filesize = os.stat(archive).st_size
    if filesize < 1000000:
        # anything under ~1 MB is treated as a truncated download
        message = 'Incomplete flannel resource'
        log(message)
        status.blocked(message)
        return

So the blocked state comes from the charm failing to find the resource it is supposed to unpack.
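
Note that even an attached resource is rejected if it is smaller than 1000000 bytes. As a minimal local sanity check mirroring that logic (assuming an amd64 tarball in the current directory):

FILE=flannel-v0.11.0-linux-amd64.tar.gz
SIZE=$(stat -c %s "$FILE")   # size in bytes, same as os.stat(archive).st_size in the charm
if [ "$SIZE" -lt 1000000 ]; then
    echo "resource looks incomplete: ${SIZE} bytes"
fi
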
Continuing with the documentation, build-flannel-resources.sh contains the following passage:

FLANNEL_VERSION=${FLANNEL_VERSION:-"v0.11.0"}
ETCD_VERSION=${ETCD_VERSION:-"v2.3.7"}

ARCH=${ARCH:-"amd64 arm64 s390x"}

build_script_commit="$(git show --oneline -q)"
temp_dir="$(readlink -f build-flannel-resources.tmp)"
rm -rf "$temp_dir"
mkdir "$temp_dir"
(cd "$temp_dir"
  git clone https://github.com/coreos/flannel.git flannel \
    --branch "$FLANNEL_VERSION" \
    --depth 1

  git clone https://github.com/coreos/etcd.git etcd \
    --branch "$ETCD_VERSION" \
    --depth 1
  # ... (excerpt ends here; the rest of the script builds and packages the binaries)

This passage git clones the flannel and etcd sources:
git clone https://github.com/coreos/flannel.git at version 0.11.0
git clone https://github.com/coreos/etcd.git at version 2.3.7
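
So in principle, the missing resource can be rebuilt by running this script with the versions pinned above. A hypothetical invocation (assuming the script is run from its own directory with github.com reachable):

# restrict ARCH to amd64 to skip the arm64/s390x builds
FLANNEL_VERSION=v0.11.0 ETCD_VERSION=v2.3.7 ARCH=amd64 ./build-flannel-resources.sh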

My first idea, though, was to download the release directly, following the method from the earlier article "Multi-node OpenStack Charms Deployment Guide 0.0.1.dev299, Part 16: OpenStack infrastructure high availability, The easyrsa resource is missing.":
wget https://github.com/flannel-io/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
juju attach flannel flannel=./flannel-v0.11.0-linux-amd64.tar.gz

The output was:
ERROR failed to upload resource "flannel": Put "https://10.0.0.155:17070/model/9fc2675f-3030-4102-83aa-6c76fd182926/applications/flannel/resources/flannel": write tcp 10.0.0.3:56746->10.0.0.155:17070: write: connection reset by peer

Clearly this approach had a problem.
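
Beyond the connection reset, there is a second suspect worth flagging: flannel.py above calls resource_get('flannel-{}'.format(arch())), so the name the charm actually reads is flannel-amd64, not flannel. A hedged guess at a corrected attach (not verified in this run):

# attach under the per-architecture resource name from flannel.py
juju attach flannel flannel-amd64=./flannel-v0.11.0-linux-amd64.tar.gz
# newer Juju releases spell the same operation attach-resource:
# juju attach-resource flannel flannel-amd64=./flannel-v0.11.0-linux-amd64.tar.gz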

Thinking it over: since the tarball downloads fine, the upstream source is not the problem. Most likely build-flannel-resources.sh was never executed during the local deployment, so the local charm ended up without its resource.

To check, I simply redeployed from the charm store (kubernetes-master and easyrsa were still deployed from local charms).
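
A charm store deployment pulls its resources from the store automatically, pinned to the revisions declared under resources: in the bundle below (flannel-amd64: 761 and so on). To see which resource revisions a store charm publishes, something like this should work (juju charm-resources is the Juju 2.x subcommand for it):

juju charm-resources cs:~containers/flannel-558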

vim bundle.yaml

description: A highly-available, production-grade Kubernetes cluster.
series: focal
services:
  containerd:
    annotations:
      gui-x: '475'
      gui-y: '800'
    charm: cs:~containers/containerd-130
    resources: {}
  easyrsa:
    annotations:
      gui-x: '90'
      gui-y: '420'
    charm: /root/charmed-kubernetes-679/easyrsa
    constraints: root-disk=8G
    num_units: 1
    resources:
      easyrsa: 5
  etcd:
    annotations:
      gui-x: '800'
      gui-y: '420'
    charm: cs:~containers/etcd-594
    constraints: root-disk=8G
    num_units: 3
    options:
      channel: 3.4/stable
    resources:
      core: 0
      etcd: 3
      snapshot: 0
  flannel:
    annotations:
      gui-x: '475'
      gui-y: '605'
    charm: cs:~containers/flannel-558
    resources:
      flannel-amd64: 761
      flannel-arm64: 758
      flannel-s390x: 745
  kubeapi-load-balancer:
    annotations:
      gui-x: '450'
      gui-y: '250'
    charm: cs:~containers/kubeapi-load-balancer-798
    constraints: mem=4G root-disk=8G
    expose: true
    num_units: 1
    resources: {}
  kubernetes-master:
    annotations:
      gui-x: '800'
      gui-y: '850'
    charm: /root/charmed-kubernetes-679/kubernetes-master
    constraints: cores=2 mem=4G root-disk=16G
    num_units: 2
    options:
      channel: 1.21/stable
    resources:
      cdk-addons: 0
      core: 0
      kube-apiserver: 0
      kube-controller-manager: 0
      kube-proxy: 0
      kube-scheduler: 0
      kubectl: 0
  kubernetes-worker:
    annotations:
      gui-x: '90'
      gui-y: '850'
    charm: cs:~containers/kubernetes-worker-768
    constraints: cores=4 mem=4G root-disk=16G
    expose: true
    num_units: 3
    options:
      channel: 1.21/stable
    resources:
      cni-amd64: 797
      cni-arm64: 788
      cni-s390x: 800
      core: 0
      kube-proxy: 0
      kubectl: 0
      kubelet: 0
relations:
- - kubernetes-master:kube-api-endpoint
  - kubeapi-load-balancer:apiserver
- - kubernetes-master:loadbalancer
  - kubeapi-load-balancer:loadbalancer
- - kubernetes-master:kube-control
  - kubernetes-worker:kube-control
- - kubernetes-master:certificates
  - easyrsa:client
- - etcd:certificates
  - easyrsa:client
- - kubernetes-master:etcd
  - etcd:db
- - kubernetes-worker:certificates
  - easyrsa:client
- - kubernetes-worker:kube-api-endpoint
  - kubeapi-load-balancer:website
- - kubeapi-load-balancer:certificates
  - easyrsa:client
- - flannel:etcd
  - etcd:db
- - flannel:cni
  - kubernetes-master:cni
- - flannel:cni
  - kubernetes-worker:cni
- - containerd:containerd
  - kubernetes-worker:container-runtime
- - containerd:containerd
  - kubernetes-master:container-runtime

juju deploy ./bundle.yaml
juju status
Model  Controller       Cloud/Region    Version  SLA          Timestamp
k8s    maas-controller  mymaas/default  2.8.10   unsupported  16:49:34+08:00

App                    Version   Status   Scale  Charm                  Store       Channel  Rev  OS      Message
containerd             go1.13.8  active       5  containerd             charmstore           130  ubuntu  Container runtime available
easyrsa                3.0.1     active       1  easyrsa                local                  0  ubuntu  Certificate Authority connected.
etcd                   3.4.5     active       3  etcd                   charmstore           594  ubuntu  Healthy with 3 known peers
flannel                0.11.0    active       5  flannel                charmstore           558  ubuntu  Flannel subnet 10.1.47.1/24
kubeapi-load-balancer  1.18.0    active       1  kubeapi-load-balancer  charmstore           798  ubuntu  Loadbalancer ready.
kubernetes-master      1.21.1    waiting      2  kubernetes-master      local                  0  ubuntu  Waiting for 3 kube-system pods to start
kubernetes-worker      1.21.1    active       3  kubernetes-worker      charmstore           768  ubuntu  Kubernetes worker running.
rsyslog-forwarder-ha             unknown      0  rsyslog-forwarder-ha   charmstore            20  ubuntu

Unit                      Workload  Agent  Machine  Public address  Ports           Message
easyrsa/0*                active    idle   0        10.0.3.189                      Certificate Authority connected.
etcd/0*                   active    idle   1        10.0.3.200      2379/tcp        Healthy with 3 known peers
etcd/1                    active    idle   2        10.0.3.201      2379/tcp        Healthy with 3 known peers
etcd/2                    active    idle   3        10.0.3.204      2379/tcp        Healthy with 3 known peers
kubeapi-load-balancer/0*  active    idle   4        10.0.3.208      443/tcp         Loadbalancer ready.
kubernetes-master/0       waiting   idle   5        10.0.3.202      6443/tcp        Waiting for 3 kube-system pods to start
  containerd/4            active    idle            10.0.3.202                      Container runtime available
  flannel/4               active    idle            10.0.3.202                      Flannel subnet 10.1.22.1/24
kubernetes-master/1*      waiting   idle   6        10.0.3.207      6443/tcp        Waiting for 3 kube-system pods to start
  containerd/2            active    idle            10.0.3.207                      Container runtime available
  flannel/2               active    idle            10.0.3.207                      Flannel subnet 10.1.18.1/24
kubernetes-worker/0*      active    idle   7        10.0.3.203      80/tcp,443/tcp  Kubernetes worker running.
  containerd/0*           active    idle            10.0.3.203                      Container runtime available
  flannel/0*              active    idle            10.0.3.203                      Flannel subnet 10.1.47.1/24
kubernetes-worker/1       active    idle   8        10.0.3.206      80/tcp,443/tcp  Kubernetes worker running.
  containerd/3            active    idle            10.0.3.206                      Container runtime available
  flannel/3               active    idle            10.0.3.206                      Flannel subnet 10.1.4.1/24
kubernetes-worker/2       active    idle   9        10.0.3.205      80/tcp,443/tcp  Kubernetes worker running.
  containerd/1            active    idle            10.0.3.205                      Container runtime available
  flannel/1               active    idle            10.0.3.205                      Flannel subnet 10.1.69.1/24

Machine  State    DNS         Inst id       Series  AZ       Message
0        started  10.0.3.189  busy-raptor   focal   default  Deployed
1        started  10.0.3.200  crisp-swift   focal   default  Deployed
2        started  10.0.3.201  vital-tick    focal   default  Deployed
3        started  10.0.3.204  stable-dory   focal   default  Deployed
4        started  10.0.3.208  upward-ibex   focal   default  Deployed
5        started  10.0.3.202  ideal-oyster  focal   default  Deployed
6        started  10.0.3.207  safe-goat     focal   default  Deployed
7        started  10.0.3.203  glad-hen      focal   default  Deployed
8        started  10.0.3.206  cool-aphid    focal   default  Deployed
9        started  10.0.3.205  epic-moose    focal   default  Deployed

So it really was a problem introduced by the local deployment. Next time, given the chance, I will try running build-flannel-resources.sh after a local deployment.
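
For reference, after rebuilding the resource with build-flannel-resources.sh as sketched earlier, the remaining steps would presumably be the two below. The tarball filename is an assumption; the script's actual output path needs checking:

# attach under the per-arch name the charm expects (output filename assumed)
juju attach flannel flannel-amd64=./flannel-amd64.tar.gz
# the flannel units should then leave the blocked state
juju status flannel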
