
Multi-Node OpenStack Charms Deployment Guide 0.0.1.dev--43--Deploying Charmed K8s on OpenStack with Juju

尹辰沛
2023-12-01

References:

Multi-Node OpenStack Charms Deployment Guide 0.0.1.dev–41–Configuring openstack-base-73 as a Juju-managed OpenStack cloud

Multi-Node OpenStack Charms Deployment Guide 0.0.1.dev–42–Deploying the openstack-base-78 bundle with single-NIC OpenStack networking; note: never start a hostname with a digit

Charmed Kubernetes #679

Charmed Kubernetes on OpenStack

Following Multi-Node OpenStack Charms Deployment Guide 0.0.1.dev–42 (deploying the openstack-base-78 bundle with single-NIC OpenStack networking; note: never start a hostname with a digit), deploy openstack-base-78 on servers with a single Ethernet port. Then reapply the configuration from Multi-Node OpenStack Charms Deployment Guide 0.0.1.dev–41 (configuring openstack-base-73 as a Juju-managed OpenStack cloud), so that the deployed openstack-base-78 becomes a private cloud Juju can manage directly, letting Juju drive VMs, i.e. instances, on OpenStack.
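
The full procedure is in the guide referenced above; as a minimal sketch of registering the cloud with Juju (the cloud name openstack-cloud and controller name openstack-cloud-regionone match the status output later in this post, while the Keystone endpoint address is an assumed placeholder):

# Describe the cloud for Juju (endpoint is a placeholder; use your Keystone URL)
cat > openstack-cloud.yaml <<EOF
clouds:
  openstack-cloud:
    type: openstack
    auth-types: [userpass]
    regions:
      RegionOne:
        endpoint: http://10.0.9.10:5000/v3
EOF
juju add-cloud --client openstack-cloud openstack-cloud.yaml

# Register credentials interactively, then bootstrap the controller
# (mem=3584M matches the controller flavor noted below)
juju add-credential openstack-cloud
juju bootstrap openstack-cloud openstack-cloud-regionone \
    --bootstrap-constraints mem=3584M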

Configure flavors in OpenStack:

Based on the contents of bundle.yaml in charmed-kubernetes-679, the following flavors need to be created first (see the creation sketch after the note below):

  • root-disk=8G
  • mem=4G root-disk=8G
  • cores=2 mem=4G root-disk=16G
  • cores=4 mem=4G root-disk=16G

Note: per Multi-Node OpenStack Charms Deployment Guide 0.0.1.dev–41 (configuring openstack-base-73 as a Juju-managed OpenStack cloud), a flavor with 3584M of memory must be created before bootstrapping the controller:

  • mem=3584M root-disk=20G
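
A minimal sketch of creating matching flavors with the OpenStack CLI; flavor names, and any vCPU/RAM values not pinned by a constraint, are assumptions (Juju picks the smallest flavor that satisfies each constraint):

# One flavor per constraint set above (unpinned values are assumed)
openstack flavor create --vcpus 1 --ram 2048 --disk 8  k8s.d8
openstack flavor create --vcpus 1 --ram 4096 --disk 8  k8s.m4.d8
openstack flavor create --vcpus 2 --ram 4096 --disk 16 k8s.c2m4.d16
openstack flavor create --vcpus 4 --ram 4096 --disk 16 k8s.c4m4.d16

# Controller flavor from the note above (vCPU count is assumed)
openstack flavor create --vcpus 2 --ram 3584 --disk 20 juju.controller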

Deploy Charmed K8s on OpenStack

Charmed Kubernetes runs seamlessly on OpenStack. By adding openstack-integrator, your cluster can also use OpenStack-native features directly.

OpenStack integrator

The openstack-integrator charm simplifies working with Charmed Kubernetes on OpenStack. Using the credentials provided to Juju, it acts as a proxy between Charmed Kubernetes and the underlying cloud, granting permissions such as dynamically creating Cinder volumes.
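
Sharing those credentials with the charm is what the trust flag in the overlay below is for; if the charm ever reports missing credentials, you can grant them explicitly with a standard Juju command:

juju trust openstack-integrator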

Prerequisites

The OpenStack integration requires Octavia to be available in the underlying OpenStack cloud, in order to support Kubernetes LoadBalancer services and to create the load balancer for the Kubernetes API.
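
A quick way to confirm Octavia is reachable (assumes the python-octaviaclient plugin is installed and your credentials are loaded):

# Should return a (possibly empty) list rather than an endpoint error
openstack loadbalancer list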

Installation:

When installing Charmed Kubernetes from the Juju bundle, you can add openstack-integrator at the same time using the following overlay file (downloadable from here):

Edit openstack-overlay.yaml:

vim openstack-overlay.yaml
description: Charmed Kubernetes overlay to add native OpenStack support.
applications:
  openstack-integrator:
    annotations:
      gui-x: "600"
      gui-y: "300"
    charm: cs:~containers/openstack-integrator
    num_units: 1
    trust: true
relations:
  - ['openstack-integrator', 'kubernetes-master:openstack']
  - ['openstack-integrator', 'kubernetes-worker:openstack']

Add a k8s model

juju add-model k8s
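
juju add-model both creates the model and switches to it; a quick check before deploying:

# The current model is marked with an asterisk
juju models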

Deploy charmed-kubernetes

 juju deploy charmed-kubernetes --overlay /root/overlay/openstack-overlay.yaml --trust
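
The deployment takes quite a while to settle; a convenient way to follow progress (assumes the watch utility is available):

watch -c juju status --color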

Once deployment finishes, the status looks like this:

juju status
Model  Controller                 Cloud/Region               Version  SLA          Timestamp
k8s    openstack-cloud-regionone  openstack-cloud/RegionOne  2.9.18   unsupported  17:08:46+08:00

App                    Version   Status  Scale  Charm                  Store       Channel   Rev  OS      Message
containerd             go1.13.8  active      5  containerd             charmstore  stable    178  ubuntu  Container runtime available
easyrsa                3.0.1     active      1  easyrsa                charmstore  stable    420  ubuntu  Certificate Authority connected.
etcd                   3.4.5     active      3  etcd                   charmstore  stable    634  ubuntu  Healthy with 3 known peers
flannel                0.11.0    active      5  flannel                charmstore  stable    597  ubuntu  Flannel subnet 10.1.31.1/24
kubeapi-load-balancer  1.18.0    active      1  kubeapi-load-balancer  charmstore  stable    844  ubuntu  Loadbalancer ready.
kubernetes-master      1.22.3    active      2  kubernetes-master      charmstore  stable   1078  ubuntu  Kubernetes master running.
kubernetes-worker      1.22.3    active      3  kubernetes-worker      charmstore  stable    816  ubuntu  Kubernetes worker running.
openstack-integrator   xena      active      1  openstack-integrator   charmstore  stable    182  ubuntu  Ready

Unit                      Workload  Agent  Machine  Public address  Ports             Message
easyrsa/0*                active    idle   0        192.168.0.107                     Certificate Authority connected.
etcd/0*                   active    idle   1        192.168.0.15    2379/tcp          Healthy with 3 known peers
etcd/1                    active    idle   2        192.168.0.175   2379/tcp          Healthy with 3 known peers
etcd/2                    active    idle   3        192.168.0.110   2379/tcp          Healthy with 3 known peers
kubeapi-load-balancer/0*  active    idle   4        192.168.0.157   443/tcp,6443/tcp  Loadbalancer ready.
kubernetes-master/1*      active    idle   6        192.168.0.43    6443/tcp          Kubernetes master running.
  containerd/3            active    idle            192.168.0.43                      Container runtime available
  flannel/3               active    idle            192.168.0.43                      Flannel subnet 10.1.69.1/24
kubernetes-master/2       active    idle   11       192.168.0.40    6443/tcp          Kubernetes master running.
  containerd/4            active    idle            192.168.0.40                      Container runtime available
  flannel/4               active    idle            192.168.0.40                      Flannel subnet 10.1.4.1/24
kubernetes-worker/0       active    idle   7        192.168.0.166   80/tcp,443/tcp    Kubernetes worker running.
  containerd/2            active    idle            192.168.0.166                     Container runtime available
  flannel/2               active    idle            192.168.0.166                     Flannel subnet 10.1.16.1/24
kubernetes-worker/1*      active    idle   8        192.168.0.49    80/tcp,443/tcp    Kubernetes worker running.
  containerd/0*           active    idle            192.168.0.49                      Container runtime available
  flannel/0*              active    idle            192.168.0.49                      Flannel subnet 10.1.31.1/24
kubernetes-worker/2       active    idle   9        192.168.0.136   80/tcp,443/tcp    Kubernetes worker running.
  containerd/1            active    idle            192.168.0.136                     Container runtime available
  flannel/1               active    idle            192.168.0.136                     Flannel subnet 10.1.20.1/24
openstack-integrator/2*   active    idle   13       10.0.9.97                         Ready

Machine  State    DNS            Inst id                               Series  AZ    Message
0        started  192.168.0.107  93ae9def-432a-4f26-89ad-dba737cfb567  focal   nova  ACTIVE
1        started  192.168.0.15   f64b471d-76f7-4bd5-9eaf-417b27e19401  focal   nova  ACTIVE
2        started  192.168.0.175  306fc000-cfcd-4440-b475-e51bb017e0a2  focal   nova  ACTIVE
3        started  192.168.0.110  fe9c5a0c-e36d-4cd7-a829-2e07967ec55e  focal   nova  ACTIVE
4        started  192.168.0.157  f7840404-2b1e-47cc-84bb-8c3181a8253d  focal   nova  ACTIVE
6        started  192.168.0.43   f2c57ad7-6d46-4943-832f-9da343a7c211  focal   nova  ACTIVE
7        started  192.168.0.166  c2fa2e24-b21a-40f6-8c80-f01dca4043fd  focal   nova  ACTIVE
8        started  192.168.0.49   13ac8f35-9a8d-48e7-83b5-0aea3971ba4a  focal   nova  ACTIVE
9        started  192.168.0.136  baca1e8b-fdd4-4818-94b1-3c3ba9dea7d3  focal   nova  ACTIVE
11       started  192.168.0.40   52233035-1c94-4c06-89f0-f84cf5451319  focal   nova  ACTIVE
13       started  10.0.9.97      069f8ee3-2eb1-4e10-babc-0de25faedec0  focal   nova  ACTIVE
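
Each machine above is a Nova instance in the underlying cloud; you can cross-check from the OpenStack side (instance IDs should match the Inst id column):

openstack server list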

That said, two problems came up during deployment:

1. During deployment a VM may hang, and you have to remove the unit and its machine manually and then redeploy the unit. In this example you can tell that openstack-integrator was redeployed twice, because its unit number is now 2, and that kubernetes-master was redeployed once, because kubernetes-master/1 is its first unit.

juju remove-unit kubernetes-master/0
juju remove-machine 5

juju remove-unit openstack-integrator/0
juju remove-machine 10
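
After the hung unit and its machine are removed, redeploy the unit onto a fresh machine with standard Juju commands (if a machine is stuck in a dying state, --force can be added to remove-machine, at your own risk):

juju add-unit kubernetes-master
juju add-unit openstack-integrator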

2. During deployment, kubernetes-master shows "Waiting for 7 kube-system pods to start". Don't worry, just wait; it eventually changes to "Kubernetes master running."
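
Once all workloads are active, a minimal sketch for getting kubectl access, following the upstream Charmed Kubernetes docs (kubernetes-master/1 matches this example's status, since unit 0 was removed; assumes kubectl is installed locally):

mkdir -p ~/.kube
juju scp kubernetes-master/1:config ~/.kube/config
kubectl get nodes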
