
Setting up a Kubernetes (k8s) cluster

充子航
2023-12-01

Configure yum repositories:

# Clear the existing yum repos
[root@K8S-node2 ~]# cd /etc/yum.repos.d/
[root@K8S-node2 yum.repos.d]# ll
总用量 32
-rw-r--r--. 1 root root 1664 11月 23 2018 CentOS-Base.repo
-rw-r--r--. 1 root root 1309 11月 23 2018 CentOS-CR.repo
-rw-r--r--. 1 root root  649 11月 23 2018 CentOS-Debuginfo.repo
-rw-r--r--. 1 root root  314 11月 23 2018 CentOS-fasttrack.repo
-rw-r--r--. 1 root root  630 11月 23 2018 CentOS-Media.repo
-rw-r--r--. 1 root root 1331 11月 23 2018 CentOS-Sources.repo
-rw-r--r--. 1 root root 5701 11月 23 2018 CentOS-Vault.repo
[root@K8S-node2 yum.repos.d]# mv * /opt/
[root@K8S-node2 yum.repos.d]# ll
总用量 0


# Set up a local yum repo from the CentOS installation ISO
[root@K8S-node2 ~]# cp /etc/fstab /etc/fstab.bak
[root@K8S-node2 ~]# echo "/dev/sr0        /mnt    iso9660 defaults        0       0" >> /etc/fstab
[root@K8S-node1 yum.repos.d]# cat centos7.repo
[centos-source]
name=centos7
baseurl=file:///mnt
gpgcheck=0
enabled=1
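
The fstab entry only takes effect after mounting, so as a quick check (assuming the CentOS installation ISO is attached as /dev/sr0, as in the fstab line above):

mount -a                        # mount /dev/sr0 on /mnt per the fstab entry
yum clean all && yum repolist   # the centos-source repo should now be listed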

# Set up the network repo (Aliyun mirror)
[root@K8S-master yum.repos.d]# yum -y install wget
[root@K8S-master yum.repos.d]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo


# Build the yum cache
[root@K8S-master yum.repos.d]# yum makecache


# Install the EPEL repo
[root@K8S-master yum.repos.d]# yum -y install epel-release.noarch



Configure bash command completion

[root@K8S-master ~]# yum install bash-completion -y
[root@K8S-master ~]# source /usr/share/bash-completion/bash_completion
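If you also want completion for kubectl once it is installed (it is installed later in this guide), a small sketch:

# Assumes kubectl is already installed (see the Kubernetes components section below)
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc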

Configure a static IP

[root@K8S-master ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33

Edit the NIC configuration file; the key items to set are BOOTPROTO="static", ONBOOT="yes", IPADDR=192.168.1.63, GATEWAY=192.168.1.1, NETMASK=255.255.255.0 and DNS1=114.114.114.114, six items in total, as in the sketch below.
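
For reference, an ifcfg-ens33 along these lines covers the six items (the addresses are example values; adapt them to your own network):

TYPE="Ethernet"
BOOTPROTO="static"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.1.63
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=114.114.114.114

# Apply the change
systemctl restart network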

Stop and disable the firewall, and flush iptables rules

[root@K8S-master ~]# systemctl stop firewalld && systemctl disable firewalld 
[root@K8S-master ~]# iptables -F


Disable SELinux
Temporarily:
[root@K8S-master ~]# setenforce 0

Permanently (set SELINUX=disabled in /etc/selinux/config; takes effect after a reboot):
[root@K8S-master ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Configure host name mapping (/etc/hosts)

[root@K8S-master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.31.31   master
192.168.31.32   node1
192.168.31.33   node2

Configure passwordless SSH login (see the sketch below)
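
A minimal sketch using key-based SSH from the master to the two workers (host names are the ones defined in /etc/hosts above):

# Generate a key pair on the master (no passphrase)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Copy the public key to each node (prompts for the root password once per node)
ssh-copy-id root@node1
ssh-copy-id root@node2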

Adjust kernel parameters

# Adjust kernel parameters
[root@K8S-master ~]# modprobe br_netfilter
[root@K8S-master ~]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
[root@K8S-master ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

[root@K8S-master ~]# modprobe br_netfilter
[root@K8S-master ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1


Why enable ip_forward?
If ip_forward is not enabled on the host, containers running on that host cannot be reached from other hosts.
Why enable net.bridge.bridge-nf-call-ip6tables (and -iptables)?
By default, traffic sent from containers onto the default bridge is not passed through the host's iptables/ip6tables chains and is not forwarded externally. Enabling forwarding requires:
net.bridge.bridge-nf-call-ip6tables = 1
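
Note that modprobe only loads br_netfilter for the current boot. To have it loaded automatically after a reboot, one option (assuming the standard systemd-modules-load mechanism on CentOS 7) is:

cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF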

Install base packages

[root@K8S-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet

Configure the Aliyun Docker repo

[root@K8S-master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Configure a local Docker repo (see the createrepo sketch below)

 tee /etc/yum.repos.d/k8s-docker.repo << 'EOF'
[k8s-docker]
name=k8s-docker
baseurl=file:///opt/k8s-docker
enabled=1
gpgcheck=0
EOF
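
This local repo assumes the docker-ce RPMs (and their dependencies) have already been copied to /opt/k8s-docker; the metadata yum needs can be generated with createrepo, roughly like this:

# /opt/k8s-docker is the directory named in the baseurl above
yum -y install createrepo
createrepo /opt/k8s-docker      # creates the repodata/ directory
yum clean all && yum makecache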

Configure the Aliyun Kubernetes yum repo

tee /etc/yum.repos.d/kubernetes.repo <<-'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
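
A quick sanity check that the Docker and Kubernetes repos are now visible to yum:

yum repolist | grep -Ei 'docker|kubernetes'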

Configure time synchronization

[root@K8S-master ~]# ntpdate cn.pool.ntp.org
29 Nov 20:55:49 ntpdate[14663]: adjust time server 84.16.67.12 offset 0.002239 sec

[root@K8S-master ~]# crontab -l
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
[root@K8S-node2 ~]# service crond restart

Enable IPVS

[root@K8S-master modules]# yum -y install ipvsadm ipset

[root@K8S-master modules]# cat >> /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
modprobe -- ip_vs_lc
modprobe -- ip_vs_wlc
modprobe -- ip_vs_lblc
modprobe -- ip_vs_lblcr
modprobe -- ip_vs_dh
modprobe -- ip_vs_nq
modprobe -- ip_vs_sed
modprobe -- ip_vs_ftp
modprobe -- nf_conntrack
EOF
[root@K8S-master modules]# chmod 755 /etc/sysconfig/modules/ipvs.modules  && bash /etc/sysconfig/modules/ipvs.modules && lsmod |grep ip_vs

Install docker-ce

[root@K8S-master modules]# yum install docker-ce -y
[root@K8S-master modules]# systemctl start docker && systemctl enable docker.service

[root@K8S-master modules]#  tee /etc/docker/daemon.json << 'EOF'
{
"registry-mirrors":["https://vh3bm52y.mirror.aliyuncs.com","https://registry.dockercn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hubmirror.c.163.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
} 
EOF


[root@K8S-master modules]# systemctl daemon-reload
[root@K8S-master modules]# systemctl restart docker


Why specify native.cgroupdriver=systemd?
During the Kubernetes installation you may otherwise run into:
failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is
different from docker cgroup driver: "systemd"
kubelet's cgroup driver defaults to cgroupfs, while the Docker configured here uses systemd; if the two differ, containers fail to start. The driver in use can be checked with docker info:
Cgroup Driver: systemd
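
To confirm the setting took effect after the restart:

docker info | grep -i "cgroup driver"    # expect: Cgroup Driver: systemd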

Install the Kubernetes components

 

[root@K8S-master modules]# yum install -y kubelet-1.20.4 kubeadm-1.20.4 kubectl-1.20.4

[root@K8S-master modules]#  systemctl enable kubelet


kubelet: runs on every node in the cluster and is responsible for starting Pods and containers
kubeadm: the command-line tool used to initialize and bootstrap the cluster
kubectl: the command-line client for communicating with the cluster; with kubectl you can deploy and manage applications, inspect resources, and create, delete and update components

Initialize the cluster

List the images that will be needed:
[root@K8S-master modules]# kubeadm config images list
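
Optionally, the images can be pulled ahead of time so that kubeadm init does not have to download them; a sketch using the same repository and version as the init command below:

kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.20.4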

[root@K8S-master ~]#  kubeadm init --kubernetes-version=1.20.4 --apiserver-advertise-address=192.168.31.31  --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=SystemVerification  --service-cidr=10.96.0.0/12



[Parameter notes]:

apiserver-advertise-address: the master node's local IP

image-repository: the registry to pull the control-plane images from

kubernetes-version: the Kubernetes version to install; after setup it can be checked with kubectl version

service-cidr: the Service network; the default 10.96.0.0/12 is fine

pod-network-cidr: the Pod network; 10.244.0.0/16 is fine (it matches the flannel default)



Output indicating a successful initialization:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
Run these on the master:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config


Deploy the Pod network (a flannel example is sketched below):
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
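
Because the cluster was initialized with --pod-network-cidr=10.244.0.0/16 (flannel's default), flannel is a natural choice here; a sketch, with the manifest URL taken from the flannel project's documentation (verify it against the project, and check version compatibility, before applying):

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml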

Then you can join any number of worker nodes by running the following on each as root:
Join the cluster:
kubeadm join 192.168.31.31:6443 --token wk91bg.xotsdn5p95g76h45 \
    --discovery-token-ca-cert-hash sha256:8996f85446df9b39d739aa98d0d1b73cd490782035441236640c5e923433080a



To regenerate the join command later (e.g. if the token has expired):

kubeadm token create --print-join-command
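
After the workers have joined, a quick check from the master (nodes stay NotReady until the Pod network add-on is running):

kubectl get nodes -o wide
kubectl get pods -n kube-system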







