Model Oriented Development Engineering - Kubernetes (k8s) Installation Tutorial

单于阳
2023-12-01

Prerequisites on All Nodes

Reset Environment (optional, for those who have installed k8s before)


Reset kubeadm

kubeadm reset

Make sure no earlier version is installed

yum remove kubelet kubeadm kubectl docker-ce

Node date time synchronization

Check node date time

hwclock

Make sure the date and time are in sync on all nodes.
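For a quick side-by-side check, a minimal sketch assuming passwordless ssh to the node hostnames used later in this guide (playground-1/2/3):

for n in playground-1 playground-2 playground-3; do
  # print each node's current time next to its name for easy comparison
  echo -n "$n: "; ssh "$n" date
done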

Sync date time (optional, if date time is not synced)

ntpdate -u ntp.api.bz;hwclock -w
crontab -e 

Add the following entry (no user field is needed in a user crontab):

10 5 * * * /usr/sbin/ntpdate -u ntp.api.bz;hwclock -w

Make sure all nodes have the same hostname resolution

sudo hostname

playground-1

sudo vi /etc/hosts

10.60.1.224 playground-1
10.60.1.225 playground-2
10.60.1.204 playground-3
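To verify that resolution works, a quick check (hostnames as configured above; the addresses printed should match the entries in /etc/hosts):

getent hosts playground-2    # should print 10.60.1.225 playground-2
ping -c 1 playground-3       # should reach 10.60.1.204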

Disable swap

sudo swapoff -a
sudo vi /etc/fstab

Comment out the swap entry so swap stays disabled after a reboot:

#UUID=9a95c844-0bb9-472e-b743-2d0885f699a4 swap swap defaults 0 0
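To confirm swap is fully off (no entries should be listed, and the Swap line should show 0 total):

swapon -s
free -h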

K8s Package Installation on All Nodes

Install docker and k8s

sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
    
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sudo setenforce 0

sudo yum install -y docker-ce kubelet kubeadm kubectl
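A quick check that the expected versions landed (output will vary by release):

kubeadm version -o short
kubectl version --client --short
docker --version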

Start kubelet & docker

sudo systemctl start kubelet.service
sudo systemctl start docker.service

Check kubelet & docker status

sudo systemctl status kubelet.service
sudo systemctl status docker.service

Enable kubelet & docker on server startup

sudo systemctl enable kubelet.service
sudo systemctl enable docker.service
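Both services should report enabled:

systemctl is-enabled kubelet docker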

K8s Cluster Initialization with kubeadm on Master Node

If any stage of this section fails, please run **kubeadm reset** first, then fix the problem and proceed again

Make local k8s images (optional, if you are in China or you see the following error)


[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) , error: exit status 1

Install docker & k8s package source

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Check the image versions required by kubeadm

kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

Pull image from ali-cloud mirror

Versions must match those listed above, i.e. v1.15.1, 3.1, 3.3.10 and 1.3.1 respectively

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.15.1
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.15.1
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.15.1
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.15.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.3.10
docker pull coredns/coredns:1.3.1

Make local version of k8s image

Versions must match those listed above, i.e. v1.15.1, 3.1, 3.3.10 and 1.3.1 respectively

docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
docker tag docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1
docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.3.10  k8s.gcr.io/etcd:3.3.10
docker tag docker.io/mirrorgooglecontainers/pause:3.1  k8s.gcr.io/pause:3.1
docker tag docker.io/coredns/coredns:1.3.1  k8s.gcr.io/coredns:1.3.1
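If you prefer to script the pull-and-tag steps for the kube-* components, a minimal sketch assuming the same v1.15.1 versions (pause, etcd and coredns still use the individual commands above):

for c in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
  # pull from the mirror, then retag so kubeadm finds the image under k8s.gcr.io
  docker pull mirrorgooglecontainers/${c}-amd64:v1.15.1
  docker tag mirrorgooglecontainers/${c}-amd64:v1.15.1 k8s.gcr.io/${c}:v1.15.1
done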

Make sure it succeeded

docker image list | grep k8s.gcr.io/

k8s.gcr.io/kube-controller-manager v1.15.1
k8s.gcr.io/kube-proxy v1.15.1
k8s.gcr.io/kube-scheduler v1.15.1
k8s.gcr.io/kube-apiserver v1.15.1


Download the yaml files for the pod network

wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

Look inside calico.yaml and find the network range, adjust if needed.

vi calico.yaml

name: CALICO_IPV4POOL_CIDR
value: "192.168.0.0/16"
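A quick way to locate the setting without opening the editor:

grep -A 1 CALICO_IPV4POOL_CIDR calico.yaml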

Initialize our kubernetes cluster

Specify a pod network range matching the one in calico.yaml.

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

You should see the following output if everything went correctly.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.60.1.225:6443 --token shgg3d.rhc4iv79eltzq1jz \
    --discovery-token-ca-cert-hash sha256:90c770a0600b77622b35df17bb63bba6374429aa3625adaa6c9a610e506bda98

Copy the following command from the output; you will need it later to join nodes.

kubeadm join 10.60.1.225:6443 --token shgg3d.rhc4iv79eltzq1jz \
    --discovery-token-ca-cert-hash sha256:90c770a0600b77622b35df17bb63bba6374429aa3625adaa6c9a610e506bda98

Configure admin access to the API server

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
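A quick sanity check; the master node is expected to show NotReady until the pod network is applied in the next step:

kubectl cluster-info
kubectl get nodes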

Setup security and pod network

kubectl apply -f rbac-kdd.yaml
kubectl apply -f calico.yaml
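Watch the system pods until the calico and coredns pods reach Running (Ctrl-C to stop watching):

kubectl get pods -n kube-system -w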

Make sure kubelet is now active

sudo systemctl status kubelet.service

kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Sun 2019-07-28 01:18:40 CST; 35min ago
Docs: https://kubernetes.io/docs/

Node Join into K8s Cluster

Paste the join command from master node initialization mentioned above.

Join Node from scratch (Optional)


If you didn't save the join command from the master node initialization, or the join token has expired, follow the steps below.

Generate token on master

kubeadm token list
kubeadm token create

Compute the CA certificate hash on master

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
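Alternatively, recent kubeadm versions can print the whole join command (token plus cert hash) in one step:

kubeadm token create --print-join-command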

Join cluster on worker node

The IP below should be the master node's; use the generated token and cert hash.

kubeadm join 10.60.1.225:6443 --token shgg3d.rhc4iv79eltzq1jz \
    --discovery-token-ca-cert-hash sha256:90c770a0600b77622b35df17bb63bba6374429aa3625adaa6c9a610e506bda98

Cluster Node Status Validation

kubectl get nodes --watch

NAME           STATUS     ROLES    AGE   VERSION
playground-1   NotReady   <none>   13s   v1.15.1
playground-2   Ready      <none>   18s   v1.15.1
playground-3   Ready      master   12m   v1.15.1

Admin Tools & AddOns

Kubectl autocompletion

Install bash-completion

yum -y install bash-completion
source /usr/share/bash-completion/bash_completion

Apply bash-completion to kubectl

source <(kubectl completion bash) # setup autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.

K8s Cheatsheet

https://kubernetes.io/docs/reference/kubectl/cheatsheet/

Common Error

Node date time is not synced

I0727 17:58:44.656940 24019 token.go:146] [discovery] Failed to request cluster info, will try again: [Get https://10.60.1.225:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]
Refer to the prerequisite section for node date time synchronization.
