Optional, for those who have installed k8s before:
kubeadm reset
yum remove kubelet kubeadm kubectl docker-ce
hwclock
Make sure the date and time are synced across all nodes.
Optional, if the date and time are not synced:
ntpdate -u ntp.api.bz; hwclock -w
crontab -e
Put in the following content (a user crontab edited via crontab -e takes no user field, so the "root" column is dropped here):
10 5 * * * /usr/sbin/ntpdate -u ntp.api.bz; hwclock -w
sudo hostname playground-1
Set each node's hostname to its own name accordingly.
sudo vi /etc/hosts
10.60.1.224 playground-1
10.60.1.225 playground-2
10.60.1.204 playground-3
sudo swapoff -a
sudo vi /etc/fstab
Comment out the swap entry so swap stays disabled after reboot:
#UUID=9a95c844-0bb9-472e-b743-2d0885f699a4 swap swap defaults 0 0
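The fstab edit can also be done non-interactively. A sketch, wrapped in a helper so it can be tried on a copy first (`comment_swap` is a name introduced here, not a standard tool):

```shell
# Comment out any active swap entry in an fstab-style file so swap
# stays off after reboot (kubelet refuses to start with swap enabled).
comment_swap() {
  sed -ri 's@^([^#].*[[:space:]]swap[[:space:]].*)@#\1@' "$1"
}
# For real use (as root): swapoff -a && comment_swap /etc/fstab
```

Check the result with `grep swap /etc/fstab` before rebooting.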
sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
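Note that `setenforce 0` only lasts until reboot. To keep SELinux permissive persistently, the config file can be edited in place; a sketch, again as a helper so it can be tried on a copy (`set_selinux_permissive` is a name introduced here):

```shell
# Make SELinux permissive across reboots; setenforce 0 alone is lost on reboot.
set_selinux_permissive() {
  sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$1"
}
# For real use (as root): setenforce 0 && set_selinux_permissive /etc/selinux/config
```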
sudo yum install -y docker-ce kubelet kubeadm kubectl
sudo systemctl start kubelet.service
sudo systemctl start docker.service
sudo systemctl status kubelet.service
sudo systemctl status docker.service
sudo systemctl enable kubelet.service
sudo systemctl enable docker.service
If any stage of this section fails, run **kubeadm reset** first, then fix the problem and proceed again.
Optional, if you are in China or you see the following error:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) , error: exit status 1
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
The versions must match; e.g. above they are v1.15.1, 3.1, 3.3.10, and 1.3.1 accordingly.
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.15.1
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.15.1
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.15.1
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.15.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.3.10
docker pull coredns/coredns:1.3.1
The versions must match; e.g. above they are v1.15.1, 3.1, 3.3.10, and 1.3.1 accordingly.
docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
docker tag docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1
docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
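The pulls and tags above can be scripted. A sketch in bash: the helper below (`src_to_gcr`, a name introduced here) derives the k8s.gcr.io name by stripping the repository prefix and the -amd64 suffix; the image list and versions are the ones `kubeadm config images list` printed above, and `RUN_PULLS` is a guard variable introduced here so the loop only runs when explicitly enabled:

```shell
# Map a mirror image name to the k8s.gcr.io name kubeadm expects, e.g.
# mirrorgooglecontainers/kube-proxy-amd64:v1.15.1 -> k8s.gcr.io/kube-proxy:v1.15.1
src_to_gcr() {
  local img="${1##*/}"            # drop the repo prefix (mirrorgooglecontainers/, coredns/)
  echo "k8s.gcr.io/${img/-amd64/}"
}

# Set RUN_PULLS=1 to actually pull and retag (requires docker and network access).
if [ "${RUN_PULLS:-0}" = 1 ]; then
  for src in \
    mirrorgooglecontainers/kube-apiserver-amd64:v1.15.1 \
    mirrorgooglecontainers/kube-controller-manager-amd64:v1.15.1 \
    mirrorgooglecontainers/kube-scheduler-amd64:v1.15.1 \
    mirrorgooglecontainers/kube-proxy-amd64:v1.15.1 \
    mirrorgooglecontainers/pause:3.1 \
    mirrorgooglecontainers/etcd-amd64:3.3.10 \
    coredns/coredns:1.3.1
  do
    docker pull "$src"
    docker tag "$src" "$(src_to_gcr "$src")"
  done
fi
```

Adjust the list and versions for your release before running.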
Make sure the tagging succeeded:
docker image list | grep k8s.gcr.io/
k8s.gcr.io/kube-controller-manager v1.15.1
k8s.gcr.io/kube-proxy v1.15.1
k8s.gcr.io/kube-scheduler v1.15.1
k8s.gcr.io/kube-apiserver v1.15.1
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
vi calico.yaml
name: CALICO_IPV4POOL_CIDR
value: "192.168.0.0/16"
Run kubeadm init, specifying a pod network range matching the one in calico.yaml:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
You should see the following output if everything went correctly:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.60.1.225:6443 --token shgg3d.rhc4iv79eltzq1jz
--discovery-token-ca-cert-hash sha256:90c770a0600b77622b35df17bb63bba6374429aa3625adaa6c9a610e506bda98
Copy the following command from the output; you will need it later to join worker nodes:
kubeadm join 10.60.1.225:6443 --token shgg3d.rhc4iv79eltzq1jz
--discovery-token-ca-cert-hash sha256:90c770a0600b77622b35df17bb63bba6374429aa3625adaa6c9a610e506bda98
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f rbac-kdd.yaml
kubectl apply -f calico.yaml
Make sure kubelet is now active:
sudo systemctl status kubelet.service
kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Sun 2019-07-28 01:18:40 CST; 35min ago
Docs: https://kubernetes.io/docs/
Paste the join command from the master node initialization mentioned above.
Optional: if you didn't save the join command from the master node initialization, or the join token expired, follow the steps below.
kubeadm token list
kubeadm token create
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
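The join command is just these three values glued together; a sketch assembling it (the IP, token, and hash below are the example values from this guide, so substitute your own). Note that recent kubeadm versions can also print the whole command in one step with `kubeadm token create --print-join-command`.

```shell
# Assemble the join command from its parts; MASTER_IP, TOKEN and
# CA_HASH must come from your own cluster (example values shown).
MASTER_IP=10.60.1.225
TOKEN=shgg3d.rhc4iv79eltzq1jz
CA_HASH=90c770a0600b77622b35df17bb63bba6374429aa3625adaa6c9a610e506bda98
JOIN_CMD="kubeadm join ${MASTER_IP}:6443 --token ${TOKEN} --discovery-token-ca-cert-hash sha256:${CA_HASH}"
echo "$JOIN_CMD"
```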
The IP below should be the master node's; use the generated token and cert hash.
kubeadm join 10.60.1.225:6443 --token shgg3d.rhc4iv79eltzq1jz \
    --discovery-token-ca-cert-hash sha256:90c770a0600b77622b35df17bb63bba6374429aa3625adaa6c9a610e506bda98
kubectl get nodes --watch
NAME STATUS ROLES AGE VERSION
playground-1 NotReady <none> 13s v1.15.1
playground-2 Ready <none> 18s v1.15.1
playground-3 Ready master 12m v1.15.1
Install bash-completion
yum -y install bash-completion
source /usr/share/bash-completion/bash_completion
Apply bash-completion to kubectl
source <(kubectl completion bash) # setup autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.
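Optionally, the kubectl cheat sheet linked below also suggests a short alias with completion wired to it; a sketch appending both to ~/.bashrc:

```shell
# Short "k" alias for kubectl, with tab completion attached to the alias too.
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
```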
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
If a node join fails with an error like:
I0727 17:58:44.656940 24019 token.go:146] [discovery] Failed to request cluster info, will try again: [Get https://10.60.1.225:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]
refer to the prerequisite section above for node date/time synchronization.