This article follows the official Kubernetes document "Installing Kubernetes on Linux with kubeadm" to deploy a Kubernetes cluster on CentOS 7.2 with kubeadm, and covers fixes for a few problems encountered while following that document.
Operating system version
# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
Kernel version
# uname -r
3.10.0-327.el7.x86_64
Cluster nodes
192.168.120.122 kube-master
192.168.120.123 kube-agent1
192.168.120.124 kube-agent2
192.168.120.125 kube-agent3
That is, the cluster consists of one control-plane node and three worker nodes.
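If these hostnames are not resolvable through DNS, one common approach is to add them to /etc/hosts on every node. This is a hedged sketch rather than part of the original steps; the names and addresses are simply those listed above:

# cat <<EOF >> /etc/hosts
192.168.120.122 kube-master
192.168.120.123 kube-agent1
192.168.120.124 kube-agent2
192.168.120.125 kube-agent3
EOF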
Pre-deployment preparation
Ensure the nodes can reach Google-hosted repositories
The packages used by this deployment method are served from Google-hosted repositories, so every cluster node must be able to reach the Internet and those sites in particular. How to configure such access is out of scope here.
Disable the firewall
# systemctl stop firewalld.service && systemctl disable firewalld.service
Disable SELinux
# setenforce 0
# sed -i.bak 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
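To confirm that SELinux is now in permissive mode, a quick check (not part of the original steps) is:

# getenforce
Permissive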
Configure the yum repository
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Install kubelet and kubeadm
Install the following packages on all nodes:
# yum install -y docker kubelet kubeadm kubectl kubernetes-cni
# systemctl enable docker && systemctl start docker
# systemctl enable kubelet && systemctl start kubelet
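The command above installs whatever versions are latest in the repository. If you want to reproduce the versions used in this article, yum also accepts a pinned name-version form; this is only an assumption about the repository's package naming (a release suffix such as -0 may also be needed), not part of the original steps:

# yum install -y docker kubelet-1.6.4 kubeadm-1.6.4 kubectl-1.6.4 kubernetes-cni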
Then set the kernel parameters:
# sysctl net.bridge.bridge-nf-call-iptables=1
# sysctl net.bridge.bridge-nf-call-ip6tables=1
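These sysctl settings do not survive a reboot. A common way to make them persistent (not shown in the original article) is to write them to a file under /etc/sysctl.d and reload:

# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# sysctl --system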
Initialize the control-plane node
# kubeadm init --pod-network-cidr=10.244.0.0/16
Because flannel will be used as the pod network in this cluster, the --pod-network-cidr flag must be passed.
Note: initialization takes a while because the process pulls several Docker images.
The output of the command is as follows:
Initializing your master...
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.4
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [kube-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.120.122]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 1377.560339 seconds
[apiclient] Waiting for at least one node to register
[apiclient] First node has registered after 6.039626 seconds
[token] Using token: 60bc68.e94800f3c5c4c2d5
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token <token> 192.168.120.122:6443
Check the Docker images on the control-plane node:
# docker images
REPOSITORY                                                TAG       IMAGE ID        CREATED         SIZE
gcr.io/google_containers/kube-apiserver-amd64             v1.6.4    4e3810a19a64    2 days ago      150.6 MB
gcr.io/google_containers/kube-controller-manager-amd64    v1.6.4    0ea16a85ac34    2 days ago      132.8 MB
gcr.io/google_containers/kube-proxy-amd64                 v1.6.4    e073a55c288b    2 days ago      109.2 MB
gcr.io/google_containers/kube-scheduler-amd64             v1.6.4    1fab9be555e1    2 days ago      76.75 MB
gcr.io/google_containers/etcd-amd64                       3.0.17    243830dae7dd    12 weeks ago    168.9 MB
gcr.io/google_containers/pause-amd64                      3.0       99e59f495ffa    12 months ago   746.9 kB
Follow the instructions printed by the init command:
# cp /etc/kubernetes/admin.conf $HOME/
# chown $(id -u):$(id -g) $HOME/admin.conf
# export KUBECONFIG=$HOME/admin.conf
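With KUBECONFIG set, kubectl should be able to talk to the API server. A quick sanity check (not part of the original output) is:

# kubectl cluster-info
# kubectl get nodes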
Master isolation (optional)
By default kubeadm taints the control-plane node so that regular workloads are not scheduled on it. To allow pods to run on the master as well, remove that taint:
# kubectl taint nodes --all node-role.kubernetes.io/master-
node "kube-master" tainted
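To confirm the taint is gone, an optional check (not shown in the original) is:

# kubectl describe node kube-master | grep -i taints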
Install the pod network
# kubectl apply -f flannel/Documentation/kube-flannel-rbac.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created
# kubectl apply -f flannel/Documentation/kube-flannel.yml
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
The flannel manifests used above can be obtained by cloning the flannel repository:
# git clone https://github.com/coreos/flannel.git
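Alternatively, instead of cloning the repository, the same manifests can usually be applied directly from their raw GitHub URLs. The exact paths below are an assumption based on the repository layout at the time and may have moved since:

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml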
Join the worker nodes
Run the following command (printed at the end of the kubeadm init output) on each worker node as root:
# kubeadm join --token <token> 192.168.120.122:6443
The output of this command is as follows:
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.120.122:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.120.122:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.120.122:6443"
[discovery] Successfully established connection with API Server "192.168.120.122:6443"
[bootstrap] Detected server version: v1.6.4
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
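If a node fails to join or later stays NotReady, inspecting the kubelet on that node is usually the first step. These are generic systemd commands rather than steps from the original article:

# systemctl status kubelet
# journalctl -u kubelet -f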
Check the cluster status from the control-plane node
# kubectl get nodes
NAME          STATUS    AGE       VERSION
kube-agent1   Ready     16m       v1.6.3
kube-agent2   Ready     16m       v1.6.3
kube-agent3   Ready     16m       v1.6.3
kube-master   Ready     37m       v1.6.3
# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE       IP                NODE
kube-system   etcd-kube-master                       1/1       Running   0          32m       192.168.120.122   kube-master
kube-system   kube-apiserver-kube-master             1/1       Running   7          32m       192.168.120.122   kube-master
kube-system   kube-controller-manager-kube-master    1/1       Running   0          32m       192.168.120.122   kube-master
kube-system   kube-dns-3913472980-3x9wh              3/3       Running   0          37m       10.244.0.2        kube-master
kube-system   kube-flannel-ds-1m4wz                  2/2       Running   0          18m       192.168.120.122   kube-master
kube-system   kube-flannel-ds-3jwf5                  2/2       Running   0          17m       192.168.120.123   kube-agent1
kube-system   kube-flannel-ds-41qbs                  2/2       Running   4          17m       192.168.120.125   kube-agent3
kube-system   kube-flannel-ds-ssjct                  2/2       Running   4          17m       192.168.120.124   kube-agent2
kube-system   kube-proxy-0mmfc                       1/1       Running   0          17m       192.168.120.124   kube-agent2
kube-system   kube-proxy-23vwr                       1/1       Running   0          17m       192.168.120.125   kube-agent3
kube-system   kube-proxy-5q8vq                       1/1       Running   0          17m       192.168.120.123   kube-agent1
kube-system   kube-proxy-8srwn                       1/1       Running   0          37m       192.168.120.122   kube-master
kube-system   kube-scheduler-kube-master             1/1       Running   0          32m       192.168.120.122   kube-master
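As a simple smoke test of the new cluster (not part of the original article; it assumes the nodes can pull the nginx image from Docker Hub), you can create a small deployment and check that its pods are scheduled across the workers and reachable through a NodePort service:

# kubectl run nginx --image=nginx --replicas=2 --port=80
# kubectl get pods -o wide
# kubectl expose deployment nginx --type=NodePort --port=80
# kubectl get svc nginx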
At this point, the deployment of the Kubernetes cluster is complete.