
Installing kubeadm, kubelet, and kubectl on CentOS 7 (with Chinese mirrors), building a k8s cluster, installing GitLab, and testing CI/CD for a Spring Boot project

能可人
2023-12-01

I. Building the k8s cluster

1. Official installation docs:

  1. Install kubeadm
  2. Install a k8s cluster
  3. Install a highly available k8s cluster

2. Getting started

1. Firewall

Stop the firewall:

systemctl stop firewalld.service

Start the firewall:

systemctl start firewalld.service

Disable it at boot:

systemctl disable firewalld.service
  2. Disable SELinux
# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
  3. Switch to root: sudo su
  4. Change the hostname: hostnamectl set-hostname xxxxx
  5. Turn off swap
swapoff -a
vim /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sun Mar 13 12:51:19 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/cl-root     /                       xfs     defaults        0 0
UUID=f724164d-a1bc-412e-b119-fb07aab95643 /boot                   xfs     defaults        0 0
/dev/mapper/cl-home     /home                   xfs     defaults        0 0
#/dev/mapper/cl-swap     swap                    swap    defaults        0 0

Comment out the line containing swap.
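Assuming the swap entry is the only fstab line of type swap, it can also be commented out with one command (a sketch using GNU sed; double-check /etc/fstab afterwards):

sed -i '/\sswap\s/ s/^/#/' /etc/fstab   # prefix the swap line with #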

6. Add this host's name and the other nodes' hostnames

  vim /etc/hosts 

Add entries like the following:

 192.168.x.x k8s-master-1
 192.168.x.x k8s-node-1

7. Kernel parameters and modules
Forward IPv4 and let iptables see bridged traffic.
Verify that the br_netfilter module is loaded by running lsmod | grep br_netfilter.

To load the module explicitly, run sudo modprobe br_netfilter.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

For iptables on the Linux nodes to correctly see bridged traffic, confirm that net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config. For example:

# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system

3. Installing a container runtime

  • Install Docker

Per the official docs.
Configure a Chinese Docker registry mirror, and set the cgroupDriver.
1. Install Docker

yum update
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6 docker-compose-plugin

2. Switch to a Chinese Docker registry mirror

mkdir -p /etc/docker 
touch /etc/docker/daemon.json
vim /etc/docker/daemon.json

Add the following:

 {
   "exec-opts":["native.cgroupdriver=systemd"],
   "registry-mirrors": ["https://2vgbfb0x.mirror.aliyuncs.com"]
 }
  3. Restart the Docker service
systemctl restart docker

4. Enable Docker at boot

systemctl enable docker

Test it:

docker run --name nginx-test -p 4000:80 -d nginx

5. Uninstalling Docker

  1. List the installed Docker packages
yum list installed | grep docker
  2. Remove the packages
yum remove docker* -y
  3. Delete images, containers, and other data
rm -rf /var/lib/docker
  • Install containerd (optional)
    1. Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2

2. Add the yum repo

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3. Install containerd

yum install containerd -y

4. Generate the config file

containerd config default > /etc/containerd/config.toml

5. Replace containerd's default sandbox image by editing /etc/containerd/config.toml:

sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.2"
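The same edit can be made non-interactively (a sketch; the default image name being replaced may differ by containerd version):

sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.2"#' /etc/containerd/config.toml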

6. Start the service

systemctl restart containerd && systemctl enable containerd

7. Configure the systemd cgroup driver
To use the systemd cgroup driver with runc, set the following in /etc/containerd/config.toml:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
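In the generated default config this key already exists as SystemdCgroup = false, so a sed replacement is a common shortcut (a sketch, assuming the stock config layout):

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd   # restart so the new cgroup driver takes effect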
  • CRI-O

This subsection covers the steps needed to install CRI-O as the container runtime.

To install CRI-O, follow the CRI-O installation instructions.

To install on the following operating systems, set the environment variable $OS to the appropriate value from this table:

Operating system    $OS
CentOS 8            CentOS_8
CentOS 8 Stream     CentOS_8_Stream
CentOS 7            CentOS_7

Then run the following as root:

curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/CentOS_7/devel:kubic:libcontainers:stable.repo
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:1.17.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:1.17/CentOS_7/devel:kubic:libcontainers:stable:cri-o:1.17.repo
yum install cri-o -y

Note: as of 1.24.0, the cri-o package no longer depends on containernetworking-plugins package. Removing this dependency allows users to install their own CNI plugins without having to remove files first. If users want to use the previously provided CNI plugins, they should also run:

yum install containernetworking-plugins
  • cgroup driver

CRI-O uses the systemd cgroup driver by default, which is likely to work fine for you. To switch to the cgroupfs cgroup driver, either edit /etc/crio/crio.conf or place a drop-in configuration in /etc/crio/crio.conf.d/02-cgroup-manager.conf, for example:

[crio.runtime]
conmon_cgroup = "pod"
cgroup_manager = "cgroupfs"

Note that conmon_cgroup is changed as well: it must be set to pod when CRI-O is used with cgroupfs. In general, the kubelet's cgroup driver configuration (usually done via kubeadm) has to be kept in sync with CRI-O's.

For CRI-O, the CRI socket defaults to /var/run/crio/crio.sock.

4. Installing docker-compose

Per the official docs.
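For reference, the standalone install amounts to downloading one binary (a sketch; v2.6.1 is an example version, substitute the current release):

curl -L "https://github.com/docker/compose/releases/download/v2.6.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
docker-compose --version   # verify the install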

A problem that appears when running without root privileges:

docker-compose: command not found

Fix:

sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

5. Installing kubectl, kubelet, and kubeadm from a Chinese yum mirror

1. Configure the Aliyun repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install

yum install -y kubelet kubeadm kubectl 

Note: when a specific version is required, pin it like this:

yum install -y kubelet-1.23.4 kubeadm-1.23.4 kubectl-1.23.4

3. Enable kubelet at boot

systemctl enable kubelet && systemctl start kubelet

6. Initializing the master node

1. Generate the init config file

mkdir k8s && cd k8s && kubeadm config print init-defaults > kubeadm-config.yaml

2. Edit the config file

vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.xx.xx # change to the master node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master-1 # the node name set earlier in /etc/hosts
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # changed to a Chinese mirror
kind: ClusterConfiguration
kubernetesVersion: 1.23.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}

  3. Pre-pull the required images
kubeadm config images pull --config=kubeadm-config.yaml
  4. Initialize
    Pipe the output through tee kubeadm-init.log so the token and init details can be reviewed later:
 kubeadm init --config=kubeadm-config.yaml | tee kubeadm-init.log

Following the printed hint, as root the simplest setup is:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile

Make it take effect:

source /etc/profile
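For a regular (non-root) user, the kubeadm init output suggests the equivalent setup:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config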

A possible error: even after the changes, kubelet still fails with

node_container_manager_linux.go:61] 
"Failed to create cgroup" err="Cannot set property TasksAccounting,
 or unknown property." cgroupName=[kubepods]

Fix: upgrade systemd

yum update systemd

Another error when setting up the cluster:

[root@master ~]# kubeadm init --kubernetes-version=v1.19.0 --pod-network-cidr=10.244.0.0/16
W0302 02:18:41.583703   32386 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.0
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Fix:
Run the following command:

echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables

7. Installing the pod network on the master

  1. Fetch kube-flannel.yml (latest version)
 curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The older 0.17.0 manifest is as follows:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.0.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
       #image: flannelcni/flannel:v0.16.3 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.16.3
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.16.3 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.16.3
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
  2. Replace every quay.io in the yml file with quay.mirrors.ustc.edu.cn
 sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' kube-flannel.yml
  3. Create the flannel pods
kubectl apply -f kube-flannel.yml

4. Confirm that every Pod is in the Running state

kubectl get pod -n kube-system

5. Troubleshooting pods that fail to start
Case 1 (version 0.17.0)

kube-flannel-ds-s4qgt                  0/1     CrashLoopBackOff   3 (26s ago)   2m10s

Check the logs:

kubectl -n kube-system logs kube-flannel-ds-s4qgt

The error:
Error registering network: failed to acquire lease: node "k8s-master-1" pod cidr not assigned

Cause: flannel stays in CrashLoopBackOff because the node has not been assigned a pod CIDR.
Fix:

vim /etc/kubernetes/manifests/kube-controller-manager.yaml

Add these flags:

--allocate-node-cidrs=true
--cluster-cidr=10.244.0.0/16
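These flags go into the command list of the kube-controller-manager container in that manifest; roughly (a sketch of the relevant fragment, existing flags omitted):

spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.244.0.0/16
    # ...keep the flags that are already there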

Restart the kubelet:

systemctl restart kubelet

flannel then starts normally.

Case 2 (version 0.19.0)

[root@k8s-master-1 k8s]# kubectl get pod -n kube-system --watch
NAME                                   READY   STATUS              RESTARTS   AGE
coredns-6d8c4cb4d-4l829                0/1     ContainerCreating   0          36m
coredns-6d8c4cb4d-z2xwn                0/1     ContainerCreating   0          36m

Inspect the pod:

kubectl describe pod coredns-6d8c4cb4d-4l829 -n kube-system

It shows errors like:

Normal SandboxChanged 8m47s (x110 over 13m) kubelet Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 3m48s (x205 over 13m) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9a629321072f92d6fbc2a07765e4852409f82699350b8eeae7d89fbdd6ff1486" network for pod "coredns-6d8c4cb4d-4l829": networkPlugin cni failed to set up pod "coredns-6d8c4cb4d-4l829_kube-system" network: open /run/flannel/subnet.env: no such file or directory

Check whether the file /run/flannel/subnet.env exists; if it does not, create it with the following content:

FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
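The file can be created in one step with a heredoc (using the values above):

cat <<EOF | sudo tee /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF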

8. Joining the worker nodes

  1. Install Docker, kubectl, kubelet, and kubeadm on every worker node, exactly as on the master
  2. If the master was re-initialized, run kubeadm reset on a worker before joining it again
  3. Join the cluster using the command printed by the master's init output
 kubeadm join 192.168.0.141:6443 --token abcdef.0123456789abcdef \
 		--discovery-token-ca-cert-hash sha256:57df376d612009f381bd3f3835464578666536080c6f779cffcf8bc90af10930 

If you did not keep the token from before, cat kubeadm-init.log on the master will show it,
or list the tokens directly:

 kubeadm token list

If more than 24 hours pass before joining, the token expires and a new one must be created on the master:

kubeadm token create 8mfiss.yvbnl8m319ysiflh

Get the sha256 hash of the CA certificate:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
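On reasonably recent kubeadm versions, a single command creates a fresh token and prints the complete join command, avoiding the manual hash step:

kubeadm token create --print-join-command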

Join the node to the cluster:

 kubeadm join --token aa78f6.8b4cafc8ed26c34f --discovery-token-ca-cert-hash sha256:0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538  192.168.x.x:6443 --skip-preflight-checks

4. Verify the node and Pod status.
All nodes should be Ready:

kubectl get nodes

All pods should be Running:

kubectl get pods --all-namespaces

Problem: a node stays in NotReady.

journalctl -f -u kubelet

The error: "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"

Diagnosis: many executables were missing from /opt/cni/bin, so kubernetes-cni had to be reinstalled.
Fix: reinstall kubernetes-cni on the affected node:

yum install -y kubernetes-cni

Then re-run the cluster init.

Problem: kubeadm init fails with "unknown service runtime.v1alpha2.RuntimeService".
A previously installed containerd was not cleaned up completely:

rm /etc/containerd/config.toml

Problem: Error from server: Get "https://192.168.31.142:10250/containerLogs/kube-system/kube-flannel-ds-mjjsd/kube-flannel": dial tcp 192.168.31.142:10250: connect: connection refused
Check whether Docker is running normally on that node.

9. Deploying the dashboard

See the official dashboard repository.

A roundup of kubectl commands:

1. Delete a node

kubectl delete node xxx   # xxx = node name
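It is usually safer to drain the node first so its workloads are evicted cleanly (a sketch; on kubectl versions before 1.20 the last flag is --delete-local-data):

kubectl drain xxx --ignore-daemonsets --delete-emptydir-data
kubectl delete node xxx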

II. Setting up a GitLab instance

The official docs describe several installation methods.

1. Installing with docker-compose

  • Set the volume location
    For Linux users, set the path to /srv/gitlab:
sudo su
export GITLAB_HOME=/srv/gitlab

Without this, the following warning appears:

WARNING: The GITLAB_HOME variable is not set. Defaulting to a blank string.
  • Create a docker-compose.yml file
version: '3.6'
services:
  web:
    image: 'gitlab-jh.tencentcloudcr.com/omnibus/gitlab-jh:latest'
    restart: always
    hostname: 'xxx.xxx.xxx.xxx' # host IP
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://<host-ip>:8929'
        gitlab_rails['gitlab_shell_ssh_port'] = 2224
        # Add any other gitlab.rb configuration here, each on its own line
    ports:
      - '8929:8929'
      - '443:443'
      - '2224:22'
    volumes:
      - '$GITLAB_HOME/config:/etc/gitlab'
      - '$GITLAB_HOME/logs:/var/log/gitlab'
      - '$GITLAB_HOME/data:/var/opt/gitlab'
    shm_size: '256m'
  • Wait a while
    Progress can be followed in the meantime with:
sudo docker logs -f gitlab

Visit the JiHu GitLab (极狐GitLab) URL and log in with the username root and the password printed by:

sudo docker exec -it gitlab grep 'Password:' /etc/gitlab/initial_root_password

1. Change the UI language

	Avatar in the top-right corner -> Preferences -> Localization

2. Add a regular user

3. Official runner installation (gitlab-runner 14.6.0): https://docs.gitlab.com/runner/install/

 Installing the runner via Docker:
 1. Start the container with a local bind mount:
    docker run -d --name gitlab-runner --restart always \
        -v /srv/gitlab-runner/config:/etc/gitlab-runner \
        -v /var/run/docker.sock:/var/run/docker.sock \
        gitlab/gitlab-runner:latest
2. Or start it with a Docker-managed volume:
docker volume create gitlab-runner-config
docker run -d --name gitlab-runner --restart always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v gitlab-runner-config:/etc/gitlab-runner \
    gitlab/gitlab-runner:latest

3. Registering the runner (docker)

  1. Run one of the following commands.

    With the local bind mount:

   docker run --rm -it -v /srv/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner register

    With the Docker volume:

 docker run --rm -it -v gitlab-runner-config:/etc/gitlab-runner gitlab/gitlab-runner:latest register

2. Enter your GitLab URL.
3. Enter the runner registration token from GitLab.
4. Enter a description for the runner; this can be changed later.
5. Enter tags for the runner, separated by ","; these can be changed later.
6. (Optional) Enter a maintainer name.
7. Enter the executor type, e.g. shell or docker; docker is the most common choice.
8. If you entered docker as your executor, you are asked for the default image to be used for projects that do not define one in .gitlab-ci.yml (default docker:latest). A non-interactive alternative is sketched below.
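The same registration can also be done non-interactively, which is handy for scripting (a sketch for gitlab-runner 14.x; the URL and token are placeholders):

docker run --rm -v /srv/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner register \
    --non-interactive \
    --url "http://<host-ip>:8929/" \
    --registration-token "YOUR_RUNNER_TOKEN" \
    --executor docker \
    --docker-image docker:latest \
    --description "docker-runner" \
    --tag-list "docker,ci"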
Note, a common problem:
How to fix the GitLab CI error: during connect: Post http://docker:2375/v1.40/auth: dial tcp: lookup docker on … no such host
Fix:
Edit the config.toml file and set:

privileged = true
volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock"]
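These two keys belong in the [runners.docker] section of config.toml; in context the file looks roughly like this (a sketch, with other keys omitted):

[[runners]]
  url = "http://<host-ip>:8929/"
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = true
    volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock"]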

4. Restart the runner

After changing config.toml the runner must be restarted:

docker restart gitlab-runner
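Once the runner is back up, a minimal .gitlab-ci.yml in the Spring Boot project verifies that CI runs end to end (a sketch; the Maven image and commands are assumptions for a typical Maven-based project):

stages:
  - build

build-job:
  stage: build
  tags:
    - docker                      # must match a tag given when registering the runner
  image: maven:3.8-openjdk-11
  script:
    - mvn -B package -DskipTests  # compile and package the Spring Boot jar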