Installation Guide — NVIDIA Cloud Native Technologies documentation
Requirements: NVIDIA drivers ~= 384.81
First, make sure the NVIDIA driver is working on the host: you should be able to run nvidia-smi successfully and see your GPU name, driver version, and CUDA version.
$ nvidia-smi
Thu Jul 14 11:49:33 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.57       Driver Version: 515.57       CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:02:00.0 Off |                  N/A |
|  0%   48C    P8    11W / 200W |      0MiB /  8192MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
Note that if this is the first time the GPU driver has been installed, the server does not need to be rebooted.
nvidia-docker >= 2.0 or nvidia-container-toolkit >= 1.7.0
The above are the prerequisites for running the NVIDIA Container Toolkit.
For example, on CentOS, install nvidia-container-toolkit:
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | sudo tee /etc/yum.repos.d/nvidia-docker.repo
$ yum install -y nvidia-container-toolkit
$ rpm -qa | grep nvidia
libnvidia-container-tools-1.10.0-1.x86_64
libnvidia-container1-1.10.0-1.x86_64
nvidia-container-toolkit-1.10.0-1.x86_64
Next, configure containerd to use nvidia-container-runtime as its low-level runtime: edit /etc/containerd/config.toml and set BinaryName under the runc options, then restart containerd.
$ cat /etc/containerd/config.toml | grep BinaryName -C6
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
NoPivotRoot = false
NoNewKeyring = false
ShimCgroup = ""
IoUid = 0
IoGid = 0
BinaryName = "/usr/bin/nvidia-container-runtime" //修改此处即可
Root = ""
CriuPath = ""
SystemdCgroup = false
[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"
$ systemctl daemon-reload
$ systemctl restart containerd
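Before wiring this into Kubernetes, you can sanity-check the host-side setup. A minimal sketch, assuming the packages listed above are installed (nvidia-container-cli ships with libnvidia-container-tools):
$ ls -l /usr/bin/nvidia-container-runtime   # the binary referenced by BinaryName must exist
$ nvidia-container-cli info                 # prints driver/CUDA versions and detected GPUs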
# Deploy the NVIDIA device plugin for Kubernetes, version 1.0.0-beta4:
$ docker pull nvidia/k8s-device-plugin:1.0.0-beta4
$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/1.0.0-beta4/nvidia-device-plugin.yml
# Or 1.12:
$ docker pull nvidia/k8s-device-plugin:1.11
$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.12/nvidia-device-plugin.yml
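After the manifest is applied, the device plugin runs as a DaemonSet in the kube-system namespace. A quick check (the name=nvidia-device-plugin-ds label is taken from the upstream manifest and may differ between versions):
$ kubectl get pods -n kube-system | grep nvidia-device-plugin
$ kubectl logs -n kube-system -l name=nvidia-device-plugin-ds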
Kubernetes will then expose amd.com/gpu or nvidia.com/gpu as schedulable resources.
$ kubectl describe node | grep nvidia.com/gpu
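On a node with a single GPU and a healthy device plugin, the grep output should look roughly like this (the first two lines come from Capacity and Allocatable, the last from Allocated resources; values are illustrative):
  nvidia.com/gpu:     1
  nvidia.com/gpu:     1
  nvidia.com/gpu     0           0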
$ docker run --name hfftest --rm -it --gpus all nvidia/cuda:10.0-base nvidia-smi
Thu Jul 14 04:54:04 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.57       Driver Version: 515.57       CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:02:00.0 Off |                  N/A |
| 21%   49C    P8    16W / 200W |      0MiB /  8192MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
Kubernetes example, a Pod that requests one GPU:
apiVersion: v1
kind: Pod
metadata:
  name: test-gpu
spec:
  restartPolicy: OnFailure
  containers:
  - name: test-gpu
    image: "k8s.gcr.io/cuda-vector-add:v0.1"
    resources:
      limits:
        nvidia.com/gpu: 1
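Apply the manifest and check the result. A sketch, assuming the YAML above is saved as test-gpu.yaml; the cuda-vector-add image runs a short CUDA kernel and then exits:
$ kubectl apply -f test-gpu.yaml
$ kubectl get pod test-gpu      # should eventually show Completed
$ kubectl logs test-gpu         # expect a "Test PASSED" message from the vector-add sample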
Some limitations:
You can specify GPU limits without specifying requests; Kubernetes will use the limit value as the default request.
You can specify both limits and requests, but the two values must be equal (see the example below).
You cannot specify only requests without specifying limits.
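To illustrate the second rule, a hypothetical Pod spec that sets requests and limits to the same value, applied inline (the name test-gpu-requests is made up for this example):
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: test-gpu-requests
spec:
  restartPolicy: OnFailure
  containers:
  - name: test-gpu
    image: "k8s.gcr.io/cuda-vector-add:v0.1"
    resources:
      requests:
        nvidia.com/gpu: 1   # must equal the limit below
      limits:
        nvidia.com/gpu: 1
EOF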