k8s-device-plugin

NVIDIA device plugin for Kubernetes

About

The NVIDIA device plugin for Kubernetes is a Daemonset that allows you to automatically:

  • Expose the number of GPUs on each node of your cluster
  • Keep track of the health of your GPUs
  • Run GPU enabled containers in your Kubernetes cluster.

This repository contains NVIDIA's official implementation of the Kubernetes device plugin.

Please note that:

  • The NVIDIA device plugin API is beta as of Kubernetes v1.10.
  • The NVIDIA device plugin is still considered beta and is missing
    • More comprehensive GPU health checking features
    • GPU cleanup features
    • ...
  • Support will only be provided for the official NVIDIA device plugin (and not for forks or other variants of this plugin).

Prerequisites

The list of prerequisites for running the NVIDIA device plugin is described below:

  • NVIDIA drivers ~= 384.81
  • nvidia-docker version > 2.0 (see how to install and its prerequisites)
  • docker configured with nvidia as the default runtime.
  • Kubernetes version >= 1.10
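
Before deploying the plugin, it helps to sanity-check these prerequisites on each GPU node. A minimal verification sketch (these commands are illustrative additions, not from the original README; the CUDA image tag is an assumption):

$ nvidia-smi                     # driver is installed and working
$ docker run --rm --runtime=nvidia nvcr.io/nvidia/cuda:9.0-base nvidia-smi
$ kubectl version --short        # confirm the server version is >= 1.10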

Quick Start

Preparing your GPU Nodes

The following steps need to be executed on all your GPU nodes. This README assumes that the NVIDIA drivers and nvidia-docker have been installed.

Note that you need to install the nvidia-docker2 package and not the nvidia-container-toolkit. This is because the new --gpus option hasn't reached Kubernetes yet. Example:

# Add the package repositories
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

$ sudo apt-get update && sudo apt-get install -y nvidia-docker2
$ sudo systemctl restart docker

You will need to enable the nvidia runtime as your default runtime on your node. We will be editing the docker daemon config file which is usually present at /etc/docker/daemon.json:

{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
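
After saving this file, restart Docker and confirm that the new default runtime took effect (a quick check, not part of the upstream instructions):

$ sudo systemctl restart docker
$ docker info | grep -i 'default runtime'
 Default Runtime: nvidia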

If the runtimes entry is not already present, head to the install page of nvidia-docker.

Enabling GPU Support in Kubernetes

Once you have configured the options above on all the GPU nodes in your cluster, you can enable GPU support by deploying the following Daemonset:

$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.9.0/nvidia-device-plugin.yml

Note: This is a simple static daemonset meant to demonstrate the basic features of the nvidia-device-plugin. Please see the instructions below for Deployment via helm when deploying the plugin in a production setting.
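
Once the daemonset is running, you can verify that GPUs are advertised to the kubelet (an illustrative check; <gpu-node> is a placeholder):

$ kubectl get pods -A | grep nvidia-device-plugin
$ kubectl describe node <gpu-node> | grep nvidia.com/gpu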

Running GPU Jobs

With the daemonset deployed, NVIDIA GPUs can now be requested by a container using the nvidia.com/gpu resource type:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
    - name: cuda-container
      image: nvcr.io/nvidia/cuda:9.0-devel
      resources:
        limits:
          nvidia.com/gpu: 2 # requesting 2 GPUs
    - name: digits-container
      image: nvcr.io/nvidia/digits:20.12-tensorflow-py3
      resources:
        limits:
          nvidia.com/gpu: 2 # requesting 2 GPUs

WARNING: If you don't request GPUs when using the device plugin with NVIDIA images, all the GPUs on the machine will be exposed inside your container.
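
To try this example, save the manifest (the file name gpu-pod.yml is assumed here) and confirm the pod can see its GPUs:

$ kubectl apply -f gpu-pod.yml
$ kubectl exec gpu-pod -c cuda-container -- nvidia-smi -L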

Deployment via helm

The preferred method to deploy the device plugin is as a daemonset using helm. Instructions for installing helm can be found here.

The helm chart for the latest release of the plugin (v0.9.0) includes a number of customizable values. The most commonly overridden ones are:

  failOnInitError:
      fail the plugin if an error is encountered during initialization, otherwise block indefinitely
      (default 'true')
  compatWithCPUManager:
      run with escalated privileges to be compatible with the static CPUManager policy
      (default 'false')
  legacyDaemonsetAPI:
      use the legacy daemonset API version 'extensions/v1beta1'
      (default 'false')
  migStrategy:
      the desired strategy for exposing MIG devices on GPUs that support it
      [none | single | mixed] (default "none")
  deviceListStrategy:
      the desired strategy for passing the device list to the underlying runtime
      [envvar | volume-mounts] (default "envvar")
  deviceIDStrategy:
      the desired strategy for passing device IDs to the underlying runtime
      [uuid | index] (default "uuid")
  nvidiaDriverRoot:
      the root path for the NVIDIA driver installation (typical values are '/' or '/run/nvidia/driver')
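
Rather than repeating --set flags on every install, these values can also be collected into a file and passed to helm with -f (a sketch; the file name is an assumption, and the nvdp repository from the section below must already be added):

$ cat <<EOF > nvdp-values.yaml
compatWithCPUManager: true
migStrategy: single
EOF
$ helm install --version=0.9.0 --generate-name -f nvdp-values.yaml nvdp/nvidia-device-plugin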

When set to true, the failOnInitError flag fails the plugin if an error is encountered during initialization. When set to false, it prints an error message and blocks the plugin indefinitely instead of failing. Blocking indefinitely follows legacy semantics that allow the plugin to deploy successfully on nodes that don't have GPUs on them (and aren't supposed to have GPUs on them) without throwing an error. In this way, you can blindly deploy a daemonset with the plugin on all nodes in your cluster, whether they have GPUs on them or not, without encountering an error. However, doing so means that there is no way to detect an actual error on nodes that are supposed to have GPUs on them. Failing if an initialization error is encountered is now the default and should be adopted by all new deployments.
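
If you do want the legacy blocking behavior (for example, to blindly deploy across a mixed cluster as described above), you would override the flag (sketch):

$ helm install \
    --version=0.9.0 \
    --generate-name \
    --set failOnInitError=false \
    nvdp/nvidia-device-plugin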

The compatWithCPUManager flag configures the daemonset to be able to interoperate with the static CPUManager of the kubelet. Setting this flag requires one to deploy the daemonset with elevated privileges, so only do so if you know you need to interoperate with the CPUManager.

The legacyDaemonsetAPI flag configures the daemonset to use version extensions/v1beta1 of the DaemonSet API. This API version was removed in Kubernetes v1.16, so it is only intended to allow newer plugins to run on older versions of Kubernetes.

The migStrategy flag configures the daemonset to be able to expose Multi-Instance GPUs (MIG) on GPUs that support them. More information on what these strategies are and how they should be used can be found in Supporting Multi-Instance GPUs (MIG) in Kubernetes.

Note: With a migStrategy of mixed, you will have additional resources available to you of the form nvidia.com/mig-<slice_count>g.<memory_size>gb that you can set in your pod spec to get access to a specific MIG device.
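
For example, with a migStrategy of mixed on a node whose GPUs expose 1g.5gb slices, a pod could request a specific MIG device like this (the resource name is illustrative; the exact names depend on your MIG geometry):

apiVersion: v1
kind: Pod
metadata:
  name: mig-example
spec:
  containers:
    - name: cuda-container
      image: nvcr.io/nvidia/cuda:9.0-devel
      resources:
        limits:
          nvidia.com/mig-1g.5gb: 1 # one 1g.5gb MIG slice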

The deviceListStrategy flag allows one to choose which strategy the plugin will use to advertise the list of GPUs allocated to a container. This is traditionally done by setting the NVIDIA_VISIBLE_DEVICES environment variable as described here. This strategy can be selected via the (default) envvar option. Support was recently added to the nvidia-container-toolkit to also allow passing the list of devices as a set of volume mounts instead of as an environment variable. This strategy can be selected via the volume-mounts option. Details for the rationale behind this strategy can be found here.

The deviceIDStrategy flag allows one to choose which strategy the plugin will use to pass the device ID of the GPUs allocated to a container. The device ID has traditionally been passed as the UUID of the GPU. This flag lets a user decide if they would like to use the UUID or the index of the GPU (as seen in the output of nvidia-smi) as the identifier passed to the underlying runtime. Passing the index may be desirable in situations where pods that have been allocated GPUs by the plugin get restarted with different physical GPUs attached to them.
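
With the default envvar list strategy, the effect of the ID strategy can be observed inside an allocated container (the output below is illustrative; the UUID is made up):

$ kubectl exec gpu-pod -c cuda-container -- printenv NVIDIA_VISIBLE_DEVICES
GPU-8c25ed6c-3f69-48e6-9d12-0c3ba0a2c4f1
# with --set deviceIDStrategy=index, the variable would instead contain indices such as "0,1"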

Please take a look at the values.yaml file to see the full set of overridable parameters for the device plugin.

Installing via helm install from the nvidia-device-plugin helm repository

The preferred method of deployment is with helm install via the nvidia-device-plugin helm repository.

This repository can be installed as follows:

$ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
$ helm repo update

Once this repo is updated, you can begin installing packages from it to deploy the nvidia-device-plugin daemonset. Below are some examples of deploying the plugin with the various flags from above.

Note: Since this is a pre-release version, you will need to pass the --devel flag to helm search repo in order to see this release listed.
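
For example (the exact listing will vary):

$ helm search repo nvdp --devel
NAME                       CHART VERSION  APP VERSION  DESCRIPTION
nvdp/nvidia-device-plugin  0.9.0          0.9.0        A Helm chart for the nvidia-device-plugin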

Using the default values for the flags:

$ helm install \
    --version=0.9.0 \
    --generate-name \
    nvdp/nvidia-device-plugin

Enabling compatibility with the CPUManager and running with a request for 100m of CPU and a limit of 512Mi of memory:

$ helm install \
    --version=0.9.0 \
    --generate-name \
    --set compatWithCPUManager=true \
    --set resources.requests.cpu=100m \
    --set resources.limits.memory=512Mi \
    nvdp/nvidia-device-plugin

Use the legacy Daemonset API (only available on Kubernetes < v1.16):

$ helm install \
    --version=0.9.0 \
    --generate-name \
    --set legacyDaemonsetAPI=true \
    nvdp/nvidia-device-plugin

Enabling compatibility with the CPUManager and the mixed migStrategy:

$ helm install \
    --version=0.9.0 \
    --generate-name \
    --set compatWithCPUManager=true \
    --set migStrategy=mixed \
    nvdp/nvidia-device-plugin

Deploying via helm install with a direct URL to the helm package

If you prefer not to install from the nvidia-device-plugin helm repo, you can run helm install directly against the tarball of the plugin's helm package. The examples below install the same daemonsets as the method above, except that they use direct URLs to the helm package instead of the helm repo.

Using the default values for the flags:

$ helm install \
    --generate-name \
    https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.9.0.tgz

Enabling compatibility with the CPUManager and running with a request for 100m of CPU and a limit of 512Mi of memory:

$ helm install \
    --generate-name \
    --set compatWithCPUManager=true \
    --set resources.requests.cpu=100m \
    --set resources.limits.memory=512Mi \
    https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.9.0.tgz

Use the legacy Daemonset API (only available on Kubernetes < v1.16):

$ helm install \
    --generate-name \
    --set legacyDaemonsetAPI=true \
    https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.9.0.tgz

Enabling compatibility with the CPUManager and the mixed migStrategy:

$ helm install \
    --generate-name \
    --set compatWithCPUManager=true \
    --set migStrategy=mixed \
    https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.9.0.tgz

Building and Running Locally

The next sections are focused on building the device plugin locally and running it. It is intended purely for development and testing, and not required by most users. It assumes you are pinning to the latest release tag (i.e. v0.9.0), but can easily be modified to work with any available tag or branch.

With Docker

Build

Option 1, pull the prebuilt image from nvcr.io:

$ docker pull nvcr.io/nvidia/k8s-device-plugin:v0.9.0
$ docker tag nvcr.io/nvidia/k8s-device-plugin:v0.9.0 nvcr.io/nvidia/k8s-device-plugin:devel

Option 2, build without cloning the repository:

$ docker build \
    -t nvcr.io/nvidia/k8s-device-plugin:devel \
    -f docker/Dockerfile \
    https://github.com/NVIDIA/k8s-device-plugin.git#v0.9.0

Option 3, if you want to modify the code:

$ git clone https://github.com/NVIDIA/k8s-device-plugin.git && cd k8s-device-plugin
$ docker build \
    -t nvcr.io/nvidia/k8s-device-plugin:devel \
    -f docker/Dockerfile \
    .

Run

Without compatibility for the CPUManager static policy:

$ docker run \
    -it \
    --security-opt=no-new-privileges \
    --cap-drop=ALL \
    --network=none \
    -v /var/lib/kubelet/device-plugins:/var/lib/kubelet/device-plugins \
    nvcr.io/nvidia/k8s-device-plugin:devel

With compatibility for the CPUManager static policy:

$ docker run \
    -it \
    --privileged \
    --network=none \
    -v /var/lib/kubelet/device-plugins:/var/lib/kubelet/device-plugins \
    nvcr.io/nvidia/k8s-device-plugin:devel --pass-device-specs

Without Docker

Build

$ C_INCLUDE_PATH=/usr/local/cuda/include LIBRARY_PATH=/usr/local/cuda/lib64 go build

Run

Without compatibility for the CPUManager static policy:

$ ./k8s-device-plugin

With compatibility for the CPUManager static policy:

$ ./k8s-device-plugin --pass-device-specs

Changelog

Version v0.9.0

  • Fix a bug when using the CPUManager with the device plugin's MIG mode not set to "none"
  • Allow passing list of GPUs by device index instead of uuid
  • Move to urfave/cli to build the CLI
  • Support setting command line flags via environment variables

Version v0.8.2

  • Update all dockerhub references to nvcr.io

Version v0.8.1

  • Fix permission error when using NewDevice instead of NewDeviceLite when constructing MIG device map

Version v0.8.0

  • Raise an error if a device has migEnabled=true but has no MIG devices
  • Allow mig.strategy=single on nodes with non-MIG GPUs

Version v0.7.3

  • Update vendoring to include bug fix for nvmlEventSetWait_v2

Version v0.7.2

  • Fix bug in Dockerfiles for ubi8 and centos using CMD not ENTRYPOINT

Version v0.7.1

  • Update all Dockerfiles to point to latest cuda-base on nvcr.io

Version v0.7.0

  • Promote v0.7.0-rc.8 to v0.7.0

Version v0.7.0-rc.8

  • Permit configuration of alternative container registry through environment variables.
  • Add an alternate set of gitlab-ci directives under .nvidia-ci.yml
  • Update all k8s dependencies to v1.19.1
  • Update vendoring for NVML Go bindings
  • Move restart loop to force recreate of plugins on SIGHUP

Version v0.7.0-rc.7

  • Fix bug which only allowed running the plugin on machines with CUDA 10.2+ installed

Version v0.7.0-rc.6

  • Add logic to skip / error out when unsupported MIG device encountered
  • Fix bug treating memory as multiple of 1000 instead of 1024
  • Switch to using CUDA base images
  • Add a set of standard tests to the .gitlab-ci.yml file

Version v0.7.0-rc.5

  • Add deviceListStrategy flag to allow device list passing as volume mounts

Version v0.7.0-rc.4

  • Allow one to override selector.matchLabels in the helm chart
  • Allow one to override the updateStrategy in the helm chart

Version v0.7.0-rc.3

  • Fail the plugin if NVML cannot be loaded
  • Update logging to print to stderr on error
  • Add best effort removal of socket file before serving
  • Add logic to implement GetPreferredAllocation() call from kubelet

Version v0.7.0-rc.2

  • Add the ability to set 'resources' as part of a helm install
  • Add overrides for name and fullname in helm chart
  • Add ability to override image-related parameters in the helm chart
  • Add conditional support for overriding securityContext in helm chart

Version v0.7.0-rc.1

  • Added migStrategy as a parameter to select the MIG strategy to the helm chart
  • Add support for MIG with different strategies {none, single, mixed}
  • Update vendored NVML bindings to latest (to include MIG APIs)
  • Add license in UBI image
  • Update UBI image with certification requirements

Version v0.6.0

  • Update CI, build system, and vendoring mechanism
  • Change versioning scheme to v0.x.x instead of v1.0.0-betax
  • Introduced helm charts as a mechanism to deploy the plugin

Version v0.5.0

  • Add a new plugin.yml variant that is compatible with the CPUManager
  • Change CMD in Dockerfile to ENTRYPOINT
  • Add flag to optionally return list of device nodes in Allocate() call
  • Refactor device plugin to eventually handle multiple resource types
  • Move plugin error retry to event loop so we can exit with a signal
  • Update all vendored dependencies to their latest versions
  • Fix bug that was inadvertently always disabling health checks
  • Update minimal driver version to 384.81

Version v0.4.0

  • Fixes a bug with a nil pointer dereference around getDevices:CPUAffinity

Version v0.3.0

  • Manifest is updated for Kubernetes 1.16+ (apps/v1)
  • Adds more logging information

Version v0.2.0

  • Adds the Topology field for Kubernetes 1.16+

Version v0.1.0

  • If gRPC throws an error, the device plugin no longer ends up in a non-responsive state.

Version v0.0.0

  • Reversioned to SEMVER as device plugins aren't tied to a specific version of Kubernetes anymore.

Version v1.11

  • No change.

Version v1.10

  • The device Plugin API is now v1beta1

Version v1.9

  • The device Plugin API changed and is no longer compatible with 1.8
  • Error messages were added

Issues and Contributing

Check out the Contributing document!

Versioning

Before v1.10 the versioning scheme of the device plugin had to match exactly the version of Kubernetes. After the promotion of device plugins to beta this condition was no longer required. We quickly noticed that this versioning scheme was very confusing for users as they still expected to see a version of the device plugin for each version of Kubernetes.

This versioning scheme applies to the tags v1.8, v1.9, v1.10, v1.11, v1.12.

We have now changed the versioning to follow SEMVER. The first version following this scheme has been tagged v0.0.0.

Going forward, the major version of the device plugin will only change following a change in the device plugin API itself. For example, version v1beta1 of the device plugin API corresponds to version v0.x.x of the device plugin. If a new v2beta2 version of the device plugin API comes out, then the device plugin will increase its major version to 1.x.x.

As of now, the device plugin API for Kubernetes >= v1.10 is v1beta1. If you have a version of Kubernetes >= 1.10 you can deploy any device plugin version > v0.0.0.

Upgrading Kubernetes with the Device Plugin

Upgrading Kubernetes when you have a device plugin deployed doesn't require any particular changes to your workflow. The API is versioned and is pretty stable (though it is not guaranteed to be non-breaking). Starting with Kubernetes version 1.10, you can use v0.3.0 of the device plugin to perform upgrades, and Kubernetes won't require you to deploy a different version of the device plugin. Once a node comes back online after the upgrade, you will see GPUs re-registering themselves automatically.

Upgrading the device plugin itself is a more complex task. It is recommended to drain GPU tasks as we cannot guarantee that GPU tasks will survive a rolling upgrade. However, we make our best effort to preserve GPU tasks during an upgrade.
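
A typical drain-and-upgrade flow might look like the following (a sketch; the node and release names are placeholders):

$ kubectl drain <gpu-node> --ignore-daemonsets
$ helm upgrade <release-name> nvdp/nvidia-device-plugin --version=0.9.0
$ kubectl uncordon <gpu-node>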
