Installing the NVIDIA graphics driver, Docker, and nvidia-docker on Ubuntu 18.04; building and testing opencv-4.4.0 and darknet-yolov4 in a container; packaging and transferring the container image. 2022

艾阳羽
2023-12-01

This is my first blog post on CSDN; welcome!


Preface

Over the past few days I have been learning how to use and configure Docker containers, and completed the following:

1. Install Ubuntu 18.04.6 LTS
2. Switch the system to a domestic (Chinese) software mirror
3. Install the NVIDIA graphics driver (RTX 2060 notebook)
4. Install Docker
5. Configure a Docker registry mirror (accelerator)
6. Install nvidia-docker
7. Pull the nvidia/cuda image, run it, and test it
8. Build and install opencv-4.4.0 and opencv_contrib-4.4.0 in the container
9. Build darknet-yolov4 in the container and test it
10. Reuse the image: commit the container to an image, compress it, and transfer it
These steps draw on many earlier articles; the links are all listed at the end~


The 10 items above are covered one by one below, and each can be read on its own. For example, if you only want to build and install opencv-4.4.0 on your machine, you can jump straight to 2.5.2.
About paths: copies involve the host home directory "/home/heqingchun" and the container root directory "/".

Part 1: Host configuration

The term "host" here distinguishes the physical machine from the container; you can simply think of it as your own computer.

1. Install the Ubuntu 18.04 64-bit system

I won't cover the OS installation in detail here, since I assume you already have a working system; if needed, see: https://blog.csdn.net/baidu_36602427/article/details/86548203

2. Switch the host system to a domestic software mirror

The official Ubuntu mirrors are hosted in Europe and are slow to reach from China, so it is worth switching to a domestic mirror.
There are many to choose from; here we use the Aliyun and Tsinghua University Ubuntu mirrors.

2.1 - Copy the following content

# Aliyun mirror
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
## Proposed (pre-release) packages
deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
# Source packages
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
## Proposed (pre-release) packages
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse


# Tsinghua University mirror
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
## Proposed (pre-release) packages
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-proposed main restricted universe multiverse
# Source packages
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
## Proposed (pre-release) packages
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-proposed main restricted universe multiverse

2.2 - Back up the existing file

Ubuntu's sources are stored in the sources.list file under /etc/apt/. Back it up before modifying it by running the following in a terminal:
sudo cp /etc/apt/sources.list /etc/apt/sources.list.backup

2.3 - Replace the file contents

Then run the command below to open sources.list, clear its contents, paste in the Aliyun and Tsinghua mirror entries from above, save, and exit.
sudo gedit /etc/apt/sources.list

2.4 - Update the package lists

Next, run the following in the terminal to refresh the package lists and detect packages that can be upgraded:
sudo apt-get update

2.5 - Upgrade installed packages

Finally, run the following in the terminal to upgrade the installed packages:
sudo apt-get upgrade

3. Install the NVIDIA graphics driver on the host

3.1. Before installing the driver, update the package lists and install the required tools and dependencies (required)

sudo apt-get update   # refresh the package lists
sudo apt-get install g++
sudo apt-get install make

3.2. Download the Ubuntu driver for your GPU model from the NVIDIA website

URL: https://www.nvidia.cn/Download/index.aspx?lang=cn

3.3. Begin the installation

3.3.1 - Remove any existing driver

sudo apt-get remove --purge nvidia*

3.3.2 - Disable nouveau, the open-source generic driver (required)

sudo gedit /etc/modprobe.d/blacklist.conf   (or blacklist-nouveau.conf)

3.3.3 - Append the following to the end of the opened blacklist.conf, then save and close it

blacklist nouveau
options nouveau modeset=0

3.3.4 - Run the following in a terminal to rebuild the initramfs, then reboot when it finishes (required)

sudo update-initramfs -u

3.3.5 - After rebooting, run the following; no output means nouveau was blacklisted successfully

lsmod | grep nouveau

3.3.6 - Install lightdm, a display manager that handles the login screen; select "lightdm" when the installer asks

sudo apt-get install lightdm
***Note: after step 3.3.7 the machine drops to a text-only console, so take a photo of the remaining commands if you might forget them***

3.3.7 - To install the new NVIDIA driver, stop the current display server. (required)

sudo telinit 3
At the console, type your username, press Enter, then type your password to log in; note that the numeric keypad on the right may not work here.

3.3.8 - In the text console, stop the X-window service by running (required)

sudo /etc/init.d/lightdm stop   (or: sudo service lightdm stop)

3.3.9 - cd into the directory containing the downloaded driver and run:

sudo chmod 777 NVIDIA-Linux-x86_64-430.26.run   # make the downloaded driver executable so it can be installed
sudo ./NVIDIA-Linux-x86_64-430.26.run --no-opengl-files   # install

About the --no-opengl-files option in the second command:
it installs only the driver files and skips the OpenGL files. Desktops are usually fine without it, but on laptops omitting it can cause a login loop, so use it as needed.
Prompts during the driver installation (I don't remember every question exactly, so only the answers to choose are listed):
1. The distribution-provided pre-install script failed! Are you sure you want to continue?
Choose "Continue installation".
2. Would you like to register the kernel module sources with DKMS? This will allow DKMS to automatically build a new module, if you install a different kernel later.
Choose "No" and continue.
3. I don't remember the question; the answer to pick is "Install without signing".
4. The question is roughly: install NVIDIA's 32-bit compatibility libraries? Choose "No" and continue.
5. Would you like to run the nvidia-xconfig utility to automatically update your X configuration file so that the NVIDIA X driver will be used when you restart X? Any pre-existing X configuration file will be backed up.
Choose "Yes" and continue.

3.3.10 - When the installation finishes, run sudo service lightdm start to restart the X-window service; the graphical login screen should come back automatically. If it doesn't, and you have dual graphics, you may need to enable discrete-GPU-only (dGPU direct) mode in the BIOS and reboot.
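As a quick sanity check (my addition, not part of the referenced guide), once you are back at the desktop you can confirm that the new driver is loaded:
nvidia-smi   # should list the GPU along with the driver and CUDA versions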

4. Install Docker on the host

4.1 - Remove old versions (skip this if Docker was never installed)

Older versions of Docker were packaged as docker, docker.io, or docker-engine.
sudo apt-get remove docker docker-engine docker.io containerd runc

4.2 - Before installing Docker Engine-Community on a new host for the first time, set up the Docker repository; afterwards Docker can be installed and updated from it. To set up the repository, first update the apt package index.

sudo apt-get update

4.3 - Install the apt packages needed to fetch the repository over HTTPS:

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

4.4 - Add Docker's official GPG key:

curl -fsSL https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

4.5 - Verify that you now have the key with fingerprint 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88 by searching for the last 8 characters of the fingerprint.

sudo apt-key fingerprint 0EBFCD88
The command should print something like:
pub   rsa4096 2017-02-22 [SCEA]
      9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid           [ unknown] Docker Release (CE deb) <docker@docker.com>
sub   rsa4096 2017-02-22 [S]

4.6 - Set up the stable repository with the following command

sudo add-apt-repository \
   	"deb [arch=amd64] https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/ \
  	$(lsb_release -cs) \
  	stable"

4.7 - Prepare to install Docker Engine-Community

Update the apt package index.
sudo apt-get update

4.8 - Install the latest versions of Docker Engine-Community and containerd:

sudo apt-get install docker-ce docker-ce-cli containerd.io

4.9 - Test the installation with the command below; if the message shown is printed, Docker is working. (Note: an internet connection is required, because the image is not available locally and will be pulled from the registry.)

sudo docker run hello-world
Output:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:c3b4ada4687bbaa170745b3e4dd8ac3f194ca95b2d0518b417fb47e5879d9b5f
Status: Downloaded newer image for hello-world:latest

	Hello from Docker!
	This message shows that your installation appears to be working correctly.


	To generate this message, Docker took the following steps:
 	1. The Docker client contacted the Docker daemon.
 	2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    	(amd64)
 	3. The Docker daemon created a new container from that image which runs the
    	executable that produces the output you are currently reading.
 	4. The Docker daemon streamed that output to the Docker client, which sent it
    	to your terminal.


	To try something more ambitious, you can run an Ubuntu container with:
 	$ docker run -it ubuntu bash

	Share images, automate workflows, and more with a free Docker ID:
 	https://hub.docker.com/

	For more examples and ideas, visit:
 	https://docs.docker.com/get-started/

4.10 - Check that the hello-world image was downloaded locally

sudo docker images

4.11 - At this point every docker command needs sudo, which is tedious; the following lets an ordinary user run docker

Add your regular user to the docker group:
echo $USER           # print the current username (optional)
sudo groupadd docker # create the docker group (it may already exist)
sudo gpasswd -a $USER docker
newgrp docker
Now docker commands no longer need sudo every time, for example:
docker ps
docker images
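If docker ps still complains about permissions, the new group membership most likely has not taken effect in your session yet; logging out and back in (or rebooting) usually fixes it. A quick check, assuming the group was set up as above:
id -nG | grep docker   # the output should include the word "docker"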

5. Docker registry mirrors (strongly recommended)

Pulling images from Docker Hub can be slow or unreliable from inside China, in which case a registry mirror (accelerator) helps.
Docker itself and many domestic cloud providers offer such mirrors, for example:
USTC mirror: https://docker.mirrors.ustc.edu.cn/
NetEase: https://hub-mirror.c.163.com/
Aliyun: https://<your-ID>.mirror.aliyuncs.com
Qiniu accelerator: https://reg-mirror.qiniu.com

5.1 - Configure the Aliyun accelerator

5.1.1 - Open the official website and log in

URL: https://www.aliyun.com/

5.1.2 - Click "Console" in the top-right corner

5.1.3 - Click "Container Registry" (容器镜像服务)

5.1.4 - In the left sidebar, click "Accelerator" under "Image Tools" (镜像工具 > 镜像加速器)

5.1.5 - Follow the on-page instructions

To check whether the accelerator is working: if image pulls are still very slow after configuring it, verify the configuration manually.
Run docker info; if the output contains the following, the configuration took effect.
docker info
		Registry Mirrors:
  		https://ta2godbp.mirror.aliyuncs.com/

6. Install nvidia-docker on the host

By itself, Docker can only use CPU resources; nvidia-docker is what connects Docker containers to the host's GPU driver.

6.1 - Set up the stable repository and GPG key

distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
   	&& curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
   	&& curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

6.2 - Update

sudo apt-get update

6.3 - Install

sudo apt-get install -y nvidia-docker2

6.4 - Restart Docker

sudo systemctl restart docker
At this point the host configuration is complete; next, download the official nvidia/cuda image on the host.
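As a quick check (my addition), you can confirm that the nvidia runtime has been registered with the Docker daemon:
docker info | grep -i runtimes   # the list should include "nvidia" in addition to the default "runc"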

Part 2: Image and container configuration

1. First download the official pre-built nvidia/cuda image on the host, then configure everything on top of it

2.1 - Configure the registry mirror

sudo vim /etc/docker/daemon.json

2.2 - Add the mirror address

Add an entry such as "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"] (here I use the Aliyun accelerator address obtained in section 5.1).
After adding it the file looks like this:
{
  	    "registry-mirrors": ["https://ta2godbp.mirror.aliyuncs.com"]
}

2.3 - Restart Docker

systemctl restart docker.service
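One caveat (my note): installing nvidia-docker2 may already have created /etc/docker/daemon.json with a "runtimes" entry, so add the "registry-mirrors" key alongside the existing content rather than overwriting the file, and keep the result valid JSON. After restarting, you can confirm the mirror is active with:
docker info | grep -A 1 "Registry Mirrors"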

2.4 - Pull the image

2.4.1 - Official nvidia/cuda image page

https://hub.docker.com/r/nvidia/cuda

2.4.2 - On that page, find "Supported tags"

Then click "here" in the line "For a full list of supported tags, click here" below it.

2.4.3 - On the new page, find the version you need, for example:

Find "CUDA 11.4.3" under "ubuntu18.04" and copy the tag "11.4.3-cudnn8-devel-ubuntu18.04" below it.

2.4.4 - Go back to the page opened in 2.4.1

Click "Tags" and paste the copied tag into the "Filter Tags" search box.
The search result on the right shows "docker pull nvidia/cuda:11.4.3-cudnn8-devel-ubuntu18.04".
Copy that command and run it on the host to download the image.
(With the Aliyun accelerator configured the download takes roughly 20 minutes; without it, it can take around 5 hours!)
Run on the host:
docker pull nvidia/cuda:11.4.3-cudnn8-devel-ubuntu18.04

2.4.5 - List local images with docker images

docker images
Output:
REPOSITORY    TAG                               IMAGE ID       CREATED        SIZE
nvidia/cuda   11.4.3-cudnn8-devel-ubuntu18.04   90b42b6501f7   2 weeks ago    9.11GB

2.4.6 - Register the nvidia runtime with Docker before running the image

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/override.conf <<EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --host=fd:// --add-runtime=nvidia=/usr/bin/nvidia-container-runtime
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
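Before creating a long-lived container, a throwaway run is a convenient way to confirm that the GPU is reachable through the nvidia runtime (a sketch using the tag pulled above; adjust it if yours differs):
sudo docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all nvidia/cuda:11.4.3-cudnn8-devel-ubuntu18.04 nvidia-smi   # should print the same GPU table as on the host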

2.5 - Run the image

sudo docker run -it --name test_name -v /data/test_name:/data/test_name --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all nvidia/cuda:11.4.3-cudnn8-devel-ubuntu18.04
(test_name is the container name you choose;
-v /data/test_name:/data/test_name maps an absolute project path on the host to a path inside the container;
nvidia/cuda:11.4.3-cudnn8-devel-ubuntu18.04 is the image name.)
Once it starts, the command-line prompt changes; you are now in a bash shell inside the container
(for me, heqingchun@Legion:~$ became root@f68aa5e2daec:/#).
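A note that may save some confusion later (my addition): if you exit this container and want to get back into the same one rather than creating a new one with docker run, restart it and open a shell in it:
docker start test_name          # restart the stopped container
docker exec -it test_name bash  # open a new bash shell inside it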

2.5.1 - Test that the base environment works (run these inside the container)

2.5.1.1 - Test the graphics driver
nvidia-smi
Output:
Wed May 18 01:32:47 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.68.02    Driver Version: 510.68.02    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0  On |                  N/A |
| N/A   48C    P8     8W /  N/A |    114MiB /  6144MiB |     16%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
2.5.1.2 - Test CUDA
nvcc -V
Output:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Mon_Oct_11_21:27:02_PDT_2021
Cuda compilation tools, release 11.4, V11.4.152
Build cuda_11.4.r11.4/compiler.30521435_0
2.5.1.3 - Test cuDNN
ll /usr/lib/x86_64-linux-gnu/ | grep cudnn
Output:
lrwxrwxrwx  1 root root         29 Apr 29 05:00 libcudnn.so -> /etc/alternatives/libcudnn_so
lrwxrwxrwx  1 root root         17 Aug 31  2021 libcudnn.so.8 -> libcudnn.so.8.2.4
-rw-r--r--  1 root root     158392 Aug 31  2021 libcudnn.so.8.2.4
lrwxrwxrwx  1 root root         39 Apr 29 05:00 libcudnn_adv_infer.so -> /etc/alternatives/libcudnn_adv_infer_so
lrwxrwxrwx  1 root root         27 Aug 31  2021 libcudnn_adv_infer.so.8 -> libcudnn_adv_infer.so.8.2.4
-rw-r--r--  1 root root  129423408 Aug 31  2021 libcudnn_adv_infer.so.8.2.4
lrwxrwxrwx  1 root root         39 Apr 29 05:00 libcudnn_adv_train.so -> /etc/alternatives/libcudnn_adv_train_so
lrwxrwxrwx  1 root root         27 Aug 31  2021 libcudnn_adv_train.so.8 -> libcudnn_adv_train.so.8.2.4
-rw-r--r--  1 root root   98296496 Aug 31  2021 libcudnn_adv_train.so.8.2.4
lrwxrwxrwx  1 root root         39 Apr 29 05:00 libcudnn_cnn_infer.so -> /etc/alternatives/libcudnn_cnn_infer_so
lrwxrwxrwx  1 root root         27 Aug 31  2021 libcudnn_cnn_infer.so.8 -> libcudnn_cnn_infer.so.8.2.4
-rw-r--r--  1 root root  723562112 Aug 31  2021 libcudnn_cnn_infer.so.8.2.4
-rw-r--r--  1 root root  890513380 Aug 31  2021 libcudnn_cnn_infer_static.a
lrwxrwxrwx  1 root root         27 Aug 31  2021 libcudnn_cnn_infer_static_v8.a -> libcudnn_cnn_infer_static.a
lrwxrwxrwx  1 root root         39 Apr 29 05:00 libcudnn_cnn_train.so -> /etc/alternatives/libcudnn_cnn_train_so
lrwxrwxrwx  1 root root         27 Aug 31  2021 libcudnn_cnn_train.so.8 -> libcudnn_cnn_train.so.8.2.4
-rw-r--r--  1 root root   88248272 Aug 31  2021 libcudnn_cnn_train.so.8.2.4
-rw-r--r--  1 root root  134645628 Aug 31  2021 libcudnn_cnn_train_static.a
lrwxrwxrwx  1 root root         27 Aug 31  2021 libcudnn_cnn_train_static_v8.a -> libcudnn_cnn_train_static.a
lrwxrwxrwx  1 root root         39 Apr 29 05:00 libcudnn_ops_infer.so -> /etc/alternatives/libcudnn_ops_infer_so
lrwxrwxrwx  1 root root         27 Aug 31  2021 libcudnn_ops_infer.so.8 -> libcudnn_ops_infer.so.8.2.4
-rw-r--r--  1 root root  426627672 Aug 31  2021 libcudnn_ops_infer.so.8.2.4
lrwxrwxrwx  1 root root         39 Apr 29 05:00 libcudnn_ops_train.so -> /etc/alternatives/libcudnn_ops_train_so
lrwxrwxrwx  1 root root         27 Aug 31  2021 libcudnn_ops_train.so.8 -> libcudnn_ops_train.so.8.2.4
-rw-r--r--  1 root root   59658376 Aug 31  2021 libcudnn_ops_train.so.8.2.4
-rw-r--r--  1 root root 1374752618 Aug 31  2021 libcudnn_static.a
lrwxrwxrwx  1 root root         17 Aug 31  2021 libcudnn_static_v8.a -> libcudnn_static.a

2.5.2 - Build and install opencv-4.4.0 and opencv_contrib-4.4.0 in the container

2.5.2.1 - Prepare the files
I put the following three files in the host's home directory first.
opencv-4.4.0.zip
opencv_contrib-4.4.0.zip
boostdesc_bgm.i.zip (link: https://pan.baidu.com/s/1WzDfij41FmaRdPkBKAs9Kw, extraction code: 1234)
2.5.2.2 - On the host, look up the container ID and copy it
docker ps   (or docker ps -a)
2.5.2.3 - Copy the prepared files into the current container (ID: 47b9d751b8be)
Be sure to use the container ID you found on your own machine.
docker cp /home/heqingchun/boostdesc_bgm.i.zip 47b9d751b8be:/
Copy the opencv archives into the container:
docker cp /home/heqingchun/opencv-4.4.0.zip 47b9d751b8be:/
docker cp /home/heqingchun/opencv_contrib-4.4.0.zip 47b9d751b8be:/
-------- The following commands are run inside the container --------
I won't number each step; just run them one by one, with annotations after each command:
apt update  # refresh the package lists
apt-get install axel  # install the download tool
# If you did not download the archives on the host beforehand, you can fetch them with the commands below (run in the root directory)
# If you already have them, skip the next two commands
axel -k https://github.com/opencv/opencv/archive/4.4.0.zip  # download opencv-4.4.0
axel -k https://github.com/opencv/opencv_contrib/archive/refs/tags/4.4.0.zip  # download opencv_contrib-4.4.0

apt-get install unzip  # install the unzip tool
unzip opencv-4.4.0.zip  # extract opencv-4.4.0.zip
unzip opencv_contrib-4.4.0.zip  # extract opencv_contrib-4.4.0
mv opencv_contrib-4.4.0 opencv-4.4.0  # move opencv_contrib-4.4.0 into opencv-4.4.0
# boostdesc_bgm.i.zip can be found online; I also share it here
# (link: https://pan.baidu.com/s/1WzDfij41FmaRdPkBKAs9Kw, extraction code: 1234)
unzip boostdesc_bgm.i.zip  # extract it
cp /boostdesc_bgm.i/*.i /opencv-4.4.0/opencv_contrib-4.4.0/modules/xfeatures2d/src/    # copy the files into the target directory
apt-get install cmake  # install cmake
apt-get install g++  # install the g++ compiler
apt-get install build-essential libgtk2.0-dev libavcodec-dev libavformat-dev libjpeg-dev libswscale-dev libtiff5-dev libgtk2.0-dev pkg-config  # install the remaining dependencies
cp opencv-4.4.0/modules/features2d/test/*.impl.hpp opencv-4.4.0/opencv_contrib-4.4.0/modules/xfeatures2d/test/                # copy the listed files into the target directory
cp opencv-4.4.0/modules/features2d/test/test_invariance_utils.hpp opencv-4.4.0/opencv_contrib-4.4.0/modules/xfeatures2d/test/      # copy the listed file into the target directory
sed -i "s/features2d\/test\/test_detectors_regression.impl.hpp/test_detectors_regression.impl.hpp/g" /opencv-4.4.0/opencv_contrib-4.4.0/modules/xfeatures2d/test/test_features2d.cpp  # adjust the include path
sed -i "s/features2d\/test\/test_descriptors_regression.impl.hpp/test_descriptors_regression.impl.hpp/g" /opencv-4.4.0/opencv_contrib-4.4.0/modules/xfeatures2d/test/test_features2d.cpp  # adjust the include path
sed -i "s/features2d\/test\/test_detectors_invariance.impl.hpp/test_detectors_invariance.impl.hpp/g" /opencv-4.4.0/opencv_contrib-4.4.0/modules/xfeatures2d/test/test_rotation_and_scale_invariance.cpp  # adjust the include path
sed -i "s/features2d\/test\/test_descriptors_invariance.impl.hpp/test_descriptors_invariance.impl.hpp/g" /opencv-4.4.0/opencv_contrib-4.4.0/modules/xfeatures2d/test/test_rotation_and_scale_invariance.cpp  # adjust the include path
cd opencv-4.4.0  
mkdir -p build
cd build
(During the cmake step, downloading ADE (v0.1.1f.zip) can be very slow; if it fails, just retry a few times.)
cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_GENERATE_PKGCONFIG=ON -D OPENCV_ENABLE_NONFREE=YES -D OPENCV_EXTRA_MODULES_PATH=/opencv-4.4.0/opencv_contrib-4.4.0/modules/ ..
make -j64
make install
echo "/usr/local/lib" >> /etc/ld.so.conf.d/opencv4.conf
ldconfig
echo "PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig" >> /etc/bash.bashrc
echo "export PKG_CONFIG_PATH" >> /etc/bash.bashrc
source /etc/bash.bashrc 
echo "/usr/local/lib" >> /etc/ld.so.conf
ldconfig
apt-get install pkg-config  # install pkg-config (if it is not already present)
pkg-config --modversion opencv4  # check the installed version
If it prints 4.4.0, the installation succeeded.
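For a stronger check than pkg-config alone, you can compile and run a tiny OpenCV program from the shell. This is my own sketch, assuming g++ and the pkg-config setup above; the file name /tmp/test_opencv.cpp is arbitrary:
cat > /tmp/test_opencv.cpp <<'EOF'
#include <opencv2/opencv.hpp>
#include <iostream>
int main() {
    // print the OpenCV version this program was built against
    std::cout << CV_VERSION << std::endl;
    // create a small blank image to exercise the core module
    cv::Mat img = cv::Mat::zeros(64, 64, CV_8UC3);
    std::cout << "image size: " << img.cols << "x" << img.rows << std::endl;
    return 0;
}
EOF
g++ /tmp/test_opencv.cpp -o /tmp/test_opencv $(pkg-config --cflags --libs opencv4)
/tmp/test_opencv   # expected to print 4.4.0 and "image size: 64x64"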

2.5.3 - Build darknet-yolov4 in the container and test it

2.5.3.1 - Obtain the darknet source code plus yolov4.cfg, yolov4.weights, and a test image (dog.jpg)
Either download them on the host and copy them into the container:
docker cp /home/heqingchun/yolov4.cfg dc9f6a655775:/
docker cp /home/heqingchun/yolov4.weights dc9f6a655775:/
docker cp /home/heqingchun/darknet-master.zip dc9f6a655775:/
docker cp /home/heqingchun/dog.jpg dc9f6a655775:/

Or download them inside the container with axel -k <url> (either approach works):

axel -k https://github.com/AlexeyAB/darknet/archive/refs/heads/master.zip
axel -k https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights
axel -k https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4.cfg
From here on I assume you have all the files ready.
2.5.3.2 - Extract darknet-master.zip
unzip darknet-master.zip
2.5.3.3 - Set the build options before compiling
cd darknet-master
apt-get install vim
vim Makefile  # edit the Makefile
------------------
Set the following in the Makefile:
GPU=1
CUDNN=1
CUDNN_HALF=1
OPENCV=1
------------------
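One optional tweak (my note, not required by the original steps; check it against your own copy of the Makefile): for an RTX 2060, which has compute capability 7.5, you can also point the ARCH line at that architecture so the CUDA kernels are built for it directly, for example:
ARCH= -gencode arch=compute_75,code=[sm_75,compute_75]
Then continue with make as below.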
make -j64
2.5.3.4 - Run a detection to check that everything works (I placed all the files in the container's root directory)
./darknet detect /yolov4.cfg /yolov4.weights /dog.jpg
Output:
CUDA-version: 11040 (11060), cuDNN: 8.2.4, CUDNN_HALF=1, GPU count: 1  
 CUDNN_HALF=1 
 OpenCV version: 4.4.0
 0 : compute_capability = 750, cudnn_half = 1, GPU: NVIDIA GeForce RTX 2060 
net.optimized_memory = 0 
mini_batch = 1, batch = 8, time_steps = 1, train = 0 
   layer   filters  size/strd(dil)      input                output
   0 Create CUDA-stream - 0 
 Create cudnn-handle 0 
conv     32       3 x 3/ 1    608 x 608 x   3 ->  608 x 608 x  32 0.639 BF
   1 conv     64       3 x 3/ 2    608 x 608 x  32 ->  304 x 304 x  64 3.407 BF
   2 conv     64       1 x 1/ 1    304 x 304 x  64 ->  304 x 304 x  64 0.757 BF
   3 route  1 		                           ->  304 x 304 x  64 
   4 conv     64       1 x 1/ 1    304 x 304 x  64 ->  304 x 304 x  64 0.757 BF
   5 conv     32       1 x 1/ 1    304 x 304 x  64 ->  304 x 304 x  32 0.379 BF
   6 conv     64       3 x 3/ 1    304 x 304 x  32 ->  304 x 304 x  64 3.407 BF
   7 Shortcut Layer: 4,  wt = 0, wn = 0, outputs: 304 x 304 x  64 0.006 BF
   8 conv     64       1 x 1/ 1    304 x 304 x  64 ->  304 x 304 x  64 0.757 BF
   9 route  8 2 	                           ->  304 x 304 x 128 
  10 conv     64       1 x 1/ 1    304 x 304 x 128 ->  304 x 304 x  64 1.514 BF
  11 conv    128       3 x 3/ 2    304 x 304 x  64 ->  152 x 152 x 128 3.407 BF
  12 conv     64       1 x 1/ 1    152 x 152 x 128 ->  152 x 152 x  64 0.379 BF
  13 route  11 		                           ->  152 x 152 x 128 
  14 conv     64       1 x 1/ 1    152 x 152 x 128 ->  152 x 152 x  64 0.379 BF
  15 conv     64       1 x 1/ 1    152 x 152 x  64 ->  152 x 152 x  64 0.189 BF
  16 conv     64       3 x 3/ 1    152 x 152 x  64 ->  152 x 152 x  64 1.703 BF
  17 Shortcut Layer: 14,  wt = 0, wn = 0, outputs: 152 x 152 x  64 0.001 BF
  18 conv     64       1 x 1/ 1    152 x 152 x  64 ->  152 x 152 x  64 0.189 BF
  19 conv     64       3 x 3/ 1    152 x 152 x  64 ->  152 x 152 x  64 1.703 BF
  20 Shortcut Layer: 17,  wt = 0, wn = 0, outputs: 152 x 152 x  64 0.001 BF
  21 conv     64       1 x 1/ 1    152 x 152 x  64 ->  152 x 152 x  64 0.189 BF
  22 route  21 12 	                           ->  152 x 152 x 128 
  23 conv    128       1 x 1/ 1    152 x 152 x 128 ->  152 x 152 x 128 0.757 BF
  24 conv    256       3 x 3/ 2    152 x 152 x 128 ->   76 x  76 x 256 3.407 BF
  25 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  26 route  24 		                           ->   76 x  76 x 256 
  27 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
  28 conv    128       1 x 1/ 1     76 x  76 x 128 ->   76 x  76 x 128 0.189 BF
  29 conv    128       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 128 1.703 BF
  30 Shortcut Layer: 27,  wt = 0, wn = 0, outputs:  76 x  76 x 128 0.001 BF
  31 conv    128       1 x 1/ 1     76 x  76 x 128 ->   76 x  76 x 128 0.189 BF
  32 conv    128       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 128 1.703 BF
  33 Shortcut Layer: 30,  wt = 0, wn = 0, outputs:  76 x  76 x 128 0.001 BF
  34 conv    128       1 x 1/ 1     76 x  76 x 128 ->   76 x  76 x 128 0.189 BF
  35 conv    128       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 128 1.703 BF
  36 Shortcut Layer: 33,  wt = 0, wn = 0, outputs:  76 x  76 x 128 0.001 BF
  37 conv    128       1 x 1/ 1     76 x  76 x 128 ->   76 x  76 x 128 0.189 BF
  38 conv    128       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 128 1.703 BF
  39 Shortcut Layer: 36,  wt = 0, wn = 0, outputs:  76 x  76 x 128 0.001 BF
  40 conv    128       1 x 1/ 1     76 x  76 x 128 ->   76 x  76 x 128 0.189 BF
  41 conv    128       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 128 1.703 BF
  42 Shortcut Layer: 39,  wt = 0, wn = 0, outputs:  76 x  76 x 128 0.001 BF
  43 conv    128       1 x 1/ 1     76 x  76 x 128 ->   76 x  76 x 128 0.189 BF
  44 conv    128       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 128 1.703 BF
  45 Shortcut Layer: 42,  wt = 0, wn = 0, outputs:  76 x  76 x 128 0.001 BF
  46 conv    128       1 x 1/ 1     76 x  76 x 128 ->   76 x  76 x 128 0.189 BF
  47 conv    128       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 128 1.703 BF
  48 Shortcut Layer: 45,  wt = 0, wn = 0, outputs:  76 x  76 x 128 0.001 BF
  49 conv    128       1 x 1/ 1     76 x  76 x 128 ->   76 x  76 x 128 0.189 BF
  50 conv    128       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 128 1.703 BF
  51 Shortcut Layer: 48,  wt = 0, wn = 0, outputs:  76 x  76 x 128 0.001 BF
  52 conv    128       1 x 1/ 1     76 x  76 x 128 ->   76 x  76 x 128 0.189 BF
  53 route  52 25 	                           ->   76 x  76 x 256 
  54 conv    256       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 256 0.757 BF
  55 conv    512       3 x 3/ 2     76 x  76 x 256 ->   38 x  38 x 512 3.407 BF
  56 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  57 route  55 		                           ->   38 x  38 x 512 
  58 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
  59 conv    256       1 x 1/ 1     38 x  38 x 256 ->   38 x  38 x 256 0.189 BF
  60 conv    256       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 256 1.703 BF
  61 Shortcut Layer: 58,  wt = 0, wn = 0, outputs:  38 x  38 x 256 0.000 BF
  62 conv    256       1 x 1/ 1     38 x  38 x 256 ->   38 x  38 x 256 0.189 BF
  63 conv    256       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 256 1.703 BF
  64 Shortcut Layer: 61,  wt = 0, wn = 0, outputs:  38 x  38 x 256 0.000 BF
  65 conv    256       1 x 1/ 1     38 x  38 x 256 ->   38 x  38 x 256 0.189 BF
  66 conv    256       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 256 1.703 BF
  67 Shortcut Layer: 64,  wt = 0, wn = 0, outputs:  38 x  38 x 256 0.000 BF
  68 conv    256       1 x 1/ 1     38 x  38 x 256 ->   38 x  38 x 256 0.189 BF
  69 conv    256       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 256 1.703 BF
  70 Shortcut Layer: 67,  wt = 0, wn = 0, outputs:  38 x  38 x 256 0.000 BF
  71 conv    256       1 x 1/ 1     38 x  38 x 256 ->   38 x  38 x 256 0.189 BF
  72 conv    256       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 256 1.703 BF
  73 Shortcut Layer: 70,  wt = 0, wn = 0, outputs:  38 x  38 x 256 0.000 BF
  74 conv    256       1 x 1/ 1     38 x  38 x 256 ->   38 x  38 x 256 0.189 BF
  75 conv    256       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 256 1.703 BF
  76 Shortcut Layer: 73,  wt = 0, wn = 0, outputs:  38 x  38 x 256 0.000 BF
  77 conv    256       1 x 1/ 1     38 x  38 x 256 ->   38 x  38 x 256 0.189 BF
  78 conv    256       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 256 1.703 BF
  79 Shortcut Layer: 76,  wt = 0, wn = 0, outputs:  38 x  38 x 256 0.000 BF
  80 conv    256       1 x 1/ 1     38 x  38 x 256 ->   38 x  38 x 256 0.189 BF
  81 conv    256       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 256 1.703 BF
  82 Shortcut Layer: 79,  wt = 0, wn = 0, outputs:  38 x  38 x 256 0.000 BF
  83 conv    256       1 x 1/ 1     38 x  38 x 256 ->   38 x  38 x 256 0.189 BF
  84 route  83 56 	                           ->   38 x  38 x 512 
  85 conv    512       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 512 0.757 BF
  86 conv   1024       3 x 3/ 2     38 x  38 x 512 ->   19 x  19 x1024 3.407 BF
  87 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
  88 route  86 		                           ->   19 x  19 x1024 
  89 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
  90 conv    512       1 x 1/ 1     19 x  19 x 512 ->   19 x  19 x 512 0.189 BF
  91 conv    512       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x 512 1.703 BF
  92 Shortcut Layer: 89,  wt = 0, wn = 0, outputs:  19 x  19 x 512 0.000 BF
  93 conv    512       1 x 1/ 1     19 x  19 x 512 ->   19 x  19 x 512 0.189 BF
  94 conv    512       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x 512 1.703 BF
  95 Shortcut Layer: 92,  wt = 0, wn = 0, outputs:  19 x  19 x 512 0.000 BF
  96 conv    512       1 x 1/ 1     19 x  19 x 512 ->   19 x  19 x 512 0.189 BF
  97 conv    512       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x 512 1.703 BF
  98 Shortcut Layer: 95,  wt = 0, wn = 0, outputs:  19 x  19 x 512 0.000 BF
  99 conv    512       1 x 1/ 1     19 x  19 x 512 ->   19 x  19 x 512 0.189 BF
 100 conv    512       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x 512 1.703 BF
 101 Shortcut Layer: 98,  wt = 0, wn = 0, outputs:  19 x  19 x 512 0.000 BF
 102 conv    512       1 x 1/ 1     19 x  19 x 512 ->   19 x  19 x 512 0.189 BF
 103 route  102 87 	                           ->   19 x  19 x1024 
 104 conv   1024       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x1024 0.757 BF
 105 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
 106 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
 107 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
 108 max                5x 5/ 1     19 x  19 x 512 ->   19 x  19 x 512 0.005 BF
 109 route  107 		                           ->   19 x  19 x 512 
 110 max                9x 9/ 1     19 x  19 x 512 ->   19 x  19 x 512 0.015 BF
 111 route  107 		                           ->   19 x  19 x 512 
 112 max               13x13/ 1     19 x  19 x 512 ->   19 x  19 x 512 0.031 BF
 113 route  112 110 108 107 	                   ->   19 x  19 x2048 
 114 conv    512       1 x 1/ 1     19 x  19 x2048 ->   19 x  19 x 512 0.757 BF
 115 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
 116 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
 117 conv    256       1 x 1/ 1     19 x  19 x 512 ->   19 x  19 x 256 0.095 BF
 118 upsample                 2x    19 x  19 x 256 ->   38 x  38 x 256
 119 route  85 		                           ->   38 x  38 x 512 
 120 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
 121 route  120 118 	                           ->   38 x  38 x 512 
 122 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
 123 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
 124 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
 125 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
 126 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
 127 conv    128       1 x 1/ 1     38 x  38 x 256 ->   38 x  38 x 128 0.095 BF
 128 upsample                 2x    38 x  38 x 128 ->   76 x  76 x 128
 129 route  54 		                           ->   76 x  76 x 256 
 130 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
 131 route  130 128 	                           ->   76 x  76 x 256 
 132 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
 133 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
 134 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
 135 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
 136 conv    128       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 128 0.379 BF
 137 conv    256       3 x 3/ 1     76 x  76 x 128 ->   76 x  76 x 256 3.407 BF
 138 conv    255       1 x 1/ 1     76 x  76 x 256 ->   76 x  76 x 255 0.754 BF
 139 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.20
nms_kind: greedynms (1), beta = 0.600000 
 140 route  136 		                           ->   76 x  76 x 128 
 141 conv    256       3 x 3/ 2     76 x  76 x 128 ->   38 x  38 x 256 0.852 BF
 142 route  141 126 	                           ->   38 x  38 x 512 
 143 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
 144 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
 145 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
 146 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
 147 conv    256       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 256 0.379 BF
 148 conv    512       3 x 3/ 1     38 x  38 x 256 ->   38 x  38 x 512 3.407 BF
 149 conv    255       1 x 1/ 1     38 x  38 x 512 ->   38 x  38 x 255 0.377 BF
 150 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.10
nms_kind: greedynms (1), beta = 0.600000 
 151 route  147 		                           ->   38 x  38 x 256 
 152 conv    512       3 x 3/ 2     38 x  38 x 256 ->   19 x  19 x 512 0.852 BF
 153 route  152 116 	                           ->   19 x  19 x1024 
 154 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
 155 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
 156 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
 157 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
 158 conv    512       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 512 0.379 BF
 159 conv   1024       3 x 3/ 1     19 x  19 x 512 ->   19 x  19 x1024 3.407 BF
 160 conv    255       1 x 1/ 1     19 x  19 x1024 ->   19 x  19 x 255 0.189 BF
 161 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, obj_norm: 1.00, cls_norm: 1.00, delta_norm: 1.00, scale_x_y: 1.05
nms_kind: greedynms (1), beta = 0.600000 
Total BFLOPS 128.459 
avg_outputs = 1068395 
 Allocate additional workspace_size = 52.44 MB 
Loading weights from /yolov4.weights...
 seen 64, trained: 32032 K-images (500 Kilo-batches_64) 
Done! Loaded 162 layers from weights-file 
 Detection layer: 139 - type = 28 
 Detection layer: 150 - type = 28 
 Detection layer: 161 - type = 28 
/dog.jpg: Predicted in 708.269000 milli-seconds.
bicycle: 92%
dog: 98%
truck: 92%
pottedplant: 33%
OpenCV exception: show_image_cv 
OpenCV exception: wait_key_cv 
OpenCV exception: destroy_all_windows_cv 
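The three "OpenCV exception" lines at the end are expected here: the container has no display attached, so darknet cannot open its preview window, and detection itself is unaffected. If I recall the AlexeyAB fork correctly, adding the -dont_show flag suppresses the window entirely, e.g.:
./darknet detect /yolov4.cfg /yolov4.weights /dog.jpg -dont_show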
2.5.3.5 - The run writes a "predictions.jpg" file into the darknet folder; since images cannot be viewed inside the container, I copy it to the host to look at it
docker cp dc9f6a655775:/darknet-master/predictions.jpg ~
2.5.3.6 - Opening the image shows the detections drawn on it, so the test succeeded! Type exit in the container's terminal to leave the container.

Part 3: Committing the container to an image, compressing it, and transferring it

1. After exiting, the container still exists on the machine; you can find its ID with docker ps -a

2. Create a new image from the container we just configured

2.1 - Stop the current container

docker stop <container-ID>

2.2 - Commit the container to an image

docker commit -a "heqingchun" -m "cuda_cudnn_opencv_darknet" dc9f6a655775 my_images:v1
Here -a is the author, -m is the commit message, dc9f6a655775 is the container ID, and my_images:v1 is the image name:tag.
After committing, docker images shows the new image, and docker run -it my_images:v1 runs it.

3. Transferring the image

3.1 - Save the image as a tar archive

docker save my_images:v1 > ~/my_docker_images.tar
This command creates my_docker_images.tar in my home directory; copy it to a USB drive or similar to move it to another machine.
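The tar produced by docker save is uncompressed and roughly as large as the size reported by docker images. If transfer size matters, you can compress it on the way out; docker load understands gzip-compressed archives, so loading works exactly the same way:
docker save my_images:v1 | gzip > ~/my_docker_images.tar.gz
docker load < ~/my_docker_images.tar.gz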

4. Loading the image

4.1 - For example: copy the archive via USB drive to another machine that has Docker installed, put it in the home directory, and cd there

docker load < my_docker_images.tar
Docker automatically unpacks the archive and adds the image to the local repository; on the new machine you can confirm with docker images.
Then run the image again with:
docker run -it --name test_name -v /data/test_name:/data/test_name --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all my_images:v1
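Once inside the new container, a few quick checks (my own suggestion) confirm that the migrated environment still works; the paths assume everything was kept in the container's root directory as in this guide:
nvidia-smi                       # the GPU should be visible through the nvidia runtime
pkg-config --modversion opencv4  # should print 4.4.0
ls /darknet-master/darknet       # the compiled darknet binary should still be present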

The blogs I referenced are listed here, with many thanks to their authors.
System installation: https://blog.csdn.net/baidu_36602427/article/details/86548203
Graphics driver installation: https://blog.csdn.net/Perfect886/article/details/119109380
opencv-4.4.0 installation: https://blog.csdn.net/cloud_shen/article/details/107878654
Changing software sources: https://blog.csdn.net/baidu_36602427/article/details/86551862
Docker installation: https://www.runoob.com/docker/ubuntu-docker-install.html
Docker registry mirrors: https://blog.csdn.net/KEYMA/article/details/114118052
Installing nvidia-docker, pulling and running the nvidia/cuda image: https://blog.csdn.net/weixin_50008473/article/details/119464898
Container and image operations: https://www.runoob.com/docker/docker-command-manual.html
Building, installing, and testing darknet: https://blog.51cto.com/u_11495341/3038915

Summary

That is 无痕丶Shadow's walkthrough of "installing the NVIDIA graphics driver, Docker, and nvidia-docker on Ubuntu 18.04.6; building and testing opencv-4.4.0 and darknet-yolov4 in a container; and packaging and transferring the container image". I hope it gives you some ideas. This is my first blog post, and it feels pretty good.
Note: please credit the original address when reposting, thanks~
