2. System Environment, Compilation and Running
Host
Environment | Specification |
---|---|
Operating system | Ubuntu 18.04 |
Build system | Catkin Build System |
CUDA (optional) | 10.2 |
TensorRT (optional) | 7.0.0 |
libtorch | cxx11 ABI build |
PyTorch | 1.5 |
ONNX | 1.1 |
netron (optional) | / |
Device
Environment | Specification |
---|---|
Operating system | Ubuntu 18.04 |
JetPack toolkit | JetPack 4.4 |
CUDA | 10.2 |
TensorRT | 7.0.0 |
OpenCV | 4.2 |
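If you want to confirm that the device image actually matches these versions, the usual checks look like the following (a hedged sketch; package and `.pc` names can differ slightly between JetPack releases):

```bash
# CUDA toolkit version
nvcc --version
# Installed TensorRT packages
dpkg -l | grep -i tensorrt
# OpenCV version (the pkg-config name may be opencv or opencv4 depending on the install)
pkg-config --modversion opencv4
```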
Compilation
First, install the required dependencies. Ubuntu 18.04 is recommended.
```bash
sudo apt-get install -y ros-melodic-opencv3 \
  ros-melodic-cv-bridge \
  ros-melodic-image-transport \
  ros-melodic-stage-ros \
  ros-melodic-map-server \
  ros-melodic-laser-geometry \
  ros-melodic-interactive-markers \
  ros-melodic-tf \
  ros-melodic-pcl-* \
  ros-melodic-libg2o \
  ros-melodic-rplidar-ros \
  ros-melodic-rviz \
  protobuf-compiler \
  libprotobuf-dev \
  libsuitesparse-dev \
  libgoogle-glog-dev
```
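With the dependencies in place, the package is built like any other catkin package. A minimal sketch, assuming a fresh workspace at ~/catkin_ws; the repository URL is left as a placeholder:

```bash
mkdir -p ~/catkin_ws/src && cd ~/catkin_ws/src
git clone {REPO_URL}            # replace with the project repository
cd ~/catkin_ws
catkin_make                     # or `catkin build` if you use catkin_tools
source devel/setup.bash
```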
Libtorch installation
libtorch is the C++ distribution of PyTorch, built on Caffe2, ATen and c10, and prebuilt binaries can be installed directly. Note that libtorch ships in two flavors: one built with the cxx11 ABI and one with the pre-cxx11 ABI; the pre-cxx11 build may fail to compile together with ROS, so take care to pick the cxx11 ABI version. Since frame rate is not a requirement here, pure CPU inference is used.
```bash
wget https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-1.6.0%2Bcpu.zip
unzip libtorch-cxx11-abi-shared-with-deps-1.6.0+cpu.zip
sudo mv libtorch /usr/local/
```
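If the deployment node consumes a TorchScript model (the standard libtorch workflow), the trained PyTorch model has to be exported first. A minimal sketch, assuming the trained detector can be loaded as a regular torch.nn.Module; the file names are placeholders, not the project's actual paths:

```python
import torch

# Load the trained detector (placeholder path) and switch to inference mode.
model = torch.load("{PRETRAINED_MODEL_PATH}", map_location="cpu")
model.eval()

# Trace with a dummy input; 512 matches IMAGE_SIZE in ./utils/setting_dict.py.
example = torch.rand(1, 3, 512, 512)
traced = torch.jit.trace(model, example)

# The resulting file can then be loaded from C++ with torch::jit::load().
traced.save("ssd_cpu.pt")
```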
CUDA 10.2 installation (NVIDIA CUDA 10.2)
```bash
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget http://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
sudo apt-key add /var/cuda-repo-10-2-local-10.2.89-440.33.01/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda
```
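After installation, the CUDA toolchain usually also has to be added to the environment (append these lines to ~/.bashrc to make them permanent):

```bash
export PATH=/usr/local/cuda-10.2/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
```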
TensorRT 7.0.0 installation (NVIDIA TensorRT 7.0.0): download the deb package directly from the official website, extract it, and complete the installation.
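The deb-based install usually looks roughly like the following; the exact file and repository names depend on the package downloaded from the NVIDIA site, so treat them as placeholders:

```bash
sudo dpkg -i nv-tensorrt-repo-ubuntu1804-*-trt7.0.*_amd64.deb
sudo apt-key add /var/nv-tensorrt-repo-*/7fa2af80.pub
sudo apt-get update
sudo apt-get install -y tensorrt
```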
Usage
using SSD-detection
train
- set your dataset (VOC) path, which contains the images and annotation directories, in ./utils/setting_dict.py, for example:
"test" : {
"data_set" : ["/DJI/DJItest"],
"batch_size" : 2,
"transform" :
{
"PIXEL_MEAN" : [123, 117, 104],
"IMAGE_SIZE" : 512,
}
},
"train": {
"data_set" : ["/DJI/DJItrain/"],
"batch_size" : 8,
"transform" :
{
"PIXEL_MEAN" : [123, 117, 104],
"IMAGE_SIZE" : 512,
},
- run train.py
python3 train.py
- (optional) specify the output path; you can view the loss with TensorBoard if you like (launch command below). The loss will be stored in the output directory.
python3 train.py --out_dir {YOUR_PATH}
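The stored logs can then be viewed with:

```bash
tensorboard --logdir {YOUR_PATH}
```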
test
python3 test.py
finetune
python3 train.py --fine_tune 1 --pretrained_model "{PRETRAINED_MODEL_PATH}"
using RTS-deploy
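Before launching the node, make sure a ROS master is running and the workspace is sourced (the workspace path below is an assumption):

```bash
roscore &                            # start the ROS master if it is not already running
source ~/catkin_ws/devel/setup.bash  # use your actual workspace path
```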
rosrun ICRA-vision ICRA_vision