
PaddlePaddle + DeepSpeech2 Automatic Speech Recognition Deployment

杭志泽
2023-12-01


Background

Automatic speech recognition: this post deploys the DeepSpeech2 model with PaddlePaddle and serves it with the repository's demo server, using the published pre-trained models.

Environment

  1. DeepSpeech2
  2. PaddlePaddle 1.8.5
  3. Python 2.7
  4. nvidia-docker
  5. Ubuntu 18.04

Installation and Configuration

If you do not want to use nvidia-docker, skip directly to step 5.

1. Install nvidia-docker
curl https://get.docker.com | sh
sudo systemctl start docker && sudo systemctl enable docker
# Set up the stable repository and the GPG key:
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

# To access experimental features, such as CUDA on WSL or the new MIG capability on A100 GPUs, you may need to add the experimental branch to the repository list.
# This step is optional.
curl -s -L https://nvidia.github.io/nvidia-container-runtime/experimental/$distribution/nvidia-container-runtime.list | sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list

# Update the package lists, then install nvidia-docker2 (and its dependencies):
sudo apt-get update

sudo apt-get install -y nvidia-docker2

# After setting the default runtime, restart the Docker daemon to complete the installation:
sudo systemctl restart docker
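To confirm the GPU is visible from inside a container, a quick sanity check can be run (a minimal sketch; the nvidia/cuda:10.0-base tag is only an example and should match the installed driver):
# Optional: verify that containers can see the GPU
sudo nvidia-docker run --rm nvidia/cuda:10.0-base nvidia-smi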
2. Pull the Docker image
sudo nvidia-docker pull hub.baidubce.com/paddlepaddle/deep_speech_fluid:latest-gpu
3. Clone the repository
git clone https://github.com/PaddlePaddle/DeepSpeech.git
4. Run the Docker image
# /home/aiuser/test/nvidia-docker is the host directory containing the DeepSpeech clone. Note: ${pwd} expands a (normally empty) lowercase shell variable, which is why echo ${pwd} printed nothing; use command substitution $(pwd) or the built-in $PWD to get the current directory.
sudo nvidia-docker run -it -p 0.0.0.0:8086:8086 -v /home/aiuser/test/nvidia-docker/DeepSpeech:/DeepSpeech hub.baidubce.com/paddlepaddle/deep_speech_fluid:latest-gpu /bin/bash
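If the DeepSpeech clone sits in the current working directory, the same command can be written with command substitution instead of a hard-coded path (a sketch of the equivalent invocation):
sudo nvidia-docker run -it -p 0.0.0.0:8086:8086 \
  -v "$(pwd)/DeepSpeech":/DeepSpeech \
  hub.baidubce.com/paddlepaddle/deep_speech_fluid:latest-gpu /bin/bash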
5. Install dependencies
apt-get install -y pkg-config libflac-dev libogg-dev libvorbis-dev libboost-dev swig python-dev
# Skip this if it was already done earlier (e.g., the repository was already cloned in step 3)
git clone https://github.com/PaddlePaddle/DeepSpeech.git
cd DeepSpeech
sh setup.sh

6. Install PaddlePaddle

#python2.7
python -m pip install paddlepaddle-gpu==1.8.5.post107 -i https://mirror.baidu.com/pypi/simple
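A quick import test confirms the GPU build works before moving on (a minimal sketch; run_check() is Paddle 1.x's built-in installation self-test):
# Verify the PaddlePaddle 1.8 GPU installation
python -c "import paddle.fluid as fluid; fluid.install_check.run_check()"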

Getting Started

For the complete training workflow, refer to the page below; here the pre-trained models are used directly.

https://github.com/PaddlePaddle/DeepSpeech/blob/develop/README_cn.md#%E5%BC%80%E5%A7%8B

cd DeepSpeech
# Download the pre-trained model
bash models/aishell/download_model.sh

# Start the demo server
CUDA_VISIBLE_DEVICES=0 \
python deploy/demo_server.py \
--host_ip 0.0.0.0 \
--host_port 8086

#Aishell
CUDA_VISIBLE_DEVICES=0 \
python deploy/demo_server.py \
--host_ip 0.0.0.0 \
--host_port 8086 \
--num_samples=10 \
--beam_size=300 \
--num_proc_bsearch=8 \
--num_conv_layers=2 \
--num_rnn_layers=3 \
--rnn_layer_size=1024 \
--alpha=2.6 \
--beta=5.0 \
--cutoff_prob=0.99 \
--cutoff_top_n=40 \
--use_gru=True \
--use_gpu=True \
--share_rnn_weights=False \
--infer_manifest='models/aishell/manifest.test' \
--mean_std_path='models/aishell/mean_std.npz' \
--vocab_path='models/aishell/vocab.txt' \
--model_path='models/aishell' \
--lang_model_path='models/lm/zh_giga.no_cna_cmn.prune01244.klm' \
--decoding_method='ctc_beam_search' \
--error_rate_type='cer' \
--specgram_type='linear'

# baidu_en8k
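# Note: baidu_en8k is an English model; the Chinese language model and --error_rate_type='cer' below are copied from the Aishell example and would normally be swapped for an English LM and 'wer'.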
CUDA_VISIBLE_DEVICES=0 \
python deploy/demo_server.py \
--host_ip 0.0.0.0 \
--host_port 8086 \
--num_samples=10 \
--beam_size=300 \
--num_proc_bsearch=8 \
--num_conv_layers=2 \
--num_rnn_layers=3 \
--rnn_layer_size=1024 \
--alpha=2.6 \
--beta=5.0 \
--cutoff_prob=0.99 \
--cutoff_top_n=40 \
--use_gru=True \
--use_gpu=True \
--share_rnn_weights=False \
--infer_manifest='models/baidu_en8k/manifest.test' \
--mean_std_path='models/baidu_en8k/mean_std.npz' \
--vocab_path='models/baidu_en8k/vocab.txt' \
--model_path='models/baidu_en8k' \
--lang_model_path='models/lm/zh_giga.no_cna_cmn.prune01244.klm' \
--decoding_method='ctc_beam_search' \
--error_rate_type='cer' \
--specgram_type='linear'
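With the server running, the demo client included in the repository can be used to send recorded speech to it (a sketch; it assumes deploy/demo_client.py keeps the same --host_ip/--host_port flag names as the server and that the client machine has a microphone; <server_ip> is a placeholder for the server's address):
# Run on the client machine
python deploy/demo_client.py \
--host_ip <server_ip> \
--host_port 8086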

Problems Encountered

1. Error when starting the Docker container

docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: Running hook #1:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: nvml error: driver/library version mismatch: unknown.

This is caused by the NVIDIA kernel driver and the user-space libraries being out of sync; reinstall the GPU driver (and reboot so the matching kernel module is loaded).
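Before reinstalling, the mismatch can be confirmed by comparing the loaded kernel module with the user-space tools (a quick sketch):
# Driver version reported by the loaded kernel module
cat /proc/driver/nvidia/version
# nvidia-smi fails with the same "driver/library version mismatch" message when the two are out of sync
nvidia-smi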

2. Error when starting the Docker container

The nvidia runtime is not registered with Docker:

docker: Error response from daemon: Unknown runtime specified nvidia.

Either of the two fixes below registers it.

Systemd drop-in file

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/override.conf <<EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --host=fd:// --add-runtime=nvidia=/usr/bin/nvidia-container-runtime
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

Daemon configuration file

sudo tee /etc/docker/daemon.json <<EOF
{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
sudo pkill -SIGHUP dockerd
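After applying either fix, the registered runtimes can be listed to confirm that nvidia now appears (a quick check):
# 'nvidia' should show up next to runc
docker info | grep -i runtime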