
Kaldi usage: a summary of the common examples under egs and their functionality

张森
2023-12-01

Example Table

Recipe under egs | Data source, task | Pipeline / tools used
aidatatang_200zh/s5 | Datatang 200 h open-source Mandarin data, ASR | LM+MFCC+Mono+Triphone (tri1: deltas; tri2: delta+delta-delta; tri3a: lda+mllt)+fMLLR+SAT+TDNN
aishell/v1 | OpenSLR-33 data, speaker recognition | MFCC+UBM+PLDA
aishell/s5 | OpenSLR-33 data, ASR | LM+MFCC+Mono+Triphone+fMLLR+SAT+TDNN
aishell2/s5 | AISHELL-2, ASR | LM+GMM-HMM (MFCC+Mono+Triphone)+TDNN
ami/s5/run_ihm.sh | ----, ASR | IHM (individual headset microphone): LM+MFCC+Mono+Triphone+tri4a (LDA+MLLT+SAT)+DNN+TDNN
ami/s5/run_mdm.sh | ----, ASR | MDM (multiple distant microphones): LM+MFCC+Mono+Triphone+SAT+MMI+DNN (dnn+lda+mllt)+TDNN
ami/s5/run_sdm.sh | ----, ASR | SDM (single distant microphone): LM+MFCC+Mono+Triphone+SAT+MMI+DNN (dnn+lda+mllt)+TDNN
ami/s5b | ----, ASR | LM+MFCC+tri1 (deltas)+tri2 (lda+mllt)+tri3 (lda+mllt+sat)+tdnn
an4/s5 | AN4, ASR | LM+MFCC+tri1 (deltas)+tri2 (lda+mllt)+tri3 (lda+mllt+sat)
apiai_decode/s5 | 16 kHz data | decoding only, no model training
aspire/s5 | corpora3/LDC/LDC2005T19, corpora3/LDC/LDC2004S13, corpora3/LDC/LDC2005S13, ASR | LM+MFCC+CMVN+Mono+Triphone+fMLLR+SAT+build_silprob.sh+TDNN+TDNN_LSTM
aurora4/s5 | corpora5/LDC/LDC93S6B, corpora5/AURORA, ASR | MFCC+tri1 (deltas)+tri2 (deltas)+tri2b (lda_mllt)+tri3b (lda+mllt+sat)+TDNN
babel/s5 | quite a few run scripts; only the distinctive parts are listed, ASR | plp+pitch+feats+(ffv)+mono+tri1+tri2+tri3 (deltas)+tri4 (lda_mllt)+sat+SGMM (fmllr+ubm+sgmm)+MMI
bentham/v1/run_end2end.sh | corpora5/handwriting_ocr/hwr1/ICDAR-HTR-Competition-2015, image recognition (OCR), end-to-end | features+cmvn+lm+e2e_cnn
bn_music_speech/v1 | corpora5/LDC/LDC97S44, corpora/LDC/LDC97T22, music/speech discrimination | MFCC+UBM+vad_GMM
callhome_diarization/v1 | swbd, speaker diarization of telephone conversations | MFCC+VAD+UBM+PLDA+clustering
callhome_diarization/v2 | swbd, speaker diarization of telephone conversations | xvector+vad+data augmentation+mfcc+plda+clustering+diag (ubm)+VB
callhome_egyptian/s5 | (omitted), ASR | mfcc+cmvn+mono+Triphone+sat+fmllr+tdnn
casia_hwdb/v1 | corpora5/handwriting_ocr/CASIA_HWDB/Offline, end-to-end handwriting recognition (OCR) | ----
chime1-6 | (omitted), ASR | ----
cifar/v1 | CIFAR, image recognition | ----
cmu_cslu_kids/s5 | (omitted), ASR | LM+MFCC+CMVN+Mono+Triphone+MMI+boosting+MPE+SAT+VTLN+tdnnf
cnceleb/v1 | CN-Celeb dataset, speaker recognition | MFCC+UBM+PLDA
commonvoice/s5 | corpus v1, ASR | LM+MFCC+Mono+Triphone+fmllr+tdnn
csj/s5 | Japanese corpus (CSJ), ASR | LM+MFCC+CMVN+GMM-HMM+fmllr+(sgmm, tdnn, dnn, rnnlm, etc.)
dihard_2018/v1 | (omitted), speaker diarization | MFCC+UBM+PLDA+clustering
dihard_2018/v2 | (omitted), speaker diarization | MFCC+data augmentation+cmvn+xvector+plda+clustering
egs/fame | Frisian corpus, ASR (s5) and speaker recognition (v1, v2) | s5: mfcc+cmvn+mono+triphone+sgmm+dnn+dnn_fbank; v1: the usual pipeline, omitted; v2: adds ubm+dnn
farsdat/s5 | Persian corpus, ASR | MFCC+CMVN+Mono+tri1 (deltas+delta-deltas)+tri2 (LDA+MLLT)+tri3 (LDA+MLLT+SAT)+SGMM+MMI+SGMM2
fisher_callhome_spanish/s5 | Spanish corpus, ASR | MFCC+CMVN+Mono+deltas+deltas+lda_mllt+fmllr+sgmm+mmi+tdnn_1g
fisher_english/s5 | Fisher English corpus, ASR | MFCC+CMVN+deltas+deltas+lda_mllt+fmllr+sat
fisher_swbd/s5 | SWBD corpus, ASR | lm+mfcc+cmvn+mono+delta+delta+delta+lda_mllt+fmllr+sat+lmrescore
formosa/s5 | Taiwanese speech, ASR | lm+mfcc+pitch+cmvn+mono+delta+delta+lda_mllt+fmllr+sat+tdnn
gale_arabic | Arabic corpus, ASR | s5: lm+mfcc+cmvn+mono+delta+delta+lda_mllt+sat+fmllr+mmi+sgmm+dnn; s5b: lm+mfcc+cmvn+mono+delta+lda_mllt+sat+fmllr+tdnn; s5c: lm+mfcc+mono+delta+lda_mllt+sat+fmllr+tdnn; s5d: lm+mfcc+cmvn+mono+delta+lda_mllt+sat+fmllr+tdnn+tdnn_lstm
gale_mandarin/s5 | Mandarin Chinese corpus, ASR | lm+mfcc+cmvn+mono+delta+lda_mllt+MMI+MPE+sat+fmllr+UBM+sgmm
gop/s5 | (omitted), Goodness of Pronunciation (GOP) pronunciation scoring | ----
gp | three languages (GlobalPhone), 15-20 h each, multilingual ASR | ----
heroico/s5 | Spanish, ASR | lm+mfcc+cmvn+mono+delta+lda_mllt+sat+fmllr+tdnn
hi_mia/v1 | OpenSLR, wake-word detection | ----
hkust/s5 | HKUST Mandarin telephone speech, ASR | lm+mfcc+cmvn+mono+delta+delta+lda_mllt+fmllr+sat+nnet2_ms+tdnn+tdnn
hub4_english/s5 | English Broadcast News (HUB4) corpus, ASR | lm+mfcc+cmvn+mono+delta+lda_mllt+sat+fmllr
hub4_spanish/s5 | Spanish, ASR | lm+mfcc+cmvn+mono+delta+delta+delta+lda_mllt+sat+fmllr
iam | handwriting data, image recognition (OCR) | ----
iban | Iban (a language of Malaysia), ASR | lm+mfcc+cmvn+mono+delta+lmrescore+delta+lmrescore+lda_mllt+lmrescore+sat+fmllr+ubm+sgmm+lmrescore (its distinctive feature is that every decode is followed by lmrescore)
ifnenit | handwriting data, image recognition (OCR) | ----
librispeech/s5 | English, ASR | lm+mfcc+cmvn+mono+deltas+lmrescore+lda_mllt+lmrescore+sat+fmllr+tdnn (fairly complete, apart from having no data augmentation)
lre/v1 | ----, language identification | mfcc+vad+ubm+vtln+ivector
lre07/v1 | ----, language identification | v1: vtln+mfcc+ubm+ivector; v2: vtln+mfcc+ubm+ivector_dnn+dnn
madcat_ar, madcat_zh | handwriting data, image/text recognition (OCR) | ----
malach/s5 | MALACH data, ASR | mfcc+cmvn+lda_mllt+sat+fmllr+tdnn
mandarin_bn_bc/s5 | LDC, ASR | lm+mfcc+pitch+cmvn+mono+delta+lda_mllt+sat+fmllr+tdnn+tdnn_lstm
material/s5 | Swahili, ASR | lm+mfcc+cmvn+mono+delta+lda_mllt+sat+fmllr+LM modification
mgb2_arabic/s5 | MGB-2 corpus, ASR | lm+mfcc+cmvn+mono+delta+delta+lda_mllt+sat+fmllr+dnn
mgb5/s5 | MGB-5 corpus, ASR | lm+mfcc+cmvn+mono+delta+delta+lda_mllt+sat+fmllr+sgmm+tdnn
mini_librispeech/s5 | OpenSLR-31, ASR | lm+mfcc+cmvn+mono+delta+lda_mllt+sat+fmllr+lmrescore+tdnn
mobvoi/v1 | data provided by Mobvoi, ASR | data augmentation+mfcc+cmvn+tdnn
mobvoihotwords/v1 | (omitted), ASR | data augmentation+mfcc+cmvn+fmllr+tdnn
multi_cn/s5 | Chinese (OpenSLR), ASR | lm+mfcc+pitch+cmvn+mono+delta+delta+lda_mllt+sat+fmllr+cnn_tdnn
multi_en/s5 | English, ASR | lm+mfcc+cmvn+mono+delta+delta+delta+lda_mllt+fmllr+sat
ptb/s5 | Penn Treebank corpus, LM modeling | ----
reverb/s5 | ----, ASR with reverberation | mfcc+cmvn+mono+delta+lda_mllt+sat+fmllr+tdnn
rimes/v1 | French handwriting, image/text recognition (OCR) | ----
rm/s5 | ASR (the example used in Dan's slides to walk through the ASR pipeline) | mfcc+plp+cmvn+mono+delta+lda_mllt+denlats+mmi+mpe+sat+fmllr+ubm+mmi_fmmi+sgmm2+tdnn+tdnn_online_cmn
sitw | SITW data, speaker recognition in real-world conditions | v1: mfcc+vad+ubm+ivector+data augmentation+lda+plda; v2: mfcc+vad+data augmentation+xvector+lda+plda
snips/v1 | wake word, ASR | mfcc+cmvn+data augmentation+mfcc+cmvn+mono+fmllr+tdnn
spanish_dimex100/s5 | Mexican Spanish, ASR | mfcc+cmvn+mono+delta+lda_mllt+denlats+mmi
sprakbanken/s5 | Danish, ASR | mfcc+cmvn+irstlm+mono+delta+delta+lda_mllt+sat+fmllr+tdnn_lstm
sprakbanken_swe/s5 | Swedish, ASR | mfcc+cmvn+irstlm+mono+delta+delta+lda_mllt+sat+fmllr+local/sprak_run_nnet_cpu.sh
sre08/v1 | LDC2011S05, speaker recognition | mfcc+vad+ubm+ivector+lda+plda
sre10 | NIST SRE 2010, speaker recognition | v1: mfcc+vad+ubm+ivector+plda; v2: mfcc+vad+ubm+ivector_dnn+plda
sre16 | NIST SRE 2016 enroll, speaker recognition | v1: mfcc+vad+ubm+ivector+data augmentation+mfcc+ivector+plda; v2: mfcc+vad+data augmentation+mfcc+cmvn+xvector+plda
svhn/v1 | Street View House Numbers, image recognition | ----
swahili/s5 | Swahili speech corpus, ASR | mfcc+cmvn+mono+delta+lda_mllt+sat+fmllr+denlats+mmi+ubm+mmi_fmmi+ubm+sgmm+denlats_sgmm+mmi_sgmm
swbd | Switchboard corpus, Fisher corpus, ASR | s5: mfcc+cmvn+mono+delta+delta+lda_mllt+fmllr+sgmm+sat+fmllr+denlats+mmi+ubm+mmi_fmmi; s5b: mfcc+cmvn+mono+delta+delta+lda_mllt+fmllr+sat+fmllr+denlats+mmi+ubm+mmi_fmmi; s5c: mfcc+cmvn+mono+delta+delta+lda_mllt+fmllr+lmrescore+mmi+ubm+mmi_fmmi+lmrescore
tedlium | ----, ASR | s5: mfcc+cmvn+mono+delta+lda_mllt+sat+fmllr+denlats+mmi+dnn; s5_r2: mfcc+cmvn+mono+delta+lmrescore+lda_mllt+sat+fmllr+tdnn; s5_r2_wsj: mfcc+cmvn+mono+delta+lda_mllt+sat+fmllr; s5_r3: mfcc+cmvn+mono+delta+lda_mllt+sat+fmllr+tdnn
thchs30/s5 | Chinese, ASR | mfcc+cmvn+lm+mono+delta+lda_mllt+sat+fmllr+quick+dnn
tidigits/s5 | LDC93S10, English digit recognition | mfcc+cmvn+mono+delta
timit/s5 | LDC93S1, ASR | mfcc+cmvn+mono+delta+lda_mllt+sat+fmllr+ubm+sgmm+mmi_sgmm+dnn
tunisian_msa/s5 | Tunisian corpus, ASR | mfcc+cmvn+mono+lda_mllt+sat+fmllr+tdnn
uw3/v1 | ----, image recognition | ----
voxceleb | VoxCeleb1 and VoxCeleb2 corpora, speaker recognition | v1: mfcc+vad+ubm+ivector+lda+plda; v2: mfcc+vad+data augmentation+cmvn+xvector+lda+plda
voxforge/s5 | free speech data available from VoxForge, ASR | mfcc+cmvn+mono+delta+delta+lda_mllt+denlats+mmi+mpe+sat+fmllr+ubm+mmi_fmmi+sgmm
vystadial_cz | Czech, ASR | s5: mfcc+cmvn+mono+delta+delta+lda_mllt+denlats+mmi; s5b: mfcc+cmvn+mono+delta+lda_mllt+sat+fmllr+tdnn
vystadial_en/s5 | English, ASR | mfcc+cmvn+mono+delta+delta+lda_mllt+denlats+mmi+mpe
wsj/s5 | Wall Street Journal data, ASR | mfcc+cmvn+mono+delta+lmrescore+lda_mllt+lmrescore+sat+fmllr+tdnn
yesno/s5 | yesno data, ASR | mfcc+cmvn+mono
yomdle_fa, yomdle_korean, yomdle_russian, yomdle_tamil, yomdle_zh | OCR data, image recognition | ----
zeroth_korean/s5 | Korean, ASR | mfcc+cmvn+mono+delta+lmrescore+lda_mllt+sat+fmllr+rebuild LM+lmrescore+fmllr+sat+tdnn
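
All of these recipes are driven the same way from their top-level run.sh. A minimal sketch follows, using mini_librispeech/s5 purely as an example and assuming a Kaldi source checkout at ./kaldi; the corpus-path variables that need editing differ from recipe to recipe:

# Minimal sketch of launching a stock egs recipe.
cd kaldi/egs/mini_librispeech/s5

# cmd.sh chooses the job scheduler (run.pl for a single machine, queue.pl or
# slurm.pl for a cluster); path.sh puts the Kaldi binaries on the PATH.
. ./cmd.sh
. ./path.sh

# run.sh chains the stages listed in the table above: data preparation, LM,
# feature extraction, GMM-HMM training, and finally the neural-network stage.
./run.sh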

 

Glossary:

LM: language model
MFCC: Mel-frequency cepstral coefficient features
CMVN: cepstral mean and variance normalization
Mono: monophone model training
Triphone: triphone model training; typically tri1: deltas, tri2: delta + delta-delta, tri3a: LDA + MLLT
GMM: Gaussian mixture model
HMM: hidden Markov model
SGMM: subspace Gaussian mixture model, which effectively reduces the number of GMM parameters
GMM-HMM: MFCC + Mono + Triphone (see the sketch after this list)
MLLT: maximum likelihood linear transform
CMLLR/fMLLR: constrained / feature-space maximum likelihood linear regression, for robustness to speaker variation
SAT: speaker-adaptive training
VTLN: vocal tract length normalization, used mainly in speech recognition to remove the difference in vocal tract length between male and female speakers. HTK ships source code for it and the HTK Book describes it; it warps the center frequencies of the Mel filter bank.
LDA: linear discriminant analysis
PLDA: probabilistic linear discriminant analysis
CE: cross-entropy, the frame-level training criterion (usually the default)
MMI/BMMI: (boosted) maximum mutual information, sentence-level discriminative training; steps/train_mmi.sh
MPE: minimum phone error, discriminative training at the phone level; steps/train_mpe.sh
sMBR: state-level minimum Bayes risk (minimizes the expected state-level error)
lattice: word lattice, used by lmrescore
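
To make the Mono/Triphone/SAT chain above concrete, here is a minimal sketch of the GMM-HMM stages as they appear in a typical s5 run.sh. It assumes cmd.sh and path.sh have been sourced; the data/train, data/lang and exp/* directory names follow the usual recipe convention, and the leaf/Gaussian counts are illustrative rather than prescribed.

# Monophone model, then alignments for the next stage.
steps/train_mono.sh --nj 10 --cmd "$train_cmd" data/train data/lang exp/mono
steps/align_si.sh --nj 10 --cmd "$train_cmd" data/train data/lang exp/mono exp/mono_ali

# tri1: triphones on delta + delta-delta features.
steps/train_deltas.sh --cmd "$train_cmd" 2000 10000 data/train data/lang exp/mono_ali exp/tri1
steps/align_si.sh --nj 10 --cmd "$train_cmd" data/train data/lang exp/tri1 exp/tri1_ali

# tri2: LDA + MLLT feature transform.
steps/train_lda_mllt.sh --cmd "$train_cmd" 2500 15000 data/train data/lang exp/tri1_ali exp/tri2
steps/align_si.sh --nj 10 --cmd "$train_cmd" data/train data/lang exp/tri2 exp/tri2_ali

# tri3: speaker-adaptive training (SAT) on fMLLR features, followed by the
# fMLLR alignments that the later TDNN/chain stage builds on.
steps/train_sat.sh --cmd "$train_cmd" 2500 15000 data/train data/lang exp/tri2_ali exp/tri3
steps/align_fmllr.sh --nj 10 --cmd "$train_cmd" data/train data/lang exp/tri3 exp/tri3_ali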

 

Script Explanations:

 

Script name | Purpose
utils/subset_data_dir.sh | Split off a subset of the data, used to build a small initial model that is then grown step by step
steps/train_mono.sh | Monophone model training
steps/align.sh, steps/align_si.sh, steps/align_fmllr.sh | Forced alignment
steps/train_sat.sh | Speaker-adaptive training, usually followed by fMLLR; the first SAT stage is preceded by SI or fMLLR alignment, and SAT is usually run for two rounds
steps/get_prons.sh | Estimate pronunciation and silence probabilities from the training data and rebuild the lang directory; see fisher_swbd/s5 for an example
steps/make_plp_pitch.sh | Extract PLP and pitch features
steps/make_plp.sh | Extract PLP features
utils/fix_data_dir.sh | Clean up and fix a data directory
steps/make_fbank.sh | Extract fbank features, usually combined with local/nnet/run_dnn_fbank.sh
steps/make_mfcc.sh | Extract MFCC features (lossy compared with fbank); see the sketch after this table
steps/compute_cmvn_stats.sh | CMVN; compute cepstral mean/variance statistics, used in speech recognition
local/train_irstlm.sh | A wrapper around the IRSTLM toolkit for building the LM
local/nnet3/xvector/prepare_feats.sh | CMVN (cepstral normalization), used in speaker recognition
steps/align_fmllr.sh | fMLLR alignment
steps/train_mmi.sh | MMI (sentence-level discriminative) training
steps/train_mpe.sh | MPE (minimum phone error) discriminative training
sid/train_diag_ubm.sh, sid/train_full_ubm.sh, steps/train_ubm.sh | UBM training
steps/train_sgmm2.sh, steps/align_sgmm2.sh, steps/make_denlats_sgmm2.sh | SGMM training
sid/compute_vad_decision_gmm.sh | Compute energy-based VAD output
sid/compute_vad_decision.sh | Select voiced segments based on frame energy
local/run_lmrescore.sh | Rescore the LM (lattices) with an RNNLM
local/run_wpe.sh, local/run_beamformit.sh | Microphone-array processing for data augmentation; the code is in chime5/s5b/run.sh, which also contains noise and reverberation augmentation code
steps/data/reverberate_data_dir.py, steps/data/augment_data_dir.py | Add noise and reverberation, used for data augmentation
chime6/s5_track2/local/train_diarizer.sh | Train the x-vector DNN
local/vtln.sh | Remove male/female vocal tract length differences
local/chain/run_tdnnf.sh, local/chain/run_tdnn.sh | TDNN training scripts; compared with a plain TDNN, a TDNN-F adds a low-dimensional bottleneck layer between every two layers
local/nnet3/run_tdnn.sh | nnet3 TDNN
local/chain/run_tdnn_1g.sh | Similar to tdnn_1f with some tweaks; see fisher_callhome_spanish/s5 for an example
steps/train_deltas.sh | Usually used for tri1, sometimes also tri2 or tri3
steps/train_lda_mllt.sh | LDA+MLLT; usually tri2 or tri3 (also named tri2b/tri3b, depending on preference)
steps/train_quick.sh | Train a model on top of existing features (without any kind of feature-space learning)
local/run_sgmm2.sh | SGMM training
local/nnet/run_dnn.sh | DNN training
local/online/run_nnet2_ms.sh | ----
local/csj_run_rnnlm.sh | Train the RNNLM used for rescoring (Japanese, CSJ)
diarization/vad_to_segments.sh | Apply VAD to the audio and turn the decisions into segments
diarization/score_plda.sh, diarization/cluster.sh | PLDA scoring and clustering on the scores to merge segments from the same speaker; typically used when speaker identities are unknown
local/nnet3/xvector/prepare_feats_for_egs.sh, local/nnet3/xvector/run_xvector.sh, sid/nnet3/xvector/extract_xvectors.sh | CMVN and x-vector extraction
ivector-mean, ivector-compute-lda, ivector-compute-plda | LDA and PLDA training
ivector-plda-scoring | PLDA scoring
sid/train_diag_ubm.sh, sid/train_full_ubm.sh, sid/train_ivector_extractor.sh | The usual i-vector extraction pipeline; see fame/v1 for an example
sid/init_full_ubm_from_dnn.sh, sid/train_ivector_extractor_dnn.sh, sid/extract_ivectors_dnn.sh | Extract i-vector features with a DNN; see fame/v2 for an example
copy-feats | Inspect ark files; commonly used when merging feature files
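
As referenced in the table, here is a minimal sketch of how the feature-related scripts are usually chained, assuming the conventional data/exp directory layout; the 10k-utterance subset size is illustrative:

# Extract MFCCs, compute per-speaker CMVN statistics, and repair any
# inconsistencies in the data directory afterwards.
steps/make_mfcc.sh --nj 20 --cmd "$train_cmd" data/train exp/make_mfcc/train mfcc
steps/compute_cmvn_stats.sh data/train exp/make_mfcc/train mfcc
utils/fix_data_dir.sh data/train

# Optionally carve out a small subset (shortest utterances) to bootstrap the
# first monophone model, as many recipes do.
utils/subset_data_dir.sh --shortest data/train 10000 data/train_10kshort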

 

Summary

Speech recognition (ASR)

Data augmentation: adding noise, adding music, adding reverberation, speed perturbation, SpecAugment
Feature extraction: MFCC, pitch, CMVN, fbank, UBM
ASR training: mono + triphone + TDNN, where the triphone stage varies (deltas, LDA, MLLT, fMLLR, SGMM, etc.) and the TDNN can be swapped for other networks
Training criteria: CE, MMI/BMMI, MPE, sMBR
LM: train with a smaller LM first, then rescore with an RNNLM at decode time (mainly to save time); you can of course decode directly with the full LM, it is just slower (see the sketch after this list)
ASR: the data is usually split into train (training set), dev (development set) and test (test set), and hyper-parameters are tuned on the dev results. The training set is also often split into several subsets, gradually adding data and model parameters as training progresses.
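
A sketch of the "decode with a small LM, then rescore" idea mentioned above, in the style of the librispeech-type recipes; the graph, lang and decode directory names here are illustrative:

# First pass: decode with the small (pruned) trigram LM.
steps/decode_fmllr.sh --nj 10 --cmd "$decode_cmd" exp/tri3/graph_tgsmall data/test exp/tri3/decode_test_tgsmall

# Rescore the resulting lattices with the larger LM compiled as a const-arpa.
steps/lmrescore_const_arpa.sh --cmd "$decode_cmd" data/lang_test_tgsmall data/lang_test_tglarge \
  data/test exp/tri3/decode_test_tgsmall exp/tri3/decode_test_tglarge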

Speaker recognition

If there are no segments, run a VAD step first to remove silence
Feature extraction: i-vector, x-vector
Training: UBM, LDA/PLDA, clustering (see the sketch after this list)
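
A minimal sketch of the classic UBM + i-vector + PLDA pipeline above, in the style of sre08/v1; the UBM size and the directory names are illustrative:

# Energy-based VAD to drop silent frames.
sid/compute_vad_decision.sh --nj 20 --cmd "$train_cmd" data/train exp/make_vad vad

# Train the diagonal and full-covariance UBMs, then the i-vector extractor.
sid/train_diag_ubm.sh --nj 20 --cmd "$train_cmd" data/train 2048 exp/diag_ubm_2048
sid/train_full_ubm.sh --nj 20 --cmd "$train_cmd" data/train exp/diag_ubm_2048 exp/full_ubm_2048
sid/train_ivector_extractor.sh --cmd "$train_cmd" exp/full_ubm_2048/final.ubm data/train exp/extractor

# Extract i-vectors and train a PLDA back-end on them.
sid/extract_ivectors.sh --nj 20 --cmd "$train_cmd" exp/extractor data/train exp/ivectors_train
ivector-compute-plda ark:data/train/spk2utt \
  "ark:ivector-normalize-length scp:exp/ivectors_train/ivector.scp ark:- |" \
  exp/ivectors_train/plda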
