Monolingual subword-based ASR model for Indonesian

Basic info

This model is built upon the Conformer architecture and trained using the CTC (Connectionist Temporal Classification) approach. The training data consist of 1 hour of Indonesian speech randomly selected from a 20-hour Indonesian dataset sourced from the publicly available Common Voice 11.0 corpus.
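
The 1-hour subset is drawn at random from the 20-hour pool. The snippet below is only a sketch of such a draw: the utt2dur file ("utt-id duration-in-seconds", Kaldi-style) and the output list name are illustrative assumptions, not part of this recipe.

```bash
# Sketch only: random 1-hour draw over utterance durations; file names are assumptions.
shuf utt2dur | awk '{ total += $2; if (total <= 3600) print $1 }' > subset_1h.list
wc -l subset_1h.list   # number of utterances that fit into ~1 hour
```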

Training process

The script run.sh implements the overall model training process.
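
All of the commands below select a range of run.sh stages with the --sta/--sto flags. The reading below is inferred from those commands rather than from run.sh itself, so treat it as an assumption.

```bash
# Assumed meaning of the stage flags, based on the commands in this page (not run.sh docs):
# --sta <N> = first stage to run, --sto <M> = last stage to run.
# Stage numbering follows the sections below: 1-3 training, 4 CTC decoding, 5-7 FST decoding.
bash run.sh id exp/Monolingual/id/Mono._subword_1h --sta 1 --sto 3   # e.g. run only the training stages
```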

Stage 0: Data preparation

  • Follow the steps in data_prep.md and run data_prep.sh to prepare the dataset and word list for a given language. The second and fourth stages of data_prep.sh involve language-specific special processing, which is detailed in lang_process.md.
  • The model parameters are specified in config.json and hyper-p.json. Dataset paths should be added to metainfo.json for efficient management of datasets; a sketch of such an entry follows this list.
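
A minimal illustration of a metainfo.json entry is given below. The key names and paths are assumptions made for this sketch, so check the actual file's schema before copying it.

```bash
# Illustrative only: the keys and paths below are assumptions, not the real metainfo.json schema.
cat > metainfo.json <<'EOF'
{
  "id": {
    "train": "data/id/train_1h",
    "dev":   "data/id/dev",
    "test":  "data/id/test_id"
  }
}
EOF
```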

Stage 1 to 3: Model training

  • The training of this model utilized 1 NVIDIA GeForce RTX 3090 GPU and took 10 hours.

    • # of parameters (million): 89.98
    • GPU info
      • NVIDIA GeForce RTX 3090
      • # of GPUs: 1
  • To train the model:

      `bash run.sh id exp/Monolingual/id/Mono._subword_1h --sta 1 --sto 3`
    
  • To plot the training curves:

      `python utils/plot_tb.py exp/Monolingual/id/Mono._subword_1h/log/tensorboard/file -o exp/Monolingual/id/Mono._subword_1h/monitor.png`
    
Monitor figure
[tb-plot: training curves, saved as monitor.png]

Stage 4: CTC decoding

  • To decode with CTC and calculate the %WER (a quick check of the reported counts follows below):

      `bash run.sh id exp/Monolingual/id/Mono._subword_1h --sta 4 --sto 4`
    
    %WER
    test_id %SER 100.00 | %WER 96.62 [ 20952 / 21685, 0 ins, 18067 del, 2885 sub ]
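
The bracketed counts fully determine the reported %WER: errors (insertions + deletions + substitutions) divided by the number of reference words. A quick check with the numbers above:

```bash
# %WER = 100 * (ins + del + sub) / reference words
awk 'BEGIN { printf "%.2f\n", 100 * (0 + 18067 + 2885) / 21685 }'   # -> 96.62
```

Note that deletions account for most of the errors for this 1-hour model.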
    
    

Stage 5 to 7: FST decoding

  • For FST decoding, a separate config.json and hyper-p.json are needed to train the language model. Note the distinction between the profiles for training the ASR model and the profiles for training the language model: they have the same names but live in different directories.

  • To decode with FST and calculate the %WER:

      `bash run.sh id exp/Monolingual/id/Mono._subword_1h --mode subword --sta 5`
    
    %WER
    test_id_ac1.0_lm0.5_wip0.0.hyp  %SER 100.00 | %WER 96.42 [ 20908 / 21685, 0 ins, 18067 del, 2841 sub ]
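
The decode output name appears to encode the decoding hyperparameters. This reading of the ac/lm/wip fields is an assumption based on the naming pattern, not something this README states.

```bash
# Assumed meaning of the suffix: ac = acoustic scale, lm = LM weight, wip = word insertion penalty.
f=test_id_ac1.0_lm0.5_wip0.0.hyp
echo "$f" | sed -E 's/.*_ac([0-9.]+)_lm([0-9.]+)_wip([0-9.]+)\.hyp/acoustic=\1 lm=\2 wip=\3/'
# -> acoustic=1.0 lm=0.5 wip=0.0
```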
    
    
    

Resources