This script fixes two problems with the output of the Whisper model:
- Incorrect timestamps in the output.
- Output getting stuck in a loop due to long segments of silence in the audio.

To fix the first issue, I use a slightly modified version of stable-whisper by jianfch. For the second issue, I use pyannote audio to obtain the non-silent segments of the audio and feed those segments one after another into Whisper for transcription.
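As a rough illustration of the second fix, the sketch below runs pyannote's voice activity detection to find speech regions, transcribes each region with Whisper, and shifts the timestamps back into the timeline of the full file. This is a hedged sketch, not the exact code in "transcription.py"; the model names and the helper function are assumptions.

```python
# Illustrative sketch of the VAD -> Whisper flow; model choices are assumptions.
import whisper
from pyannote.audio import Pipeline

SAMPLE_RATE = 16000  # Whisper expects 16 kHz audio


def transcribe_non_silent(path, token, language="ja"):
    # Voice activity detection: find the speech regions of the file.
    vad = Pipeline.from_pretrained(
        "pyannote/voice-activity-detection", use_auth_token=token
    )
    speech_regions = vad(path).get_timeline().support()

    model = whisper.load_model("medium")  # model size is an assumption
    audio = whisper.load_audio(path)      # float32 waveform at 16 kHz

    results = []
    for region in speech_regions:
        # Slice out one speech region and transcribe it on its own,
        # so long stretches of silence never reach the model.
        chunk = audio[int(region.start * SAMPLE_RATE):int(region.end * SAMPLE_RATE)]
        out = model.transcribe(chunk, language=language)
        for seg in out["segments"]:
            # Shift timestamps back into the timeline of the full file.
            results.append((seg["start"] + region.start,
                            seg["end"] + region.start,
                            seg["text"]))
    return results
```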
- Install Whisper (e.g. pip install -U openai-whisper).
- Install pyannote audio (e.g. pip install pyannote.audio).
- Clone this repository.
- Go to VAD and accept the user conditions. (You will have to create a Hugging Face account if you don't already have one.)
- Go to tokens and generate an access token.
- Copy the generated token into a text file named "HuggingFaceToken.txt" and place that file in the main folder of this repository. (The sketch after this list shows how the token is read and used.)
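For reference, a token stored this way can be read and handed to pyannote like so; this is a minimal sketch, assuming the token file sits next to the script:

```python
# Minimal sketch: read the Hugging Face token and pass it to pyannote.
from pyannote.audio import Pipeline

with open("HuggingFaceToken.txt") as f:
    token = f.read().strip()

pipeline = Pipeline.from_pretrained(
    "pyannote/voice-activity-detection", use_auth_token=token
)
```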
Place audio files (.wav or .mp3) or video files (.mp4) in the folder named "to_process". The script will batch process all files contained in the folder.
Run "transcription.py" to obtain transcriptions for all the files in the folder "to_process".
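For illustration, gathering the batch might look like the following minimal sketch; the actual script may filter or order files differently:

```python
# Sketch: collect supported audio/video files from "to_process".
from pathlib import Path

SUPPORTED = {".wav", ".mp3", ".mp4"}
files = sorted(p for p in Path("to_process").iterdir()
               if p.suffix.lower() in SUPPORTED)
```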
Example commands for "transcription.py":
python transcription.py -l ja -v -srt
Transcribe files in Japanese, show the transcriptions while processing, and write a subtitle file with timestamps.

python transcription.py -l ja -task translate -txt -srt
Translate files from Japanese to English and write the translations to both a text file and a subtitle file with timestamps.

python transcription.py -h
Show the full list of options.
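If you are curious how flags like these are typically wired up, here is a hedged sketch of an argparse setup matching the commands above; the exact defaults and help strings in "transcription.py" may differ:

```python
# Illustrative argparse setup for the flags used above; defaults are guesses.
import argparse

parser = argparse.ArgumentParser(
    description="Batch-transcribe the files in 'to_process'."
)
parser.add_argument("-l", default=None,
                    help="language spoken in the audio, e.g. 'ja'")
parser.add_argument("-task", default="transcribe",
                    choices=["transcribe", "translate"],
                    help="transcribe in the source language or translate to English")
parser.add_argument("-v", action="store_true",
                    help="print transcriptions while processing")
parser.add_argument("-txt", action="store_true",
                    help="write a plain-text output file")
parser.add_argument("-srt", action="store_true",
                    help="write a subtitle file with timestamps")
args = parser.parse_args()
```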