Hey, thanks a lot for your work here! Have you considered using batched faster-whisper? As per the docs:

> The batched version improves speed by up to 10-12x compared to the OpenAI implementation and 3-4x compared to the sequential faster-whisper version. It works by transcribing semantically meaningful audio chunks as batches, leading to faster inference.

I am down to create a pull request adding this. Let me know.
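For context, the mechanism the docs describe can be sketched in plain Python: the audio is first split at semantically meaningful boundaries (e.g. VAD-detected speech segments), and those chunks are then grouped into fixed-size batches so the model decodes many chunks per forward pass instead of one at a time. This is a toy illustration of the batching step only; the chunk tuples and helper below are stand-ins, not the faster-whisper API:

```python
def make_batches(chunks, batch_size):
    """Group transcription-ready audio chunks into fixed-size batches.

    Each batch would be decoded by the model in a single forward pass,
    which is where the speedup over sequential decoding comes from.
    """
    return [chunks[i:i + batch_size] for i in range(0, len(chunks), batch_size)]

# Toy stand-in: pretend each chunk is a (start_sec, end_sec) speech segment
# produced by VAD-style segmentation.
chunks = [(0.0, 4.2), (4.2, 9.1), (9.1, 12.8), (12.8, 17.5), (17.5, 20.0)]
batches = make_batches(chunks, batch_size=2)
# → 3 batches: two full batches of 2 chunks and one trailing batch of 1.
```

In faster-whisper itself this is exposed through `BatchedInferencePipeline`, which wraps an existing `WhisperModel` and accepts a `batch_size` argument to its `transcribe()` method.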
Yes, I have seen it! It's awesome. However, it is not officially released yet, as there are a few open issues (e.g. SYSTRAN/faster-whisper#940), so I'll wait for an official new release before adding this.