
[FEATURE REQUEST] Add Batched faster-whisper #18

Closed
NickNaskida opened this issue Sep 8, 2024 · 3 comments
Comments

@NickNaskida

Hey, thanks a lot for your work here! Have you considered using Batched faster-whisper?

As per docs:

The batched version improves speed up to 10-12x compared to the OpenAI implementation and 3-4x compared to the sequential faster_whisper version. It works by transcribing semantically meaningful audio chunks as batches, leading to faster inference.

I am down to create a pull request adding this.
Let me know.
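For context, a minimal sketch of what the batched usage could look like, assuming the `BatchedInferencePipeline` interface described in the faster-whisper README; the model size, audio path, and `batch_size` below are placeholder values, not part of this project:

```python
from faster_whisper import WhisperModel, BatchedInferencePipeline

# Load the base model as usual (model size, device, and compute type are placeholders).
model = WhisperModel("large-v3", device="cuda", compute_type="float16")

# Wrap the model in the batched pipeline, which transcribes semantically
# meaningful audio chunks in batches instead of one segment at a time.
batched_model = BatchedInferencePipeline(model=model)

# batch_size controls how many chunks are decoded per forward pass.
segments, info = batched_model.transcribe("audio.mp3", batch_size=16)

for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```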

@thomasmol
Owner

Yes, I have seen it! It's awesome. However, it has not been officially released yet and there are still a few open issues, e.g. SYSTRAN/faster-whisper#940, so I'll wait for an official release before adding this.

@NickNaskida
Author

Cool, I'll deploy my own until then.

@NickNaskida
Author

NickNaskida commented Sep 10, 2024

If anyone needs it, it is here: https://replicate.com/nicknaskida/whisper-diarization

Closing this issue for now.
