Docker error "Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory" #729
Comments
@cod3r0k, hello. Can you try again with:
I think you should use an nvidia/cuda image with version 12. Recently, the ctranslate2 lib was upgraded to 4.0 to support CUDA 12.
Getting the same error even after installing libcudnn8. I'm using CUDA 12.2.
@souvikqb, what version of ctranslate2 did you use? For CUDA 12.2, it should be ctranslate2 version 4.0.
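(A quick way to check this inside the container; the upgrade pin below is illustrative, not from the thread:)

```bash
# Print the ctranslate2 version that faster-whisper pulled in
python3 -c "import ctranslate2; print(ctranslate2.__version__)"

# On a CUDA 12 image, a 3.x build will not find the CUDA 12 libraries;
# upgrading to 4.x is the fix suggested above (illustrative pin)
pip install --upgrade "ctranslate2>=4.0"
```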
I'm not explicitly installing ctranslate2; I'm using the version pulled in by faster-whisper.
I'm still facing this issue; it isn't resolved yet. It works fine outside Docker, but inside Docker it doesn't. Can you please share the Dockerfile you used to run it successfully? @trungkienbkhn
@souvikqb Hmm, what version of faster-whisper did you use? (0.10.1 or the latest version, 1.0.1)
I'm using the following in requirements.txt:
But I'm still getting the same error on CUDA 12.2.
@cod3r0k, ok. My CUDA version on the host machine is 12.0. Therefore I used the image:
infer.py:

```python
import time

from faster_whisper import WhisperModel

jfk_path = 'jfk.flac'
model_path = 'tiny'

tic = time.time()
model = WhisperModel(model_path, device='cuda')
segments, info = model.transcribe(jfk_path, word_timestamps=True)
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
print("Total time: ", time.time() - tic)
```

Running:
Hope it's helpful for you.
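(The Dockerfile itself was asked for above but isn't preserved in this thread; a minimal sketch consistent with the image and script described here, with all package choices being assumptions, might look like:)

```dockerfile
# Hypothetical reconstruction, not the author's exact Dockerfile
FROM nvidia/cuda:12.0.0-runtime-ubuntu20.04

# cuDNN is not bundled with this image flavor; the CUDA apt repo
# configured in the base image provides libcudnn8
RUN apt-get update && \
    apt-get install -y python3 python3-pip libcudnn8 && \
    rm -rf /var/lib/apt/lists/*

# faster-whisper pulls in a matching ctranslate2 4.x build
RUN pip3 install faster-whisper

WORKDIR /app
COPY infer.py jfk.flac /app/
CMD ["python3", "infer.py"]
```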
@souvikqb, could you run:
Besides, I found another solution here. You can try it.
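(The link isn't preserved here; one widely cited workaround, also documented in the faster-whisper README, is to install NVIDIA's pip wheels and point the loader at them:)

```bash
# Install cuBLAS and cuDNN as pip wheels instead of system packages
pip install nvidia-cublas-cu12 nvidia-cudnn-cu12

# Make the wheel-provided shared libraries visible to the dynamic loader
export LD_LIBRARY_PATH="$(python3 -c 'import os, nvidia.cublas.lib, nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))')"
```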
Sure, let me try.
I don't understand why there are issues with CUDA and cuDNN in
@cod3r0k, what issue are you encountering? I replaced the devel image with the runtime image (nvidia/cuda:12.0.0-runtime-ubuntu20.04) and it still works as expected.
This issue was resolved by installing cuDNN from NVIDIA's apt repository:

```bash
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb
dpkg -i cuda-keyring_1.0-1_all.deb
apt update && apt upgrade
apt install libcudnn8 libcudnn8-dev
```
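(Not from the thread, but a standard way to confirm the library is now visible to the loader:)

```bash
# List cuDNN entries in the dynamic linker cache;
# libcudnn_ops_infer.so.8 should appear after the install above
ldconfig -p | grep libcudnn
```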
Thank you! It works for me!
It worked for me.
I will just remove it for now as I will eventually get a better and newer Ampere-series GPU. Thanks for all of the help though.
This issue was later referenced by downstream commits, e.g.: "fix: Force ctranslate to version 4.4.0 due to libcudnn_ops_infer.so.8: SYSTRAN/faster-whisper#729" (Co-authored-by: Icaro Bombonato <[email protected]>)
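(Applying the same pin in a pip-based setup would look like this; the exact version comes from the commit message above:)

```bash
# ctranslate2 4.4.0 is the last release built against cuDNN 8;
# later releases expect cuDNN 9 and hit this error on cuDNN 8 systems
pip install "ctranslate2==4.4.0"
```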
Original issue:

Hi. I have a container which has CUDA and cuDNN (nvidia/cuda:11.8.0-base-ubuntu22.04). But when I want to use it in inference mode, I have a problem:

```
Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory
```
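(A likely explanation, consistent with the apt fix above: the -base flavor of the nvidia/cuda images ships only the minimal CUDA runtime and no cuDNN, while the cudnn8 flavors include it. A quick comparison, with tag names taken from Docker Hub's nvidia/cuda repository:)

```bash
# -base flavor: no cuDNN, so the grep prints nothing
docker run --rm nvidia/cuda:11.8.0-base-ubuntu22.04 \
    bash -c "ldconfig -p | grep libcudnn"

# -cudnn8-runtime flavor: libcudnn_ops_infer.so.8 and friends are present
docker run --rm nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04 \
    bash -c "ldconfig -p | grep libcudnn"
```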