[Bug]: How to fix this error No module named 'llama_index.llms.llama_cpp' #12502
Comments
Hey @deecancode, great to see you diving back into the world of LlamaIndex! Running into a bit of trouble, I see? Let's see what we can do about that.

Based on the error message you provided, it seems the Python interpreter is unable to find the `llama_index.llms.llama_cpp` module. After reviewing the context provided, the issue appears to be related to the import statement `from llama_index.llms.llama_cpp import LlamaCPP`. To resolve it, you should also ensure that the underlying `llama_cpp` library is installed; when it is missing, LlamaIndex raises:

```python
raise ImportError(
    "Could not import llama_cpp library."
    "Please install llama_cpp with `pip install llama-cpp-python`."
    "See the full installation guide for GPU support at "
    "`https://github.com/abetlen/llama-cpp-python`"
)
```

Therefore, the steps to resolve the issue are as follows:

1. Install the underlying library: `pip install llama-cpp-python`.
2. Install the LlamaIndex integration package: `pip install llama-index-llms-llama-cpp`.
3. Restart your Python session so the newly installed packages are picked up.

If you follow these steps and ensure that the `llama_cpp` library and the integration package are correctly installed, the import error should be resolved.

Regarding your question about the changes made in the LlamaIndex library from version 0.10.25 to 0.10.26 that could potentially affect the import of the `llama_index.llms.llama_cpp` module, I wasn't able to find any specific changes within the repository that could have caused this issue. However, it's possible that there were changes made elsewhere that I missed. If the issue persists after following the steps above, please provide more information so I can further assist you. For more details, you can refer to the llama-index-legacy/llama_index/legacy/llms/llama_cpp.py file in the LlamaIndex repository.
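As a quick sanity check, here is a minimal sketch (assuming the llama-index 0.10.x package layout) that verifies both the underlying library and the integration are importable after installation:

```python
# Minimal check that both pieces resolve after:
#   pip install llama-cpp-python llama-index-llms-llama-cpp
import llama_cpp  # the underlying library, from llama-cpp-python
from llama_index.llms.llama_cpp import LlamaCPP  # the LlamaIndex integration

print("llama_cpp version:", llama_cpp.__version__)
```

If either import still fails, make sure pip installed into the interpreter you are actually running; `python -m pip install ...` avoids the most common environment mismatch.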
To fix the import error, install the integration package: `pip install llama-index-llms-llama-cpp`.

Tbh though, I would use Ollama, the setup is 1000x easier (similar install, `pip install llama-index-llms-ollama`).
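For reference, a short hypothetical sketch of the Ollama route (assuming an Ollama server is running locally and the model has been pulled with `ollama pull mixtral:8x7b`):

```python
# Sketch of the Ollama alternative suggested above; model name and
# timeout are illustrative assumptions.
from llama_index.llms.ollama import Ollama

llm = Ollama(model="mixtral:8x7b", request_timeout=120.0)
print(llm.complete("Say hello."))
```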
I have the same issue. I tried `CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir`, but it still doesn't work.
@sykuann That only installs llama-cpp itself; you still need to install the integration (which I've shown above).
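To tell which of the two packages is actually missing in a given environment, here is a small diagnostic sketch (standard library only):

```python
# Check each required module without fully importing it; prints which
# of the two packages (llama-cpp-python vs. llama-index-llms-llama-cpp)
# is absent from the current environment.
import importlib.util

for name in ("llama_cpp", "llama_index.llms.llama_cpp"):
    try:
        found = importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:  # parent package itself is not installed
        found = False
    print(f"{name}: {'installed' if found else 'MISSING'}")
```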
Bug Description
Hi all,
I am trying to use mixtral-8x7b with my own data, with no luck.
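(The original report includes no code; the following is a hypothetical sketch of the kind of setup being described, with the model path and parameters as illustrative assumptions. The import on the first line is what fails when the integration package is missing.)

```python
# Hypothetical repro: load a local Mixtral GGUF through the LlamaCPP
# integration. The import itself raises ModuleNotFoundError when
# llama-index-llms-llama-cpp is not installed.
from llama_index.llms.llama_cpp import LlamaCPP

llm = LlamaCPP(
    model_path="./mixtral-8x7b-instruct.Q4_K_M.gguf",  # assumed local model file
    temperature=0.1,
    max_new_tokens=256,
    context_window=3900,
    model_kwargs={"n_gpu_layers": 1},  # GPU offload, if built with GPU support
    verbose=True,
)
print(llm.complete("Hello"))
```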
Version
0.10.25
Steps to Reproduce
Relevant Logs/Tracebacks