Error when starting: "llama_init_from_file: failed to load model" #18
Comments
Same error here.
I should add that I'm on an Intel Mac. The gpt4all chat's
I'm running on Linux. I was also able to run gpt4all-lora-quantized-Linux-x86.
I also tried to run this code and got the same error, which points to an invalid file:

```python
from pyllamacpp.model import Model

def new_text_callback(text: str):
    # Callback body was cut off in the original comment; printing the
    # streamed text is a reasonable stand-in.
    print(text, end="")

# The error is raised here, while loading the model file.
model = Model(ggml_model='./models/gpt4all-lora-quantized-ggml.bin', n_ctx=512)
```
Yes, you are right. The model provided by Nomic AI is not compatible with Windows. I'll ask them to provide a working version soon. In the meantime you can download the model and convert it yourself, as sketched below. Thanks for your understanding.
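A minimal sketch of the manual conversion on Linux/macOS, assuming the model sits under ./models/ and you have a llama.cpp checkout next to it (the migration script is the one referenced later in this thread; your paths may differ):

```bash
# Keep the original download as a backup, then rewrite it in the newer format.
mv models/gpt4all-lora-quantized-ggml.bin models/gpt4all-lora-quantized-ggml.bin.original
python llama.cpp/migrate-ggml-2023-03-30-pr613.py \
  models/gpt4all-lora-quantized-ggml.bin.original \
  models/gpt4all-lora-quantized-ggml.bin
```

Keeping the .original backup means the migration can simply be re-run if it fails partway.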
A new version of the installer fixes this issue. |
I am using Ubuntu. The same error occurs: `./models/gpt4all-lora-quantized-ggml.bin: invalid model file (bad magic [got 0x67676d66 want 0x67676a74]) ... llama_init_from_file: failed to load model`
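Those two numbers are just four ASCII bytes: 0x67676d66 spells "ggmf" (the older format the downloaded model is in), while 0x67676a74 spells "ggjt" (the newer format current llama.cpp builds expect). You can check which one a file carries with a one-liner (default model path assumed; the magic is stored as a little-endian word, so read it as one 32-bit value):

```bash
# Print the leading 4 bytes as a single 32-bit word (little-endian hosts).
# 67676d66 ("ggmf") -> still needs conversion; 67676a74 ("ggjt") -> new format.
od -A n -t x4 -N 4 models/gpt4all-lora-quantized-ggml.bin
```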
Windows 10 Home, Intel, NVIDIA laptop, all from a recent GitHub clone. I chose to download the model using the browser; the download was successful, but I still get the bad magic error.
macOS Ventura, Intel, same error. `system_info: n_threads = 8 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |`
For people using Windows, try running install.bat. The script downloads the right model, does the conversion, and everything else.
Windows 10: conversion seems to fix this. Run `gpt4all-ui/convert.cmd`, then just do the
I fixed the error by just adding these lines to run.sh:

```bash
# Fetch llama.cpp if it is not present yet (the clone step is implied by the
# if-guard; repository URL assumed).
if [ ! -d "tmp/llama.cpp" ]; then
  git clone https://github.com/ggerganov/llama.cpp.git tmp/llama.cpp
fi

# Back up the downloaded model, then migrate it to the new ggjt format.
mv models/gpt4all-lora-quantized-ggml.bin models/gpt4all-lora-quantized-ggml.bin.original
python tmp/llama.cpp/migrate-ggml-2023-03-30-pr613.py \
  models/gpt4all-lora-quantized-ggml.bin.original \
  models/gpt4all-lora-quantized-ggml.bin
```
Happy to create a new PR with this fix.
I'm getting an error when starting:
I have validated the md5sum matches:
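Worth noting for anyone else who lands here: a matching md5sum only shows the download is intact, not that the file is in the format this build expects. The check itself (assuming the default model location) is just:

```bash
md5sum models/gpt4all-lora-quantized-ggml.bin
```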