How do I use convert-unversioned-ggml-to-ggml.py? #808
The same here. Both of these fail with a traceback:

```sh
python3 convert-unversioned-ggml-to-ggml.py models/ggml-alpaca-7b-q4.bin models/ggml-alpaca-7b-q4-new.bin
# Traceback (most recent call last):

python3 migrate-ggml-2023-03-30-pr613.py models/7B/ggml-alpaca-7b-q4.bin models/7B/ggml-alpaca-7b-q4-new.bin
# Traceback (most recent call last):
```

Also, maybe an already-converted model exists somewhere?
Same here: `ValueError: read length must be non-negative or -1`

+1
I found out that the problem was that you need the numpy module.
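If that's the issue, installing it should be enough (assuming a standard Python 3 setup with pip available):

```sh
# Install the numpy dependency the conversion scripts import
pip3 install numpy
```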
@Shreyas-ITB I had the same confusion as you, since the readme doesn't explain all the conversion steps, but the error response you received does identify the next step to follow. Run the suggested script, for example:
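Something along these lines, where the model paths are placeholders for your own files:

```sh
python3 migrate-ggml-2023-03-30-pr613.py models/gpt4all-7B/gpt4all-lora-unfiltered-quantized.bin models/gpt4all-7B/gpt4all-lora-unfiltered-quantized-new.bin
```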
Then you can use the converted model, for example:
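Presumably something like this with llama.cpp's main binary, reusing the placeholder output name from above:

```sh
./main -m models/gpt4all-7B/gpt4all-lora-unfiltered-quantized-new.bin -n 128 -p "Hello, how are you?"
```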
@Triptolemus I just ran your script, but it hit an error. Do you have any solution?
For me the problem was that I ran migrate-ggml-2023-03-30-pr613.py on an old model without running convert-unversioned-ggml-to-ggml.py on it first. Once I did that, everything worked fine.
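In other words, the order matters. A sketch of the two steps, using the alpaca file names and argument forms quoted below in this thread (adjust paths to your own model; the convert step appears to rewrite the file in place, leaving a .orig backup):

```sh
# 1. Stamp the old, unversioned ggml file with a version header
python3 convert-unversioned-ggml-to-ggml.py models/ ggml-alpaca-q4.bin
# 2. Migrate the now-versioned file to the post-PR-613 (ggjt) format
python3 migrate-ggml-2023-03-30-pr613.py models/ggml-alpaca-q4.bin models/ggml-alpaca-q4-new.bin
```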
@vinitran Sorry, my previous comment had the wrong model names in the suggested command, but I've edited it to correct that. To confirm: did you first run the convert-unversioned-ggml-to-ggml.py script, and then run the migrate-ggml-2023-03-30-pr613.py script on the model produced by the first script?
Yup |
python3 convert-unversioned-ggml-to-ggml.py /models ggml-alpaca-q4.bin

python3 convert-unversioned-ggml-to-ggml.py models/ ggml-alpaca-q4.bin
try the new

* fix repeated greeting
* remove separator between role and message
Hi, it told me to use the convert-unversioned-ggml-to-ggml.py file and gave me an error saying my gpt4all model is too old. So I converted the gpt4all-lora-unfiltered-quantized.bin file with the llama tokenizer, and it generated some kind of .orig file in the same directory where the model was. When I tried to run the miku.sh file, which had the latest generated file as the model, it gave me another error stating this:
```
main: seed = 1680783525
llama_model_load: loading model from './models/gpt4all-7B/gpt4all-lora-unfiltered-quantized.bin' - please wait ...
./models/gpt4all-7B/gpt4all-lora-unfiltered-quantized.bin: invalid model file (bad magic [got 0x67676d66 want 0x67676a74])
        you most likely need to regenerate your ggml files
        the benefit is you'll get 10-100x faster load times
        see https://github.com/ggerganov/llama.cpp/issues/91
        use convert-pth-to-ggml.py to regenerate from original pth
        use migrate-ggml-2023-03-30-pr613.py if you deleted originals
llama_init_from_file: failed to load model
main: error: failed to load model './models/gpt4all-7B/gpt4all-lora-unfiltered-quantized.bin'
```
How do I use the conversion? Did I do something wrong?
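For reference, the two magic values in that error are ASCII tags for ggml file formats, which you can decode with a quick one-liner (plain python3, no extra packages needed):

```sh
# Decode the 'got' and 'want' magics from the error message
python3 -c 'print((0x67676d66).to_bytes(4, "big").decode())'  # ggmf: the versioned format the file is in now
python3 -c 'print((0x67676a74).to_bytes(4, "big").decode())'  # ggjt: the format main expects after PR 613
```

So the file has already been through the unversioned-to-versioned conversion, and the remaining step is running migrate-ggml-2023-03-30-pr613.py on it, as the error message suggests.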