fix typo in default model path #1366
Conversation
examples/common.h (Outdated)

@@ -39,7 +39,7 @@ struct gpt_params {
     float mirostat_tau = 5.00f; // target entropy
     float mirostat_eta = 0.10f; // learning rate

-    std::string model = "models/lamma-7B/ggml-model.bin"; // model path
+    std::string model = "models/llama-7B/ggml-model.bin"; // model path
The readme mentions ./models/7B/ggml-model-q4_0.bin, and I think we should use this everywhere for unification.

Suggested change:
-std::string model = "models/llama-7B/ggml-model.bin"; // model path
+std::string model = "models/7B/ggml-model-q4_0.bin"; // model path
Can you also update this PR to remove params.model = "models/llama-7B/ggml-model.bin"; from the files you mention? It doesn't make sense to override the value if it is the same as the default, no?
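Roughly, the redundant pattern looks like this (hypothetical snippet with a cut-down stand-in for the real gpt_params, not a literal excerpt from any example program):

    #include <string>

    // Stand-in for the struct in examples/common.h: the default lives here, once.
    struct gpt_params {
        std::string model = "models/llama-7B/ggml-model.bin"; // model path
    };

    int main() {
        gpt_params params;
        // params.model = "models/llama-7B/ggml-model.bin";
        // ^ redundant override removed: it only repeats the default above
        //   and can silently drift out of sync with it.
        return 0;
    }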
Should we use models/7B or models/llama-7B as the canonical path? I used to have just 7B, but with all the new models supported by llama.cpp I'm now tending to prefer llama-7B for disambiguation.
I prefer to keep 7B because that's how the directory is named when you download the model.
Also, the instructions should work no matter what model is located in the 7B directory (Alpaca, Vicuna, etc.).
I've implemented your suggested changes. The default model path is now defined once in examples/common.h and used implicitly elsewhere, instead of repeating the string in several places.
Referenced in commit: …oadcasting for ggml_mul (#1483)

* Broadcasting for ggml_mul
* CUDA kernel for ggml_mul, norms in VRAM
* GPU weights not in RAM, direct loading with cuFile
* fixup! GPU weights not in RAM, direct loading with cuFile
* fixup! GPU weights not in RAM, direct loading with cuFile
* define default model path once, sync path with readme (#1366)
* ~7% faster Q5_1 AVX2 code (#1477)
* convert.py: Support models which are stored in a single pytorch_model.bin (#1469)
* Support models in a single pytorch_model.bin
* Remove spurious line with typo
* benchmark-matmul: Print the average of the test results (#1490)
* Remove unused n_parts parameter (#1509)
* Fixes #1511 lambda issue for w64devkit (mingw) (#1513)
* Fix for w64devkit and mingw
* make kv_f16 the default for api users (#1517)
* minor : fix compile warnings
* readme : adds WizardLM to the list of supported models (#1485)
* main : make reverse prompt option act as a stop token in non-interactive mode (#1032)
* Make reverse prompt option act as a stop token in non-interactive scenarios
* Making requested review changes
* Update gpt_params_parse and fix a merge error
* Revert "Update gpt_params_parse and fix a merge error"
  This reverts commit 2bb2ff1.
* Update gpt_params_parse and fix a merge error take 2
* examples : add persistent chat (#1495)
* examples : add persistent chat
* examples : fix whitespace
  ---------
  Co-authored-by: Georgi Gerganov <[email protected]>
* tests : add missing header
* ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508)
* ggml : use F16 instead of F32 in Q4_0, Q4_1 and Q8_0
* llama : bump LLAMA_FILE_VERSION to 3
* cuda : update Q4 and Q8 dequantize kernels
* ggml : fix AVX dot products
* readme : update performance table + hot topics
* ggml : fix scalar implementation of Q4_1 dot
* llama : fix compile warnings in llama_set_state_data()
* llama : fix name shadowing and C4146 (#1526)
* Fix name shadowing and C4146
* Fix if macros not using defined when required
* Update llama-util.h
  Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* Update llama-util.h
  Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* Code style
  Co-authored-by: Georgi Gerganov <[email protected]>
  ---------
  Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
  Co-authored-by: Georgi Gerganov <[email protected]>
* Fix for mingw (#1462)
* llama : add llama_init_backend() API (close #1527)
* feature : add blis and other BLAS implementation support (#1502)
* feature: add blis support
* feature: allow all BLA_VENDOR to be assigned in cmake arguments. align with whisper.cpp pr 927
* fix: version detection for BLA_SIZEOF_INTEGER, recover min version of cmake
* Fix typo in INTEGER
  Co-authored-by: Georgi Gerganov <[email protected]>
  ---------
  Co-authored-by: Georgi Gerganov <[email protected]>
* Revert "feature : add blis and other BLAS implementation support (#1502)"
  This reverts commit 07e9ace.
* GPU weights not in RAM, direct loading with cuFile
* llama : code style fixes + progress print fix
* ggml : ggml_mul better broadcast support
* cmake : workarounds for cufile when CMake version < 3.25
* gg rebase fixup
* Loop in llama.cpp, fixed progress callback
* Attempt clang-tidy fix
* llama : fix vram size computation
* Add forgotten fclose()
---------
Co-authored-by: András Salamon <[email protected]>
Co-authored-by: Ilya Kurdyukov <[email protected]>
Co-authored-by: Tom Jobbins <[email protected]>
Co-authored-by: rankaiyx <[email protected]>
Co-authored-by: Stephan Walter <[email protected]>
Co-authored-by: DannyDaemonic <[email protected]>
Co-authored-by: Erik Scholz <[email protected]>
Co-authored-by: Georgi Gerganov <[email protected]>
Co-authored-by: David Kennedy <[email protected]>
Co-authored-by: Jason McCartney <[email protected]>
Co-authored-by: Evan Jones <[email protected]>
Co-authored-by: Maxime <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Zenix <[email protected]>
Elsewhere in the code the default model path is models/llama-7B/ggml-model.bin; that string appears in several other places as well. This change makes ./main -h print the correct default path.

It might make sense to refactor the code to reference the default path only once; this change makes that refactoring easier to do.
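A rough sketch of that refactoring idea (simplified stand-in code, not the actual help-printing logic in the examples): generate the help text from a default-constructed gpt_params, so the path string is written only once and ./main -h can never disagree with the real default.

    #include <cstdio>
    #include <string>

    // Cut-down stand-in for the struct in examples/common.h.
    struct gpt_params {
        std::string model = "models/llama-7B/ggml-model.bin"; // model path, single source of truth
    };

    // Print the -m/--model help line using the struct's default value
    // instead of a separately hard-coded string.
    static void print_model_help() {
        gpt_params defaults; // default-constructed, so .model holds the default path
        std::fprintf(stderr, "  -m FNAME, --model FNAME  model path (default: %s)\n",
                     defaults.model.c_str());
    }

    int main() {
        print_model_help();
        return 0;
    }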