Releases · simonw/llm
0.9a1
0.9a0
0.8.1
0.8
- The output format for `llm logs` has changed. Previously it was JSON - it's now a much more readable Markdown format suitable for pasting into other documents. #160
- The new `llm logs --json` option can be used to get the old JSON format.
- Pass `llm logs --conversation ID` or `--cid ID` to see the full logs for a specific conversation.
- You can now combine piped input and a prompt in a single command: `cat script.py | llm 'explain this code'`. This works even for models that do not support system prompts. #153
- Additional OpenAI-compatible models can now be configured with custom HTTP headers. This enables platforms such as openrouter.ai to be used with LLM, which can provide Claude access even without an Anthropic API key.
- Keys set in `keys.json` are now used in preference to environment variables. #158
- The documentation now includes a plugin directory listing all available plugins for LLM. #173
- New related tools section in the documentation describing `ttok`, `strip-tags` and `symbex`. #111
- The `llm models`, `llm aliases` and `llm templates` commands now default to running the same command as `llm models list`, `llm aliases list` and `llm templates list`. #167
- New `llm keys` (aka `llm keys list`) command for listing the names of all configured keys. #174
- Two new Python API functions, `llm.set_alias(alias, model_id)` and `llm.remove_alias(alias)`, can be used to configure aliases from within Python code (see the sketch after this list). #154
- LLM is now compatible with both Pydantic 1 and Pydantic 2. This means you can install `llm` as a Python dependency in a project that depends on Pydantic 1 without running into dependency conflicts. Thanks, Chris Mungall. #147
- `llm.get_model(model_id)` is now documented as raising `llm.UnknownModelError` if the requested model does not exist. #155
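A minimal sketch of the new alias functions and the documented `llm.UnknownModelError` behavior, using only the names from the notes above; the `turbo` alias is illustrative:

```python
import llm

# Configure an alias from Python, equivalent to `llm aliases set turbo ...`
llm.set_alias("turbo", "gpt-3.5-turbo-16k")

try:
    # Aliases resolve through get_model() just like real model IDs
    model = llm.get_model("turbo")
    print(model.model_id)
except llm.UnknownModelError:
    # Documented behavior when the requested model does not exist
    print("No such model")

# Remove the alias again
llm.remove_alias("turbo")
```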
0.7.1
0.7
The new Model aliases commands can be used to configure additional aliases for models, for example:

    llm aliases set turbo gpt-3.5-turbo-16k

Now you can run the 16,000 token `gpt-3.5-turbo-16k` model like this:

    llm -m turbo 'An epic Greek-style saga about a cheesecake that builds a SQL database from scratch'

Use `llm aliases list` to see a list of aliases and `llm aliases remove turbo` to remove one again. #151
Notable new plugins
- llm-mlc can run local models released by the MLC project, including models that can take advantage of the GPU on Apple Silicon M1/M2 devices.
- llm-llama-cpp uses llama.cpp to run models published in the GGML format. See Run Llama 2 on your own Mac using LLM and Homebrew for more details.
Also in this release
- OpenAI models now have min and max validation on their floating point options. Thanks, Pavel Král. #115
- Fix for bug where `llm templates list` raised an error if a template had an empty prompt. Thanks, Sherwin Daganato. #132
- Fixed bug in `llm install --editable` option which prevented installation of `.[test]`. #136
- New `llm install --no-cache-dir` and `--force-reinstall` options. #146
0.6.1
- LLM can now be installed directly from Homebrew core: `brew install llm`. #124
- Python API documentation now covers System prompts.
- Fixed incorrect example in the Prompt templates documentation. Thanks, Jorge Cabello. #125
0.6
- Models hosted on Replicate can now be accessed using the llm-replicate plugin, including the new Llama 2 model from Meta AI. More details here: Accessing Llama 2 from the command-line with the llm-replicate plugin.
- Model providers that expose an API that is compatible with the OpenAI API format, including self-hosted model servers such as LocalAI, can now be accessed using additional configuration for the default OpenAI plugin. #106
- OpenAI models that are not yet supported by LLM can also be configured using the new `extra-openai-models.yaml` configuration file (see the sketch after this list). #107
- The `llm logs` command now accepts a `-m model_id` option to filter logs to a specific model. Aliases can be used here in addition to model IDs. #108
- Logs now have a SQLite full-text search index against their prompts and responses, and the `llm logs -q SEARCH` option can be used to return logs that match a search term. #109
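A minimal sketch of what `extra-openai-models.yaml` entries can look like, assuming the `model_id`, `model_name` and `api_base` field names from the LLM documentation for OpenAI-compatible models; the model names and LocalAI URL are illustrative:

```yaml
# Entries for models the bundled OpenAI plugin doesn't know about yet.
# Field names assumed from the LLM docs; values here are illustrative.
- model_id: gpt-3.5-turbo-0613

# A self-hosted OpenAI-compatible server such as LocalAI:
- model_id: localai-chat
  model_name: gpt-3.5-turbo
  api_base: "http://localhost:8080"
```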
0.5
LLM now supports additional language models, thanks to a new plugins mechanism for installing additional models.
Plugins are available for 19 models in addition to the default OpenAI ones:
- llm-gpt4all adds support for 17 models that can be downloaded and run on your own device, including Vicuna, Falcon and WizardLM.
- llm-mpt30b adds support for the MPT-30B model, a 19GB download.
- llm-palm adds support for Google's PaLM 2 via the Google API.
A comprehensive tutorial, Writing a plugin to support a new model, describes in detail how to add support for new models by building plugins.
New features
- Python API documentation for using LLM models, including models from plugins, directly from Python (see the sketch after this list). #75
- Messages are now logged to the database by default - no need to run the `llm init-db` command any more, which has been removed. Instead, you can toggle this behavior off using `llm logs off` or turn it on again using `llm logs on`. The `llm logs status` command shows the current status of the log database. If logging is turned off, passing `--log` to the `llm prompt` command will cause that prompt to be logged anyway. #98
- New database schema for logged messages, with `conversations` and `responses` tables. If you have previously used the old `logs` table it will continue to exist but will no longer be written to. #91
- New `-o/--option name value` syntax for setting options for models, such as temperature. Available options differ for different models. #63
- New `llm models list --options` command for viewing all available model options. #82
- New `llm "prompt" --save template` option for saving a prompt directly to a template. #55
- Prompt templates can now specify default values for parameters. Thanks, Chris Mungall. #57
- New `llm openai models` command to list all available OpenAI models from their API. #70
- New `llm models default MODEL_ID` command to set a different model as the default to be used when `llm` is run without the `-m/--model` option. #31
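A minimal sketch of the Python API described above; `llm.get_model()`, `model.key` and `response.text()` match the documented API, while passing an option such as `temperature` as a keyword argument mirrors the CLI's `-o temperature 0.5` syntax, and the model ID is illustrative:

```python
import llm

# Look up a model by ID or alias - plugin models resolve the same way
model = llm.get_model("gpt-3.5-turbo")
model.key = "sk-..."  # placeholder; or configure once with `llm keys set openai`

# Options map to the CLI's -o/--option syntax, e.g. -o temperature 0.5
response = model.prompt(
    "Five surprising names for a pet pelican",
    temperature=0.5,
)
print(response.text())
```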
Smaller improvements
- `llm -s` is now a shortcut for `llm --system`. #69
- New `llm -m 4-32k` alias for `gpt-4-32k`.
- New `llm install -e directory` command for installing a plugin from a local directory.
- The `LLM_USER_PATH` environment variable now controls the location of the directory in which LLM stores its data. This replaces the old `LLM_KEYS_PATH`, `LLM_LOG_PATH` and `LLM_TEMPLATES_PATH` variables (see the sketch after this list). #76
- Documentation covering Utility functions for plugins.
- Documentation site now uses Plausible for analytics. #79
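A minimal sketch tying `LLM_USER_PATH` to the new log schema: it opens the `logs.db` file in the data directory (run `llm logs path` to confirm the real location) and counts rows in the `responses` table. The `~/.llm` fallback is an assumption for illustration only; the real default is platform-specific:

```python
import os
import sqlite3
from pathlib import Path

# LLM_USER_PATH overrides where LLM stores its data.
# The ~/.llm fallback is illustrative - `llm logs path` prints the real location.
user_dir = Path(os.environ.get("LLM_USER_PATH", Path.home() / ".llm"))
db_path = user_dir / "logs.db"

conn = sqlite3.connect(str(db_path))
# The 0.5 schema writes logged prompts to `conversations` and `responses`
count = conn.execute("SELECT count(*) FROM responses").fetchone()[0]
print(f"{count} logged responses in {db_path}")
```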
0.4.1
- LLM can now be installed using Homebrew: `brew install simonw/llm/llm`. #50
- `llm` is now styled LLM in the documentation. #45
- Examples in documentation now include a copy button. #43
- `llm templates` command no longer has its display disrupted by newlines. #42
- `llm templates` command now includes system prompt, if set. #44