
Releases: simonw/llm

0.9a1

04 Sep 01:28
Pre-release

0.9a0

02 Sep 17:54
Pre-release
  • Alpha release of embeddings support. #185

0.8.1

01 Sep 03:35
  • Fixed bug where first prompt would show an error if the io.datasette.llm directory had not yet been created. #193
  • Updated documentation to recommend a different llm-gpt4all model since the one we were using is no longer available. #195

0.8

21 Aug 06:55
  • The output format for llm logs has changed. Previously it was JSON - it's now a much more readable Markdown format suitable for pasting into other documents. #160
    • The new llm logs --json option can be used to get the old JSON format.
    • Pass llm logs --conversation ID or --cid ID to see the full logs for a specific conversation.
  • You can now combine piped input and a prompt in a single command: cat script.py | llm 'explain this code'. This works even for models that do not support system prompts. #153
  • Additional OpenAI-compatible models can now be configured with custom HTTP headers. This enables platforms such as openrouter.ai to be used with LLM, which can provide Claude access even without an Anthropic API key.
  • Keys set in keys.json are now used in preference to environment variables. #158
  • The documentation now includes a plugin directory listing all available plugins for LLM. #173
  • New related tools section in the documentation describing ttok, strip-tags and symbex. #111
  • The llm models, llm aliases and llm templates commands now default to the behavior of llm models list, llm aliases list and llm templates list respectively. #167
  • New llm keys (aka llm keys list) command for listing the names of all configured keys. #174
  • Two new Python API functions, llm.set_alias(alias, model_id) and llm.remove_alias(alias) can be used to configure aliases from within Python code. #154
  • LLM is now compatible with both Pydantic 1 and Pydantic 2. This means you can install llm as a Python dependency in a project that depends on Pydantic 1 without running into dependency conflicts. Thanks, Chris Mungall. #147
  • llm.get_model(model_id) is now documented as raising llm.UnknownModelError if the requested model does not exist. #155
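Taken together, the logging and piping changes above can be exercised like this (the CONVERSATION_ID variable stands in for a real conversation ID from your own logs):

```shell
# The old JSON output is still available behind a flag:
llm logs --json

# Show the full logs for one conversation:
llm logs --conversation "$CONVERSATION_ID"

# Combine piped input with a prompt in one command:
cat script.py | llm 'explain this code'
```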

0.7.1

19 Aug 21:08
  • Fixed a bug where some users would see an AlterError: No such column: log.id error when attempting to use this tool, after upgrading to the latest sqlite-utils 3.35 release. #162

0.7

12 Aug 16:41

The new model aliases commands can be used to configure additional aliases for models, for example:

llm aliases set turbo gpt-3.5-turbo-16k

Now you can run the 16,000 token gpt-3.5-turbo-16k model like this:

llm -m turbo 'An epic Greek-style saga about a cheesecake that builds a SQL database from scratch'

Use llm aliases list to see a list of aliases and llm aliases remove turbo to remove one again. #151

Notable new plugins

Also in this release

  • OpenAI models now have min and max validation on their floating point options. Thanks, Pavel Král. #115
  • Fix for bug where llm templates list raised an error if a template had an empty prompt. Thanks, Sherwin Daganato. #132
  • Fixed bug in llm install --editable option which prevented installation of .[test]. #136
  • llm install --no-cache-dir and --force-reinstall options. #146
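The new install flags pass straight through to pip, which is handy when a plugin needs a clean reinstall. A sketch (the plugin name here is illustrative):

```shell
# Reinstall a plugin, bypassing pip's wheel cache:
llm install --no-cache-dir --force-reinstall llm-gpt4all
```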

0.6.1

24 Jul 15:55
  • LLM can now be installed directly from Homebrew core: brew install llm. #124
  • Python API documentation now covers System prompts.
  • Fixed incorrect example in the Prompt templates documentation. Thanks, Jorge Cabello. #125

0.6

18 Jul 21:38
  • Models hosted on Replicate can now be accessed using the llm-replicate plugin, including the new Llama 2 model from Meta AI. More details here: Accessing Llama 2 from the command-line with the llm-replicate plugin.
  • Model providers that expose an API compatible with the OpenAI API format, including self-hosted model servers such as LocalAI, can now be accessed using additional configuration for the default OpenAI plugin. #106
  • OpenAI models that are not yet supported by LLM can also be configured using the new extra-openai-models.yaml configuration file. #107
  • The llm logs command now accepts a -m model_id option to filter logs to a specific model. Aliases can be used here in addition to model IDs. #108
  • Logs now have a SQLite full-text search index against their prompts and responses, and the llm logs -q SEARCH option can be used to return logs that match a search term. #109
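As a sketch, an entry in that configuration file for a self-hosted OpenAI-compatible server might look like the following; the field names shown are assumptions based on the behavior described above, so check the documentation for the authoritative schema:

```yaml
# Hypothetical extra-openai-models.yaml entry for a LocalAI server
- model_id: localai-llama
  model_name: llama-2-7b
  api_base: "http://localhost:8080"
```

Once registered, the model should be usable via llm -m localai-llama just like any other model.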

0.5

12 Jul 14:22

LLM now supports additional language models, thanks to a new plugins mechanism for installing additional models.

Plugins are available for 19 models in addition to the default OpenAI ones:

  • llm-gpt4all adds support for 17 models that can download and run on your own device, including Vicuna, Falcon and WizardLM.
  • llm-mpt30b adds support for the MPT-30B model, a 19GB download.
  • llm-palm adds support for Google's PaLM 2 via the Google API.

A comprehensive tutorial, Writing a plugin to support a new model, describes in detail how to add new models by building plugins.

New features

  • Python API documentation for using LLM models, including models from plugins, directly from Python. #75
  • Messages are now logged to the database by default - no need to run the llm init-db command any more, which has been removed. Instead, you can toggle this behavior off using llm logs off or turn it on again using llm logs on. The llm logs status command shows the current status of the log database. If logging is turned off, passing --log to the llm prompt command will cause that prompt to be logged anyway. #98
  • New database schema for logged messages, with conversations and responses tables. If you have previously used the old logs table it will continue to exist but will no longer be written to. #91
  • New -o/--option name value syntax for setting options for models, such as temperature. Available options differ for different models. #63
  • llm models list --options command for viewing all available model options. #82
  • llm "prompt" --save template option for saving a prompt directly to a template. #55
  • Prompt templates can now specify default values for parameters. Thanks, Chris Mungall. #57
  • llm openai models command to list all available OpenAI models from their API. #70
  • llm models default MODEL_ID to set a different model as the default to be used when llm is run without the -m/--model option. #31
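A few of the new features above in one place; the prompts, option values and template name are illustrative:

```shell
# Set a model option per-invocation with -o/--option:
llm 'Ten pun names for a coffee shop' -o temperature 0.5

# See which options each model supports:
llm models list --options

# Save a prompt directly to a template:
llm 'Summarize this text' --save summarize

# Change the default model used when -m is omitted:
llm models default gpt-4
```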

Smaller improvements

  • llm -s is now a shortcut for llm --system. #69
  • llm -m 4-32k alias for gpt-4-32k.
  • llm install -e directory command for installing a plugin from a local directory.
  • The LLM_USER_PATH environment variable now controls the location of the directory in which LLM stores its data. This replaces the old LLM_KEYS_PATH and LLM_LOG_PATH and LLM_TEMPLATES_PATH variables. #76
  • Documentation covering Utility functions for plugins.
  • Documentation site now uses Plausible for analytics. #79
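The LLM_USER_PATH change can be tried like this (the directory path is a placeholder):

```shell
# Store keys, logs and templates under a single custom directory:
export LLM_USER_PATH=/path/to/llm-data
llm 'Hello'   # keys.json, logs.db and templates/ are read from and written to that path
```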

0.4.1

17 Jun 21:40
  • LLM can now be installed using Homebrew: brew install simonw/llm/llm. #50
  • llm is now styled LLM in the documentation. #45
  • Examples in documentation now include a copy button. #43
  • llm templates command no longer has its display disrupted by newlines. #42
  • llm templates command now includes system prompt, if set. #44