Releases: snexus/llm-search
v0.4.61
- Bug fix - use the full path rather than the file name when creating the mapping between a document name and its hash. Using file names alone prevented files with the same name in different folders (e.g. index.html) from being updated properly.
- WARNING - this fix requires re-indexing to update the hashes
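The fix above amounts to keying the hash mapping by each document's full path instead of its bare file name. A minimal sketch of the idea (illustrative only, not the project's actual implementation; the function name `document_hash_map` is hypothetical):

```python
import hashlib
from pathlib import Path


def document_hash_map(paths):
    """Map each document's full, resolved path to a hash of its contents.

    Keying by the full path keeps two files named e.g. index.html in
    different folders distinct, so each can be re-indexed independently.
    Keying by file name alone would make the second file overwrite the
    first entry. (Hypothetical sketch, not the library's real code.)
    """
    mapping = {}
    for p in paths:
        p = Path(p).resolve()
        digest = hashlib.sha256(p.read_bytes()).hexdigest()
        mapping[str(p)] = digest  # full path as key avoids name collisions
    return mapping
```

Because the keys change from file names to full paths, existing hash records no longer match, which is why the release notes require a one-off re-index.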
v0.4.6
- Feature - ability to update embeddings from the web app (thanks @Hisma for suggestions and testing)
v0.4.5
- Feature - ability to update embeddings with changed (or new) documents instead of reindexing from scratch. Using this feature requires reindexing the documents once with the new version.
v0.4.3
- Fix a bug in the web app - switching to a new config reloaded the old config instead.
v0.4.2
- Implement feature #58 - ability to switch between configs in the web app (thanks @Hisma for suggestions and testing)
v0.4.1
- Feature - chat history in the web app (thanks @Hisma for suggestions and help with testing)
- Documentation update
v0.3.3
- Add explicit CUDA options to fix GPU support for llama-cpp models
v0.3.2
- Fix a bug that prevented finding documents in Python 3.11 based virtual environments.
v0.3.1
- Add progress indicators for embeddings generation
- Add support for Azure OpenAI
- Switch the Google Colab notebook demo to use GGUF instead of GGML (which is retired)