
Commit

Update config and readme
DL committed Mar 29, 2024
1 parent 0f30b28 commit 219ab2e
Showing 3 changed files with 8 additions and 1 deletion.
4 changes: 3 additions & 1 deletion README.md
@@ -2,7 +2,7 @@

# pyLLMSearch - Advanced RAG, ready to use

- The purpose of this package is to offer a convenient question-answering (RAG) system with a simple YAML-based configuration that enables interaction with multiple collections of local documents. Special attention is given to improvements in various components of the system **in addition to basic LLM-based RAGs** - better document parsing, hybrid search, HyDE-enabled search, deep linking, re-ranking, the ability to customize embeddings, and more. The package is designed to work with custom Large Language Models (LLMs) – whether from OpenAI or installed locally.
+ The purpose of this package is to offer a convenient question-answering (RAG) system with a simple YAML-based configuration that enables interaction with multiple collections of local documents. Special attention is given to improvements in various components of the system **in addition to basic LLM-based RAGs** - better document parsing, hybrid search, HyDE-enabled search, chat history, deep linking, re-ranking, the ability to customize embeddings, and more. The package is designed to work with custom Large Language Models (LLMs) – whether from OpenAI or installed locally.

## Features

@@ -36,6 +36,8 @@ The purpose of this package is to offer a convenient question-answering (RAG) sy
* Support for multi-querying, inspired by `RAG Fusion` - https://towardsdatascience.com/forget-rag-the-future-is-rag-fusion-1147298d8ad1
* When multi-querying is turned on (either in the config or in the web app), the original query is replaced by 3 variants of the same query, which helps bridge gaps in terminology and "offer different angles or perspectives", as the article puts it.

+ * Supports optional chat history with question contextualization

* Allows interaction with embedded documents, internally supporting the following models and methods (including locally hosted):
* OpenAI models (ChatGPT 3.5/4 and Azure OpenAI).
* HuggingFace models.
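The features in this diff are driven by the package's YAML-based configuration. A minimal sketch of how the new options might be switched on — the `conversation_history_settings` fields mirror the defaults added in `src/llmsearch/config.py`, but the surrounding key names and the `enabled` flag for `multiquery` are assumptions, not confirmed by this diff:

```yaml
semantic_search:
  multiquery:
    enabled: true            # assumed flag; defaults come from MultiQuerySettings()
  conversation_history_settings:
    enabled: true
    max_history_length: 2    # past turns kept for question contextualization
    rewrite_query: true      # rewrite the question using the chat history
```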
2 changes: 2 additions & 0 deletions docs/index.rst
@@ -43,6 +43,8 @@ Features
* Support for multi-querying, inspired by `RAG Fusion` - https://towardsdatascience.com/forget-rag-the-future-is-rag-fusion-1147298d8ad1
* When multi-querying is turned on (either in the config or in the web app), the original query is replaced by 3 variants of the same query, which helps bridge gaps in terminology and "offer different angles or perspectives", as the article puts it.

+ * Supports optional chat history with question contextualization

* Allows interaction with embedded documents, internally supporting the following models and methods (including locally hosted):
* OpenAI models (ChatGPT 3.5/4 and Azure OpenAI).
* HuggingFace models.
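The `RAG Fusion` approach linked above merges the result lists retrieved for the original query and its variants; the cited article uses reciprocal rank fusion for this step. A minimal, generic sketch of that fusion step — illustrative only, and not necessarily how llmsearch implements it:

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document ids into one ranking.

    Each document scores the sum of 1 / (k + rank) over every list it
    appears in, so documents ranked highly by several query variants
    rise to the top. k=60 is the conventional smoothing constant.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Rankings returned for the original query and two generated variants.
rankings = [
    ["doc_a", "doc_b", "doc_c"],
    ["doc_b", "doc_a", "doc_d"],
    ["doc_b", "doc_c", "doc_a"],
]
print(reciprocal_rank_fusion(rankings))  # → ['doc_b', 'doc_a', 'doc_c', 'doc_d']
```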
3 changes: 3 additions & 0 deletions src/llmsearch/config.py
@@ -255,11 +255,14 @@ class SemanticSearchConfig(BaseModel):
"""Optional configuration for HyDE."""

multiquery: MultiQuerySettings = MultiQuerySettings()
"""Optional configuration for multi-query"""

conversation_history_settings: ConversrationHistorySettings = (
ConversrationHistorySettings(
enabled=False, max_history_length=2, rewrite_query=True
)
)
"""Conversation history"""


class LLMConfig(BaseModel):
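For context, a runnable sketch of how the settings model added above behaves. The class here is a hypothetical stand-in (the actual identifier in the diff is spelled `ConversrationHistorySettings`, and its real definition lives elsewhere in `llmsearch/config.py`); the field names and defaults are taken from the snippet:

```python
from pydantic import BaseModel

class ConversationHistorySettings(BaseModel):
    """Stand-in for the settings class referenced in the diff."""
    enabled: bool = False
    max_history_length: int = 2
    rewrite_query: bool = True

class SemanticSearchConfig(BaseModel):
    # History is off by default, matching the defaults in the commit.
    conversation_history_settings: ConversationHistorySettings = (
        ConversationHistorySettings(
            enabled=False, max_history_length=2, rewrite_query=True
        )
    )

config = SemanticSearchConfig()
print(config.conversation_history_settings.enabled)  # → False
print(config.conversation_history_settings.max_history_length)  # → 2
```

A YAML config that omits `conversation_history_settings` would therefore fall back to these defaults, with history disabled.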
