junfanz1/LLM-Documentation-Chatbot
Documentation Chatbot 🤖

A sophisticated chatbot application that helps users navigate and understand LangChain documentation through an interactive, user-friendly interface.


1. Project Purpose

This project aims to create a Documentation Helper Bot that uses Large Language Model (LLM) technology to help users quickly retrieve and understand document content. Users ask questions, and the bot retrieves relevant information from a specified document library and generates easy-to-understand answers. The example input is the LangChain documentation, but the approach scales to any document collection in your personal or work environment.

2. Input and Output

  • Input: Natural language questions asked by users.
  • Output: Relevant information retrieved from documents, concise answers generated by LLM, and source links to related documents.

3. LLM Technology Stack

  • LangChain: Framework for building LLM applications, including document loading, text splitting, vector storage, retrieval, and question-answering chains.
  • OpenAI: Used for text embeddings (text-embedding-3-small) and chat models (ChatOpenAI).
  • Pinecone: Vector database for storing document embeddings.
  • Streamlit: Used to build a user-friendly web interface.
  • FireCrawlLoader: Used to crawl data from websites.
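As a rough illustration of how these components connect (the index name, embedding model, and environment-variable handling are assumptions, not taken from the repository):

```python
def build_vectorstore():
    # Imports are deferred so this sketch can be read (and imported)
    # without the langchain packages installed.
    from langchain_openai import OpenAIEmbeddings
    from langchain_pinecone import PineconeVectorStore

    # OPENAI_API_KEY and PINECONE_API_KEY are read from the environment
    # by the respective clients; "langchain-doc-index" is a placeholder name.
    embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
    return PineconeVectorStore(index_name="langchain-doc-index", embedding=embeddings)
```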

4. Challenges and Difficulties

  • Document Loading and Splitting: Handling documents of different formats and structures to ensure text is correctly split into meaningful chunks.
  • Vector Database Indexing and Retrieval: Optimizing vector database performance to improve retrieval accuracy and speed.
  • LLM Response Quality: Ensuring that LLM-generated answers are accurate, concise, easy to understand, and provide reliable sources.
  • Website Crawling Efficiency and Accuracy: Crawling web pages efficiently while capturing only the main content.
  • User Interface Design: Designing an intuitive, easy-to-use interface that offers a good user experience.
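To make the chunking challenge concrete, here is a minimal, dependency-free sketch of fixed-size splitting with overlap; the RecursiveCharacterTextSplitter used in this project improves on this by preferring paragraph and sentence boundaries, and the chunk sizes here are illustrative only:

```python
def split_text(text, chunk_size=600, overlap=100):
    """Greedy fixed-size splitting with overlap between consecutive chunks."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks, start = [], 0
    step = chunk_size - overlap  # each chunk repeats the tail of the previous one
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks
```

The overlap keeps a sentence that straddles a chunk boundary retrievable from both neighboring chunks.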

5. Future Business Impact and Further Improvements

  • Improve Work Efficiency: Help users quickly obtain needed information, saving time and effort.
  • Enhance Knowledge Management: Build an internal corporate knowledge base to facilitate employee information retrieval and sharing.
  • Personalized Services: Provide personalized document assistant services based on user habits and preferences.
  • Multilingual Support: Support documents and question answering in multiple languages.
  • Multimodal Support: Support documents in multiple modalities such as images, audio, and video.

6. Target Audience and Benefits

  • Developers: Quickly find API documentation and code examples.
  • Students and Researchers: Retrieve academic papers and research materials.
  • Corporate Employees: Find internal company documents and knowledge bases.
  • General Users: Obtain knowledge and information in various fields.

7. Advantages and Disadvantages

  • Advantages:
    • Fast retrieval and answer generation.
    • Reliable, cited document sources.
    • User-friendly interface.
  • Disadvantages:
    • Relies on LLM performance and accuracy.
    • May not provide satisfactory answers for complex or in-depth questions.
    • Website crawling may fail when the target site's structure changes.

8. Tradeoffs

  • Accuracy vs. Speed: Improve response speed while maintaining retrieval accuracy.
  • Cost vs. Performance: Choose LLMs and vector databases that balance cost against performance.
  • User Experience vs. Functionality: Provide rich functionality while keeping the interface simple and easy to use.

9. Highlights and Summary

This project uses advanced technologies such as LangChain, OpenAI, and Pinecone to build an efficient Documentation Helper Bot. Through natural language question answering, users can quickly obtain relevant information from documents, improving work efficiency and knowledge acquisition capabilities.

10. Future Enhancements

  • Support more types of documents and data sources.
  • Optimize the quality and accuracy of LLM responses.
  • Enhance user interface interactivity and personalization.
  • Add user feedback and rating mechanisms.
  • Support multi-turn conversations and contextual understanding.

11. Prerequisites

  • Python 3.7+
  • OpenAI API key
  • Pinecone API key
  • Install required Python packages (see requirements.txt)

12. Setup

  1. Clone the repository:

    git clone <repository_url>
    cd <repository_directory>
  2. Install dependencies:

    pip install -r requirements.txt
  3. Set environment variables:

    Create a .env file and add OpenAI and Pinecone API keys:

    OPENAI_API_KEY=<your_openai_api_key>
    PINECONE_API_KEY=<your_pinecone_api_key>
    PINECONE_API_ENV=<your_pinecone_api_env>
    
  4. Run the ingest_docs2.py file, which crawls the web and stores the data in the Pinecone database.

    python ingest_docs2.py
  5. Run the Streamlit app:

    streamlit run app.py


13. Code Explanation

ingest_docs.py

  • Function: Loads documents from the specified document library (ReadTheDocs), splits the text, and stores the document embeddings in the Pinecone vector database.
  • Functions:
    • ingest_docs(): Loads documents, splits text, and stores the embedded chunks in the Pinecone database.
  • Code:
    • Use ReadTheDocsLoader to load documents.
    • Use RecursiveCharacterTextSplitter to split text.
    • Use OpenAIEmbeddings to generate text embeddings.
    • Use PineconeVectorStore to store document embeddings.
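A minimal sketch of what ingest_docs() may look like, based on the steps above (the docs path, index name, and chunk sizes are assumptions; imports are deferred into the function so the sketch can be defined without the packages installed):

```python
def ingest_docs(docs_path="langchain-docs/", index_name="langchain-doc-index"):
    from langchain_community.document_loaders import ReadTheDocsLoader
    from langchain_openai import OpenAIEmbeddings
    from langchain_pinecone import PineconeVectorStore
    from langchain_text_splitters import RecursiveCharacterTextSplitter

    raw_docs = ReadTheDocsLoader(docs_path).load()  # load scraped ReadTheDocs pages
    splitter = RecursiveCharacterTextSplitter(chunk_size=600, chunk_overlap=50)
    documents = splitter.split_documents(raw_docs)  # split into retrievable chunks
    embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
    # Embed each chunk and upsert it into the Pinecone index.
    PineconeVectorStore.from_documents(documents, embeddings, index_name=index_name)
```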

ingest_docs2.py

  • Function: Crawl data from the specified website, split the text, and store the document embeddings in the Pinecone vector database.
  • Functions:
    • ingest_docs2(): Crawls web pages, loads documents, splits text, and stores the embedded chunks in the Pinecone database.
  • Code:
    • Use FireCrawlLoader to crawl the web.
    • Use RecursiveCharacterTextSplitter to split text.
    • Use OpenAIEmbeddings to generate text embeddings.
    • Use PineconeVectorStore to store document embeddings.
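The crawling variant can be sketched the same way; the start URL, index name, and crawl mode below are assumptions for illustration:

```python
def ingest_docs2(url="https://python.langchain.com/", index_name="firecrawl-index"):
    from langchain_community.document_loaders import FireCrawlLoader
    from langchain_openai import OpenAIEmbeddings
    from langchain_pinecone import PineconeVectorStore
    from langchain_text_splitters import RecursiveCharacterTextSplitter

    # FIRECRAWL_API_KEY is read from the environment; mode="crawl" follows
    # links from the start URL, while mode="scrape" fetches a single page.
    raw_docs = FireCrawlLoader(url=url, mode="crawl").load()
    splitter = RecursiveCharacterTextSplitter(chunk_size=600, chunk_overlap=50)
    documents = splitter.split_documents(raw_docs)
    embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
    PineconeVectorStore.from_documents(documents, embeddings, index_name=index_name)
```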

core.py

  • Function: Define the LLM question-answering chain, process user queries, and return answers and document sources.
  • Functions:
    • run_llm(query, chat_history): Process user queries and return answers and document sources.
  • Code:
    • Use OpenAIEmbeddings to generate query embeddings.
    • Use PineconeVectorStore to retrieve relevant documents from the vector database.
    • Use ChatOpenAI and LangChain chains to generate answers.
    • Return answers and document sources.
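A sketch of run_llm() assembled from the pieces named above (the index name and the two LangChain Hub prompt names are assumptions; the chain functions are real LangChain APIs):

```python
def run_llm(query, chat_history=None):
    from langchain import hub
    from langchain.chains import create_history_aware_retriever, create_retrieval_chain
    from langchain.chains.combine_documents import create_stuff_documents_chain
    from langchain_openai import ChatOpenAI, OpenAIEmbeddings
    from langchain_pinecone import PineconeVectorStore

    vectorstore = PineconeVectorStore(
        index_name="langchain-doc-index",
        embedding=OpenAIEmbeddings(model="text-embedding-3-small"),
    )
    chat = ChatOpenAI(temperature=0)

    # Rephrase a follow-up question in light of the chat history, then retrieve.
    rephrase_prompt = hub.pull("langchain-ai/chat-langchain-rephrase")
    retriever = create_history_aware_retriever(chat, vectorstore.as_retriever(), rephrase_prompt)

    # Stuff the retrieved chunks into the answer prompt and generate.
    qa_prompt = hub.pull("langchain-ai/retrieval-qa-chat")
    qa_chain = create_retrieval_chain(retriever, create_stuff_documents_chain(chat, qa_prompt))

    result = qa_chain.invoke({"input": query, "chat_history": chat_history or []})
    # result contains "input", "context" (the source documents), and "answer".
    return result
```

A call such as run_llm("What is a text splitter?", []) would return a dict whose "answer" key holds the generated reply and whose "context" key holds the retrieved source documents.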

app.py

  • Function: Use Streamlit to build a user interface, receive user input, and display LLM-generated answers.
  • Code:
    • Use Streamlit's text_input to receive user queries.
    • Call core.run_llm to process queries.
    • Use Streamlit's write to display answers and document sources.
    • Use Streamlit's session_state to save chat history.
    • Use CSS styling to beautify the interface.
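The Streamlit layer described above can be sketched as follows (widget labels and the exact output layout are assumptions; `core.run_llm` follows the module layout described in this README):

```python
def main():
    import streamlit as st
    from core import run_llm  # assumed module layout

    st.header("LangChain Documentation Helper Bot")
    if "chat_history" not in st.session_state:
        st.session_state.chat_history = []  # persists across Streamlit reruns

    prompt = st.text_input("Prompt", placeholder="Ask a question about the docs...")
    if prompt:
        response = run_llm(prompt, st.session_state.chat_history)
        sources = sorted({doc.metadata["source"] for doc in response["context"]})
        st.session_state.chat_history.append(("human", prompt))
        st.session_state.chat_history.append(("ai", response["answer"]))
        st.write(response["answer"])
        st.write("Sources:")
        for source in sources:
            st.write(source)
```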

14. How It Applies to the Entire Project and Each Class/Function

  • ingest_docs2.py is responsible for crawling data from websites and storing the vectorized data in the Pinecone database, providing data support for the Documentation Helper Bot.
  • core.py is responsible for processing user queries, retrieving relevant documents from the vector database, and generating answers. It is the core logic of the Documentation Helper Bot.
  • app.py is responsible for building the user interface, receiving user input, and displaying LLM-generated answers. It is the user interaction part of the Documentation Helper Bot.

15. Detailed Explanation of Important Functions

  • core.run_llm(query, chat_history):
    • This function is the core of the LLM question-answering chain, responsible for processing user queries and generating answers.
    • It first uses OpenAIEmbeddings to generate query embeddings, and then uses PineconeVectorStore to retrieve relevant documents from the vector database.
    • Next, it uses LangChain's create_history_aware_retriever and create_retrieval_chain to build a question-answering chain, and uses ChatOpenAI to generate answers.
    • Finally, it returns answers and document sources.

16. Future Improvements

  • Optimize the indexing and retrieval performance of the vector database.
  • Improve the accuracy and relevance of LLM-generated answers.
  • Add user authentication and permission management.
  • Add more ways to visualize the data.
  • Improve the overall look and feel of the user interface.

17. Cursor

This project was developed and polished with the assistance of the Cursor editor.

README generation prompt:

According to this project and all the coding files you have, generate a Github Readme for me, including: (1) purpose of the project, (2) input and output, (3) LLM Technology Stack, (4) Challenges and Difficulties, (5) Future Business Impact and Further Improvements, (6) Target Audience and Benefits, (7) Advantages and Disadvantages, (8) Tradeoffs, (9) Highlight and Summary, (10) Future Enhancements, then for the functionality to run my project, provide (11) Prerequisites, (12) Setup, (13) Code Explanation for each file and each function, (14) How it works for the whole project and each class/function, (15) Any function you think is crucial for handling the project make it detailed elaboration, (16) Future Improvements, (17) Anything else you think is important to add in this readme. Finally, generate the readme in markdown format

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

Eden Marco, "LangChain: Develop LLM-Powered Applications with LangChain"
