
[Question]: replacing openai llms and lang chain functions in langchain setup by using ollama llms and llama_index or ollama functions #17684

Open
1 task done
bhanu1106 opened this issue Jan 31, 2025 · 4 comments
Labels
question Further information is requested

Comments

@bhanu1106

Question Validation

  • I have searched both the documentation and discord for an answer.

Question

Hi, I am using a LangChain setup. I need to replace the OpenAI API keys (i.e. the OpenAI LLM) with Ollama LLMs, and also replace the llm.bind_tools(tools) functionality, using llama_index or Ollama functions if needed.

def initialize_agent(entity_id: str, user_email: str, memory: ConversationBufferMemory):

    llm = ChatOpenAI(
        model="gpt-4-turbo",
        temperature=0,
        openai_api_key=OPENAI_API_KEY,
    )

    ticket_system = TicketSystem(entity_id, user_email)

    tools = [
        StructuredTool.from_function(
            name="create_ticket",
            func=ticket_system.create_ticket,
            description="""Create a new ticket after collecting all required information.

            Required parameters (must be collected through conversation):
            - title: Title of the ticket
            - message: Detailed description of the issue/request
            - category: Ticket category
            - category_id: Category identifier
            - requestor_name: Name of the requester
            - requested_by: Email of the requester
            - assigned_to: Email of the assignee
            - assignee_name: Name of the assignee
            - status: Initial status of the ticket (e.g., 'Assigned')
            - assigned_by: Email of the assigner
            - assigner_name: Name of the assigner
            - action: Initial action of the ticket (e.g., 'Opened')
            - sub_category: Sub-category of the ticket
            - sub_category_id: Sub-category identifier""",
        ),
        Tool(
            name="get_assigned_tickets",
            func=ticket_system._get_assigned_tickets,
            description="Use this tool to get information about assigned tickets. Optional params: page (default=1), limit (default=10), status."
        ),
        Tool(
            name="get_requested_tickets",
            func=ticket_system._get_requested_tickets,
            description="Use this tool to get information about requested tickets. Optional params: page (default=1), limit (default=10), status."
        ),
        Tool(
            name="get_ticket_details",
            func=ticket_system._get_ticket_details,
            description="Use this tool to view detailed information about a specific ticket ID. Optional params: ticket_id"
        )
    ]

    prompt = ChatPromptTemplate.from_messages([
        ("system", SYSTEM_PROMPT),
        MessagesPlaceholder(variable_name="chat_history"),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ])

    llm_with_tools = llm.bind_tools(tools)

    agent = (
        {
            "input": lambda x: x["input"],
            "agent_scratchpad": lambda x: format_to_openai_tool_messages(x["intermediate_steps"]),
            "chat_history": lambda x: memory.load_memory_variables({})["chat_history"],
        }
        | prompt
        | llm_with_tools
        | OpenAIToolsAgentOutputParser()
    )

    return AgentExecutor(
        agent=agent,
        tools=tools,
        verbose=False,
        memory=memory,
    )
@bhanu1106 bhanu1106 added the question Further information is requested label Jan 31, 2025

dosubot bot commented Jan 31, 2025

To replace OpenAI LLMs and LangChain functions with Ollama LLMs and LlamaIndex, follow these steps:

  1. Install the Ollama Integration:
    First, ensure you have the necessary package for Ollama integration:

    pip install llama-index-llms-ollama
  2. Modify Imports:
    Replace the OpenAI import with Ollama:

    from llama_index.llms.ollama import Ollama
  3. Initialize Ollama LLM:
    Replace the ChatOpenAI initialization with Ollama:

    llm = Ollama(model="mixtral:8x7b", request_timeout=120.0)
  4. Tool Binding:
    The llm.bind_tools(tools) functionality might not have a direct equivalent in Ollama or LlamaIndex. You may need to manually handle tool integration. Check the available methods in LlamaIndex for integrating custom tools, such as chat, stream_chat, achat, astream_chat, acomplete, and astream_complete [1].

  5. Adjust the Agent Setup:
    You might need to adjust the agent setup to accommodate the changes in LLM and tool integration. This could involve modifying how tools are used within the agent and ensuring compatibility with the new LLM setup.

By following these steps, you should be able to transition from OpenAI LLMs to Ollama LLMs within your LangChain setup.
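If you prefer to stay inside LangChain rather than moving to LlamaIndex, note that `langchain_ollama.ChatOllama` implements `bind_tools` for Ollama models that support tool calling, so the agent pipeline may work with only the LLM swapped. A minimal sketch of that swap (the model name `llama3.1` and the lazy import are assumptions, untested against this setup):

```python
# Sketch: swapping ChatOpenAI for ChatOllama while keeping bind_tools.
# Assumes a local Ollama server and a tool-calling-capable model such
# as "llama3.1" -- model name and temperature here are illustrative.

def build_ollama_llm(model: str = "llama3.1", temperature: float = 0):
    # Imported lazily so this sketch can be loaded even without the
    # langchain-ollama package installed.
    from langchain_ollama import ChatOllama
    return ChatOllama(model=model, temperature=temperature)

# Usage inside initialize_agent, replacing the ChatOpenAI block:
# llm = build_ollama_llm()
# llm_with_tools = llm.bind_tools(tools)  # same call as with ChatOpenAI
```

Whether `bind_tools` actually produces tool calls depends on the chosen Ollama model; models without native tool support will need the manual approach described above.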


@bhanu1106
Author

Can you give me the custom code for the bind_tools functionality? I can provide you extra code if needed.


dosubot bot commented Jan 31, 2025

The bind_tools functionality in LangChain is used to integrate tools with the language model. Since there isn't a direct equivalent in the Ollama or LlamaIndex setup, you can create a custom implementation to achieve similar functionality. Here's a basic example of how you might implement a custom bind_tools function:

class CustomLLMWithTools:
    def __init__(self, llm, tools):
        self.llm = llm
        self.tools = {tool.name: tool for tool in tools}

    def chat_with_tools(self, user_msg):
        # Process the user message and determine if a tool should be used
        response = self.llm.chat(user_msg)
        tool_calls = self.extract_tool_calls(response)

        # Execute tool calls and integrate results
        for tool_call in tool_calls:
            tool = self.tools.get(tool_call['tool_name'])
            if tool:
                tool_result = tool(**tool_call['tool_kwargs'])
                # Integrate tool result into the response
                response = self.integrate_tool_result(response, tool_result)

        return response

    def extract_tool_calls(self, response):
        # Custom logic to extract tool calls from the LLM response
        # This is a placeholder and should be implemented based on your needs
        return []

    def integrate_tool_result(self, response, tool_result):
        # Custom logic to integrate tool results into the LLM response
        # This is a placeholder and should be implemented based on your needs
        return response

# Example usage
llm = Ollama(model="mixtral:8x7b", request_timeout=120.0)
tools = [
    # Define your tools here
]
llm_with_tools = CustomLLMWithTools(llm, tools)
response = llm_with_tools.chat_with_tools("Hello!")

This example provides a basic structure for integrating tools with an LLM. You'll need to implement the extract_tool_calls and integrate_tool_result methods based on how you want to handle tool calls and integrate their results into the LLM's responses. Adjust the logic to fit your specific requirements and the capabilities of the tools you're using.
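As one concrete way to fill in the `extract_tool_calls` placeholder: many Ollama models can be prompted to emit tool calls as JSON objects, which a small parser can then pull out of the raw text. The JSON shape (`tool_name` / `tool_kwargs`) below mirrors the placeholder's dictionary keys but is an assumption, not something Ollama guarantees; adjust it to whatever format you prompt the model to produce.

```python
import json
import re

def extract_tool_calls(response_text: str) -> list:
    """Pull JSON tool-call objects out of raw model output.

    Assumes the model was prompted to answer with objects like
    {"tool_name": "...", "tool_kwargs": {...}}.
    """
    calls = []
    # Find {...} candidates in the text and try to parse each as JSON.
    for candidate in re.findall(r"\{.*\}", response_text, re.DOTALL):
        try:
            obj = json.loads(candidate)
        except json.JSONDecodeError:
            continue
        if isinstance(obj, dict) and "tool_name" in obj:
            calls.append({
                "tool_name": obj["tool_name"],
                "tool_kwargs": obj.get("tool_kwargs", {}),
            })
    return calls
```

Dropped into `CustomLLMWithTools.extract_tool_calls`, this would route a response such as `{"tool_name": "get_ticket_details", "tool_kwargs": {"ticket_id": "42"}}` to the matching tool; plain-text answers with no JSON fall through to an empty list.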


@bhanu1106
Author

Can you write that custom logic and integrate it into the code below:

import os
from typing import Optional, Dict, Any, Union
from langchain_openai import ChatOpenAI
from langchain_ollama import ChatOllama
from langchain.tools import Tool
from langchain.agents.format_scratchpad.openai_tools import format_to_openai_tool_messages
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.memory import ConversationBufferMemory
from langchain.agents import AgentExecutor
from requests_toolbelt.multipart.encoder import MultipartEncoder
import requests
from dotenv import load_dotenv
import json
from langchain.tools.base import StructuredTool
import warnings
warnings.filterwarnings('ignore')

# Load environment variables

load_dotenv()

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
assert OPENAI_API_KEY, "OPENAI_API_KEY environment variable not set."

# API Endpoints

BASE_URL = "https://abc-gcp-dev/inventory-task/v1"

class ApiEndpoints:
    CREATE_TICKET = f"{BASE_URL}/create_task"
    GET_ASSIGNED_TICKETS = f"{BASE_URL}/get_assigned_tickets"
    GET_REQUESTED_TICKETS = f"{BASE_URL}/get_requested_tickets"
    GET_TICKET_DETAILS = f"{BASE_URL}/get_ticket_details"

class ApiResponse:
    def __init__(self, status_code: int, data: Any):
        self.status_code = status_code
        self.data = data

    def is_success(self) -> bool:
        return 200 <= self.status_code < 300

    def to_dict(self) -> Dict[str, Any]:
        if isinstance(self.data, str):
            try:
                return json.loads(self.data)
            except json.JSONDecodeError:
                return {"message": self.data}
        return self.data if isinstance(self.data, dict) else {"data": self.data}

    def __str__(self) -> str:
        if self.is_success():
            return json.dumps(self.to_dict(), indent=2)
        return f"Error: {self.data}"

class TicketSystem:
    def __init__(self, entity_id: str, user_email: str):
        self.entity_id = entity_id
        self.user_email = user_email
        self.headers = {"Authorization": os.getenv("Authorization")}

    def make_request(self, method: str, url: str, headers=None, **kwargs) -> ApiResponse:
        try:
            if headers is None:
                headers = self.headers
            response = requests.request(method, url, headers=headers, **kwargs)
            return ApiResponse(response.status_code, response.json() if response.text else {})
        except requests.RequestException as e:
            return ApiResponse(500, str(e))

    def create_ticket(self, title, message=None, category=None, category_id=None,
                      requestor_name=None, assigned_to=None, assignee_name=None,
                      status="Assigned", assigned_by=None, assigner_name=None,
                      action="Opened", sub_category=None, sub_category_id=None,
                      files=None) -> str:
        # Check for required fields
        required_fields = {
            'title': title,
            'message': message,
            'category': category,
            'category_id': category_id,
            'requestor_name': requestor_name,
            'assigned_to': assigned_to,
            'assignee_name': assignee_name,
            'status': status,
            'assigned_by': assigned_by,
            'assigner_name': assigner_name,
            'action': action,
            'sub_category': sub_category,
            'sub_category_id': sub_category_id
        }

        missing_fields = [field for field, value in required_fields.items() if value is None]
        if missing_fields:
            raise ValueError(f"Missing required fields: {', '.join(missing_fields)}")

        # Parameters to send with the request
        params = {
            "title": title,
            "entity_id": self.entity_id,
            "requested_by": self.user_email,
            "requestor_name": requestor_name,
            "category": category,
            "category_id": category_id,
            "message": message,
            "status": status,
            "assigned_to": assigned_to,
            "assignee_name": assignee_name,
            "assigned_by": assigned_by or self.user_email,
            "assigner_name": assigner_name,
            "action": action,
            "sub_category": sub_category,
            "sub_category_id": sub_category_id
        }

        # Prepare fields for MultipartEncoder
        fields = params.copy()
        if files:
            for i, file_path in enumerate(files):
                fields[f"user_files[{i}]"] = (os.path.basename(file_path), open(file_path, "rb"))

        # Create MultipartEncoder
        multipart_data = MultipartEncoder(fields=fields)
        headers = {**self.headers, 'Content-Type': multipart_data.content_type}

        # Make the request
        response = self.make_request(
            "POST",
            ApiEndpoints.CREATE_TICKET,
            data=multipart_data,
            headers=headers
        )

        # Close files after uploading
        if files:
            for key, value in fields.items():
                if key.startswith("user_files"):
                    value[1].close()

        return str(response)

    def _get_assigned_tickets(self, page: Union[int, str] = 1, limit: Union[int, str] = 10, status: Optional[str] = None) -> str:
        # Convert parameters to strings and ensure they're valid
        try:
            page = str(int(page))  # Validates that page is a valid integer
            limit = str(int(limit))  # Validates that limit is a valid integer
        except (ValueError, TypeError):
            page = "1"
            limit = "10"

        params = {
            "page": page,
            "limit": limit,
            "assignee_id": self.user_email,
            "entity_id": self.entity_id
        }

        if status:
            params["status"] = status

        response = self.make_request(
            "POST",
            ApiEndpoints.GET_ASSIGNED_TICKETS,
            data=params  # Using data instead of json for form-encoded data
        )
        return str(response)

    def _get_requested_tickets(self, page: Union[int, str] = 1, limit: Union[int, str] = 10, status: Optional[str] = None) -> str:
        try:
            page = str(int(page))
            limit = str(int(limit))
        except (ValueError, TypeError):
            page = "1"
            limit = "10"

        params = {
            "page": page,
            "limit": limit,
            "requested_by": self.user_email,
            "entity_id": self.entity_id
        }

        if status:
            params["status"] = status

        response = self.make_request(
            "POST",
            ApiEndpoints.GET_REQUESTED_TICKETS,
            data=params
        )
        return str(response)

    def _get_ticket_details(self, ticket_id: Optional[str] = None) -> str:
        params = {
            "entity_id": self.entity_id
        }

        if ticket_id:
            params["id"] = str(int(ticket_id))

        response = self.make_request(
            "POST",
            ApiEndpoints.GET_TICKET_DETAILS,
            data=params
        )
        return str(response)

SYSTEM_PROMPT = """
You are a highly intelligent and context-aware assistant specializing in ticket management. You maintain context between user queries to ensure seamless interaction.

Operations

  1. create_ticket:

    • Required Information: Always collect the following details through guided conversation:
      • title: Request a clear and descriptive title for the ticket.
      • message: Obtain a detailed description of the issue or request.
      • category: Ask for the ticket category.
      • category_id: Assist in identifying the category ID if unknown.
      • requestor_name: Get the full name of the requester.
      • requested_by: Automatically use the user's email as the requester.
      • assigned_to: Collect the email address of the assignee.
      • assignee_name: Get the name of the assignee.
      • status: Set the ticket's initial status (e.g., "Assigned").
      • assigned_by: Automatically use the user's email as the assigner.
      • assigner_name: Use the user's name as the assigner.
      • action: Specify the ticket's initial action (e.g., "Opened").
      • sub_category: Ask for the ticket's sub-category.
      • sub_category_id: Help identify the sub-category ID if unknown.
    • Validate all inputs to ensure accuracy and completeness before proceeding.
  2. get_assigned_tickets:

    • Retrieve tasks assigned to the user.
    • Dynamically ask the user if they would like to:
      • Filter by specific status (e.g., "Open", "Closed").
      • Get details of a specific ticket.
    • Maintain context from previous queries. For example:
      • If the user asks, "List assigned tasks" and follows up with "List closed tasks," assume they mean "Closed assigned tasks."
      • If the user switches focus (e.g., "List requested tasks"), reset context to requested tickets.
    • Support pagination (default: page=1, limit=10).
    • Clearly format results for easy readability.
  3. get_requested_tickets:

    • Retrieve tasks requested by the user.
    • Dynamically ask the user if they would like to:
      • Filter by specific status (e.g., "Open", "Closed").
      • View details of a specific ticket.
    • Maintain context from previous queries. For example:
      • If the user asks, "List requested tasks" and follows up with "Show open tasks," assume they mean "Open requested tasks."
    • Support pagination (default: page=1, limit=10).
    • Clearly format results for easy readability.
  4. get_ticket_details:

    • Retrieve detailed information about a specific ticket using its Ticket ID.
    • If the Ticket ID is not provided:
      • Ask the user, "Would you like me to retrieve all ticket IDs for you?"
      • Present a list of ticket IDs to help the user.
    • Once the ID is identified, provide detailed ticket information.
    • Ensure the response is clear and well-structured.

Context Management:

  • Track user queries to infer context dynamically. Example scenarios:
    • If the user asks "List tasks assigned to me" followed by "Show open tasks," filter the assigned tasks for those with status "Open."
    • If a follow-up query is ambiguous (e.g., "Show closed tasks"), prioritize context from the last related query (e.g., assigned or requested tickets).
    • Reset context if the user explicitly switches focus to another type of task (e.g., "Show requested tasks").
  • Confirm with the user if context is unclear (e.g., "Did you mean open tasks from your assigned tickets?").

General Guidelines:

  • Validation: Always validate user inputs before proceeding with any operation.
  • Automation: Automatically use the user's entity_id and email for operations where applicable.
  • Error Handling: Provide clear error messages and guidance in case of issues.
  • Response Formatting: Ensure all responses are well-organized, concise, and easy to read.
  • Dynamic Filtering: Support filtering based on user-specified criteria such as ticket status (e.g., "Open", "Closed", "Assigned").
  • Intelligent Query Parsing: Understand user queries and map them to the appropriate filters or actions based on ticket attributes.

Example Interactions:

  1. User Query: "List tasks assigned to me."

    • Response: "Here are your assigned tasks (Page 1 of 10). Would you like to filter by status or view details of a specific ticket?"
  2. Follow-up Query: "Show open tasks."

    • Response: "Filtering your assigned tasks for status 'Open'. Here are the results."
  3. Switching Focus: "List requested tasks."

    • Response: "Here are your requested tasks (Page 1 of 10). Would you like to filter by status or view details of a specific ticket?"
  4. Ambiguous Follow-up: "Show closed tasks."

    • Response: "Are you referring to closed tasks from your assigned or requested tickets?"

This design ensures seamless, context-aware interactions and a user-friendly experience.
"""

def initialize_agent(entity_id: str, user_email: str, memory: ConversationBufferMemory):

    llm = ChatOpenAI(
        model="gpt-4-turbo",
        temperature=0,
        openai_api_key=OPENAI_API_KEY,
    )

    ticket_system = TicketSystem(entity_id, user_email)

    tools = [
        StructuredTool.from_function(
            name="create_ticket",
            func=ticket_system.create_ticket,
            description="""Create a new ticket after collecting all required information.

            Required parameters (must be collected through conversation):
            - title: Title of the ticket
            - message: Detailed description of the issue/request
            - category: Ticket category
            - category_id: Category identifier
            - requestor_name: Name of the requester
            - requested_by: Email of the requester
            - assigned_to: Email of the assignee
            - assignee_name: Name of the assignee
            - status: Initial status of the ticket (e.g., 'Assigned')
            - assigned_by: Email of the assigner
            - assigner_name: Name of the assigner
            - action: Initial action of the ticket (e.g., 'Opened')
            - sub_category: Sub-category of the ticket
            - sub_category_id: Sub-category identifier""",
        ),
        Tool(
            name="get_assigned_tickets",
            func=ticket_system._get_assigned_tickets,
            description="Use this tool to get information about assigned tickets. Optional params: page (default=1), limit (default=10), status."
        ),
        Tool(
            name="get_requested_tickets",
            func=ticket_system._get_requested_tickets,
            description="Use this tool to get information about requested tickets. Optional params: page (default=1), limit (default=10), status."
        ),
        Tool(
            name="get_ticket_details",
            func=ticket_system._get_ticket_details,
            description="Use this tool to view detailed information about a specific ticket ID. Optional params: ticket_id"
        )
    ]

    prompt = ChatPromptTemplate.from_messages([
        ("system", SYSTEM_PROMPT),
        MessagesPlaceholder(variable_name="chat_history"),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ])

    llm_with_tools = llm.bind_tools(tools)

    agent = (
        {
            "input": lambda x: x["input"],
            "agent_scratchpad": lambda x: format_to_openai_tool_messages(x["intermediate_steps"]),
            "chat_history": lambda x: memory.load_memory_variables({})["chat_history"],
        }
        | prompt
        | llm_with_tools
        | OpenAIToolsAgentOutputParser()
    )

    return AgentExecutor(
        agent=agent,
        tools=tools,
        verbose=False,
        memory=memory,
    )

class UserMemory:
    def __init__(self):
        self.memory_store = {}

    def get_memory(self, user_id: str) -> ConversationBufferMemory:
        if user_id not in self.memory_store:
            self.memory_store[user_id] = ConversationBufferMemory(
                memory_key="chat_history",
                return_messages=True
            )
        return self.memory_store[user_id]

def main():
    entity_id = "2c376b86-ec69-43c6-801aghyp999"
    user_email = "[email protected]"

    memory = UserMemory().get_memory(user_email)
    agent_executor = initialize_agent(entity_id, user_email, memory)

    print("Ticket Management System (type 'exit' to quit)")
    print("-" * 50)

    while True:
        try:
            user_query = input("You: ").strip()
            if user_query.lower() in ["exit", "quit"]:
                print("Thank you for using the Ticket Management System!")
                break

            response = agent_executor.invoke({"input": user_query})
            print(f"\nAgent: {response['output']}\n")

        except Exception as e:
            print(f"\nError: {str(e)}\nPlease try again or contact support if the issue persists.\n")


if __name__ == "__main__":
    main()
