[Question]: Replacing OpenAI LLMs and LangChain functions in a LangChain setup with Ollama LLMs and llama_index or Ollama functions #17684
Comments
To replace OpenAI LLMs and LangChain functions with Ollama LLMs and LlamaIndex, follow these steps:
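In outline: run an Ollama server locally, pull the model you want, and construct llama_index's `Ollama` LLM wherever the OpenAI one was built. A minimal sketch of the swap, assuming the `llama-index-llms-ollama` package is installed and an Ollama server is running locally:

```python
# Minimal sketch: swapping an OpenAI LLM for a local Ollama model in LlamaIndex.
# Assumes `pip install llama-index llama-index-llms-ollama` and that the model
# has already been pulled (e.g. `ollama pull mixtral:8x7b`).
from llama_index.core import Settings
from llama_index.llms.ollama import Ollama

# Before: llm = OpenAI(model="gpt-3.5-turbo", api_key=OPENAI_API_KEY)
llm = Ollama(model="mixtral:8x7b", request_timeout=120.0)

# Make the local model the default LLM for all LlamaIndex components.
Settings.llm = llm

# Quick smoke test: no OpenAI API key is involved anywhere.
print(llm.complete("Say hello in one sentence."))
```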
By following these steps, you should be able to transition from using OpenAI LLMs to Ollama LLMs within your LangChain setup.
Can you give me the custom code for the `bind_tools` functionality? I can provide extra code if needed.
Here is a basic `CustomLLMWithTools` wrapper:

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.ollama import Ollama


class CustomLLMWithTools:
    def __init__(self, llm, tools):
        self.llm = llm
        self.tools = {tool.name: tool for tool in tools}

    def chat_with_tools(self, user_msg):
        # Process the user message and determine if a tool should be used
        # (Ollama's chat API expects a list of ChatMessage objects)
        response = self.llm.chat([ChatMessage(role="user", content=user_msg)])
        tool_calls = self.extract_tool_calls(response)
        # Execute tool calls and integrate results
        for tool_call in tool_calls:
            tool = self.tools.get(tool_call["tool_name"])
            if tool:
                tool_result = tool(**tool_call["tool_kwargs"])
                # Integrate tool result into the response
                response = self.integrate_tool_result(response, tool_result)
        return response

    def extract_tool_calls(self, response):
        # Custom logic to extract tool calls from the LLM response
        # This is a placeholder and should be implemented based on your needs
        return []

    def integrate_tool_result(self, response, tool_result):
        # Custom logic to integrate tool results into the LLM response
        # This is a placeholder and should be implemented based on your needs
        return response


# Example usage
llm = Ollama(model="mixtral:8x7b", request_timeout=120.0)
tools = [
    # Define your tools here
]
llm_with_tools = CustomLLMWithTools(llm, tools)
response = llm_with_tools.chat_with_tools("Hello!")
```

This example provides a basic structure for integrating tools with an LLM. You'll need to implement the `extract_tool_calls` and `integrate_tool_result` placeholders to match your model's response format.
Can you write that custom logic and integrate it into the code below?

```python
import os

from dotenv import load_dotenv
from langchain.memory import ConversationBufferMemory

# Load environment variables
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

# API Endpoints
BASE_URL = "https://abc-gcp-dev/inventory-task/v1"

class ApiEndpoints:
    ...

class ApiResponse:
    ...

class TicketSystem:
    SYSTEM_PROMPT = """
    Operations
    Context Management:
    General Guidelines:
    Example Interactions:
    This design ensures seamless, context-aware interactions and a user-friendly experience.
    """

def initialize_agent(entity_id: str, user_email: str, memory: ConversationBufferMemory):
    ...

class UserMemory:
    ...

def main():
    ...

if __name__ == "__main__":
    main()
```
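For reference, a sketch of how `initialize_agent` could construct a local tool-calling model, assuming the `langchain-ollama` partner package and a model with tool-calling support; `get_ticket_status` is a hypothetical placeholder tool:

```python
from langchain_core.tools import tool
from langchain_ollama import ChatOllama
from langchain.memory import ConversationBufferMemory


@tool
def get_ticket_status(ticket_id: str) -> str:
    """Look up the status of a ticket (hypothetical placeholder tool)."""
    return f"Ticket {ticket_id} is open."


def initialize_agent(entity_id: str, user_email: str, memory: ConversationBufferMemory):
    # entity_id, user_email, and memory are kept from the original signature.
    # Local Ollama chat model in place of ChatOpenAI; no OPENAI_API_KEY required.
    llm = ChatOllama(model="llama3.1", temperature=0)
    # bind_tools has the same shape as on ChatOpenAI, provided the
    # chosen local model supports tool calling (llama3.1 does).
    return llm.bind_tools([get_ticket_status])
```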
Question Validation
Question
Hi, I am using a LangChain setup. I need to replace the OpenAI API keys with Ollama LLMs, and also the `llm.bind_tools(tools)` functionality, using llama_index or Ollama functions if needed.

```python
def initialize_agent(entity_id: str, user_email: str, memory: ConversationBufferMemory):
```
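On the llama_index side, a tool-calling agent is the closest analogue to `bind_tools`. A minimal sketch, assuming llama_index ≥ 0.10 with the `llama-index-llms-ollama` package; the `multiply` tool is illustrative only:

```python
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.ollama import Ollama


def multiply(a: int, b: int) -> int:
    """Multiply two integers (illustrative tool)."""
    return a * b


# Tools are plain Python functions wrapped as FunctionTool objects.
tools = [FunctionTool.from_defaults(fn=multiply)]

llm = Ollama(model="mixtral:8x7b", request_timeout=120.0)

# ReActAgent drives tool selection via prompting, so it also works with
# local models that lack native function-calling support.
agent = ReActAgent.from_tools(tools, llm=llm, verbose=True)
print(agent.chat("What is 6 times 7?"))
```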