Let the tool decide if the chain should be called again with the return value #168

Closed
OskarStark opened this issue Dec 20, 2024 · 4 comments · Fixed by #187

Comments

@OskarStark
Contributor

OskarStark commented Dec 20, 2024

In my case I craft a dedicated response like:

{
  "foo": "bar"
}

and I don't want to feed it back to the LLM for another roundtrip.

That happens here (at line 55):

while ($output->response instanceof ToolCallResponse) {
    $toolCalls = $output->response->getContent();
    $messages[] = Message::ofAssistant(toolCalls: $toolCalls);

    foreach ($toolCalls as $toolCall) {
        $result = $this->toolBox->execute($toolCall);
        $messages[] = Message::ofToolCall($toolCall, $result);
    }

    $output->response = $this->chain->call($messages, $output->options);
}
I don't think we should make it statically configurable, like "Tool A will always feed back to the API", because the decision can depend on runtime state. Maybe a ToolResponse class as an additional allowed return type?

Dummy code:

final class ToolResponse
{
    public mixed $value;   // string, float, etc.
    public bool $callLlm;  // feed the value back to the LLM?
}
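
For illustration, a rough sketch of how the loop quoted above could honor such a return type (hypothetical; ToolResponse is only the proposal, not an existing class in the library):

foreach ($toolCalls as $toolCall) {
    $result = $this->toolBox->execute($toolCall);

    // Hypothetical: the tool opted out of the extra LLM roundtrip at runtime.
    if ($result instanceof ToolResponse && !$result->callLlm) {
        return $result->value; // hand the value back to the caller directly
    }

    $messages[] = Message::ofToolCall($toolCall, $result);
}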
@OskarStark
Contributor Author

A ToolResponse would couple the tool to the LlmChain bundle/lib, so what about adding more info to the #[AsTool] attribute instead, for example:

- #[AsTool(name: 'foo', description: 'returns foo')]
+ #[AsTool(name: 'foo', description: 'returns foo', directReturn: true)] 

Naming to be discussed.
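
For illustration, a rough sketch of what the attribute could look like with such a flag (directReturn is the proposed, not yet existing, option; the attribute target flags are assumptions):

#[\Attribute(\Attribute::TARGET_CLASS | \Attribute::IS_REPEATABLE)]
final class AsTool
{
    public function __construct(
        public readonly string $name,
        public readonly string $description,
        public readonly bool $directReturn = false, // skip feeding the result back to the LLM
    ) {
    }
}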

@chr-hertel
Member

This also conflicts with the possibility of having multiple tool calls, so we would need to require that this setting is only active in combination with the option parallel_tool_calls: false.
See https://platform.openai.com/docs/guides/function-calling#parallel-function-calling-and-structured-outputs

On the other hand, I'd be more in favor of a generalized extension point for intercepting the chain with custom logic. An event dispatcher comes to my mind immediately.
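
For illustration, a rough sketch of such an interception point using the Symfony EventDispatcher (the ToolResultEvent class and its wiring are hypothetical, not existing library API):

use Symfony\Component\EventDispatcher\EventDispatcher;
use Symfony\Contracts\EventDispatcher\Event;

// Hypothetical event, dispatched after each tool execution.
final class ToolResultEvent extends Event
{
    public bool $callLlm = true; // listeners may flip this to short-circuit the chain

    public function __construct(
        public readonly mixed $result,
    ) {
    }
}

$dispatcher = new EventDispatcher();

// A listener can then decide at runtime whether the result goes back to the LLM.
$dispatcher->addListener(ToolResultEvent::class, function (ToolResultEvent $event): void {
    if (\is_array($event->result) && isset($event->result['foo'])) {
        $event->callLlm = false; // return the crafted response directly
    }
});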

@OskarStark
Contributor Author

As discussed, we can go with this option and implement something with an event dispatcher as a follow-up.

@chr-hertel
Member

Okay, not as easy as I thought :D
