Support HuggingFace's inference API #352
Conversation
Left one comment to understand one change. Looks good to me otherwise.
@@ -93,7 +93,7 @@ def parse_from_prompt(self, prompt: str) -> None:
         response: openai.ChatCompletion | Exception = (
             chat_api.generate_one_completion(
                 parsing_prompt_for_chatgpt,
-                temperature=0,
+                temperature=0.01,
Is this solving a problem related to HuggingFace generation needing a temperature > 0? Or is it added so that retries have a chance of succeeding if the call fails initially?
Yes, this solves the problem that HuggingFace generation requires a positive temperature.
Maybe we can handle this as part of the LiteLLM defaults? Thoughts @saum7800 @viswavi?
Can you elaborate on what you mean? We use different temperatures in different places in Prompt2Model, so we would prefer not to rely on LiteLLM's default values (in case we want to set the temperature to something specific).
But I think that preventing a temperature of 0 when going through LiteLLM (or bumping 0 up to 0.0001) is a good idea, since a temperature of 0 is valid for OpenAI's models but not for HuggingFace's.
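A minimal sketch of that idea, assuming a thin wrapper around litellm.completion rather than a change to LiteLLM itself (the helper name and the 0.0001 floor are illustrative, not code from this PR):

import litellm

MIN_TEMPERATURE = 0.0001  # illustrative floor for backends that reject temperature == 0


def completion_with_temperature_floor(model, messages, temperature=1.0, **kwargs):
    # Bump a zero temperature to a small positive value so that HuggingFace
    # inference endpoints (which reject temperature == 0) still work, while
    # callers that rely on OpenAI's behavior are changed only negligibly.
    if temperature is not None and temperature <= 0:
        temperature = MIN_TEMPERATURE
    return litellm.completion(model=model, messages=messages, temperature=temperature, **kwargs)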
Gotcha. Since we only pass the temperature when the user sets it, I guess this is a non-issue on our end.
Thanks for the feedback!
Description
To support HuggingFace's inference API, we need to expose the "api_base" parameter for LiteLLM. This PR exposes that parameter to the APIAgent.
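With that parameter exposed, pointing the agent at a HuggingFace inference endpoint could look roughly like the sketch below; the import path, model name, and endpoint URL are illustrative assumptions rather than values prescribed by this PR (only the api_base parameter itself comes from the description above):

from prompt2model.utils.api_tools import APIAgent

# "huggingface/<repo_id>" is LiteLLM's naming convention for HuggingFace models;
# the endpoint URL below is a placeholder for a real inference endpoint.
agent = APIAgent(
    model_name="huggingface/bigcode/starcoder",
    api_base="https://my-endpoint.us-east-1.aws.endpoints.huggingface.cloud",
)
response = agent.generate_one_completion(
    "Summarize this prompt in one sentence.",
    temperature=0.01,  # HuggingFace generation requires a positive temperature
)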
References
https://docs.litellm.ai/docs/providers/huggingface
Blocked by
N/A