
Support HuggingFace's inference API #352

Merged 2 commits on Sep 12, 2023
2 changes: 1 addition & 1 deletion prompt2model/prompt_parser/instr_parser.py
@@ -93,7 +93,7 @@ def parse_from_prompt(self, prompt: str) -> None:
        response: openai.ChatCompletion | Exception = (
            chat_api.generate_one_completion(
                parsing_prompt_for_chatgpt,
-                temperature=0,
+                temperature=0.01,
Collaborator: Is this solving a problem relating to Hugging Face generation needing a temperature > 0? Or is it added so that the retries have a chance of working out in case it fails initially?

Collaborator (author): Yes, this solves the problem that Hugging Face generation requires a positive temperature.

Contributor: Maybe we can handle this as part of the litellm defaults? Thoughts, @saum7800 @viswavi?

Collaborator (author): Can you elaborate on what you mean? We use different temperatures in different places in Prompt2Model, so we would prefer not to rely on LiteLLM's default values (in case we want to set the temperature to something specific).

But I think that preventing a temperature of 0 for LiteLLM (or bumping 0 up to 0.0001) is a good idea, since a temperature of 0 is valid for OpenAI's models but not for Hugging Face's inference API.

Contributor: Gotcha. Since we only pass the temperature when the user sets it, I guess this is a non-issue on our end. Thanks for the feedback!

                presence_penalty=0,
                frequency_penalty=0,
            )
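
A minimal sketch of the clamping idea raised in the review thread above, not code from this PR; the helper name and the 0.01 floor are assumptions (the PR simply hard-codes 0.01 at this call site):

```python
# Hypothetical helper illustrating the "bump zero temperature" idea from the review
# thread. Hugging Face's inference API rejects temperature == 0, while OpenAI accepts
# it, so a wrapper could raise 0 to a small positive value before sending the request.
def clamp_temperature(temperature: float, floor: float = 0.01) -> float:
    """Return a provider-safe temperature, replacing 0 with a small positive floor."""
    return max(temperature, floor)


assert clamp_temperature(0) == 0.01    # OpenAI-style 0 becomes a Hugging Face-safe 0.01
assert clamp_temperature(0.7) == 0.7   # explicit user settings pass through unchanged
```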
5 changes: 5 additions & 0 deletions prompt2model/utils/api_tools.py
@@ -45,16 +45,19 @@ def __init__(
        self,
        model_name: str = "gpt-3.5-turbo",
        max_tokens: int | None = None,
+        api_base: str | None = None,
    ):
        """Initialize APIAgent with model_name and max_tokens.

        Args:
            model_name: Name of the model to use (by default, gpt-3.5-turbo).
            max_tokens: The maximum number of tokens to generate. Defaults to the max
                value for the model if available through litellm.
+            api_base: Custom endpoint for Hugging Face's inference API.
        """
        self.model_name = model_name
        self.max_tokens = max_tokens
+        self.api_base = api_base
        if max_tokens is None:
            try:
                self.max_tokens = litellm.utils.get_max_tokens(model_name)
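
For illustration, this is how the new argument might be used; the endpoint URL and model name below are placeholders, not values from this PR:

```python
# Hypothetical usage of the new api_base argument (URL and model are placeholders).
from prompt2model.utils.api_tools import APIAgent

agent = APIAgent(
    model_name="huggingface/bigscience/bloom",  # litellm's "huggingface/<repo>" routing
    api_base="https://my-endpoint.us-east-1.aws.endpoints.huggingface.cloud",
)
```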
@@ -99,6 +102,7 @@ def generate_one_completion(
            messages=[
                {"role": "user", "content": f"{prompt}"},
            ],
+            api_base=self.api_base,
            temperature=temperature,
            presence_penalty=presence_penalty,
            frequency_penalty=frequency_penalty,
@@ -144,6 +148,7 @@ async def _throttled_completion_acreate(
        return await acompletion(
            model=model,
            messages=messages,
+            api_base=self.api_base,
            temperature=temperature,
            max_tokens=max_tokens,
            n=n,
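Roughly, the new argument simply flows through to litellm, which forwards it to the configured Hugging Face endpoint. A sketch of the equivalent direct call, where the model name, endpoint URL, and prompt are placeholders rather than values from this PR:

```python
# Rough equivalent of what generate_one_completion now does when api_base is set.
import litellm

response = litellm.completion(
    model="huggingface/tiiuae/falcon-7b-instruct",
    messages=[{"role": "user", "content": "Generate a short greeting."}],
    api_base="https://my-endpoint.us-east-1.aws.endpoints.huggingface.cloud",
    temperature=0.01,  # positive temperature, per the instr_parser.py change above
)
print(response.choices[0].message.content)
```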