Add warning comments referring to unimplemented functionality
vmpuri committed Jul 31, 2024
1 parent a3bf37d commit 6303c8c
Showing 2 changed files with 16 additions and 0 deletions.
11 changes: 11 additions & 0 deletions api/api.py
@@ -224,6 +224,17 @@ def __init__(self, *args, **kwargs):
def completion(self, completion_request: CompletionRequest):
"""Handle a chat completion request and yield a chunked response.
** Warning ** : Not all arguments of the CompletionRequest are consumed as the server isn't completely implemented.
Current treatment of parameters is described below.
- messages: The server consumes the final element of the array as the prompt.
- model: This has no impact on the server state, i.e. changing the model in the request
will not change which model is responding. Instead, use the --model flag to select the model when starting the server.
- temperature: Controls the randomness of the response; the server consumes the temperature value from the request.
See https://github.com/pytorch/torchchat/issues/973 for more details.
Args:
completion_request: Request object with prompt and other parameters.
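The parameter treatment described above can be sketched in a few lines. This is an illustrative, hypothetical helper (not code from the commit): it shows which fields of an OpenAI-style request body the partially implemented server actually consumes, per the docstring.

```python
def extract_consumed_params(completion_request: dict) -> dict:
    """Return only the fields the server is described as consuming.

    Illustrative sketch: the payload shape follows the OpenAI Chat API;
    the function name and return shape are assumptions for this example.
    """
    messages = completion_request.get("messages", [])
    # Per the docstring, only the final element of `messages` becomes the prompt.
    prompt = messages[-1]["content"] if messages else ""
    return {
        "prompt": prompt,
        "temperature": completion_request.get("temperature", 1.0),
        # `model` is deliberately omitted: the serving model is fixed by the
        # --model flag at startup, not by the request body.
    }

request_body = {
    "model": "some-model-name",   # ignored by the server
    "temperature": 0.7,
    "messages": [
        {"role": "system", "content": "You are helpful."},
        {"role": "user", "content": "Hello!"},
    ],
}
consumed = extract_consumed_params(request_body)
print(consumed)  # prompt comes from the last message only
```

Note that the system message is silently dropped in this treatment, which is part of what issue #973 tracks.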
5 changes: 5 additions & 0 deletions server.py
@@ -21,6 +21,11 @@ def chat_endpoint():
"""
Endpoint for the Chat API. This endpoint is used to generate a response to a user prompt.
This endpoint emulates the behavior of the OpenAI Chat API. (https://platform.openai.com/docs/api-reference/chat)
** Warning ** : Not all arguments of the CompletionRequest are consumed.
See https://github.com/pytorch/torchchat/issues/973 and the OpenAiApiGenerator class for more details.
"""
data = request.get_json()
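A request to this endpoint might be composed as below. This is a hedged sketch: the route, host, and port are assumptions (the OpenAI Chat API uses /v1/chat/completions; check server.py for the actual registered path), and the live HTTP call is left commented out so the snippet runs without a server.

```python
import json

# OpenAI-style chat payload; per the warning above, `model` is currently
# ignored and only the last message is consumed as the prompt.
payload = {
    "model": "any-model-name",
    "temperature": 0.9,
    "messages": [{"role": "user", "content": "Tell me a joke."}],
}
body = json.dumps(payload)

# With the server running locally, the request could be sent like this
# (host/port/route are assumed, so the call is kept commented out):
#
#   import urllib.request
#   req = urllib.request.Request(
#       "http://127.0.0.1:5000/chat",
#       data=body.encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   with urllib.request.urlopen(req) as resp:
#       print(resp.read().decode())
print(body)
```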

