Record actual model used to run the prompt #34
Current schema:
There's other data about individual runs that I'm interested in storing. For non-streaming responses from OpenAI I get back this:

I don't think I get the "usage" block for streaming responses, which is annoying.
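As a sketch of what could go into a debug column (the response shape below follows the OpenAI chat completions format, and `extract_debug` is a hypothetical helper, not part of llm):

```python
import json

def extract_debug(response):
    """Pull the fields worth logging out of an OpenAI-style
    non-streaming response dict. Streaming responses may omit
    "usage", so treat it as optional."""
    debug = {"model": response.get("model")}
    usage = response.get("usage")
    if usage is not None:
        debug["usage"] = usage
    return json.dumps(debug)

# Example non-streaming response (abridged):
response = {
    "model": "gpt-3.5-turbo-0301",
    "usage": {"prompt_tokens": 11, "completion_tokens": 3, "total_tokens": 14},
}
print(extract_debug(response))
```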
I have another feature in the pipeline that will use a different model from the requested one. That may want to store "user requested 'auto' but we ran …". I think in that case I don't actually care that they said "auto".
I'm going to add a `debug` column and a `duration_ms` column:
```python
@migration
def m005_debug(db):
    db["log"].add_column("debug", str)
    db["log"].add_column("duration_ms", int)
```
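If I understand the sqlite-utils `add_column()` method correctly, that migration boils down to two `ALTER TABLE` statements. A stdlib-only sketch (the table layout here is a simplified stand-in, not the real llm schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Simplified stand-in for the existing log table
db.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, model TEXT, prompt TEXT)")

# What add_column("debug", str) / add_column("duration_ms", int) amount to
db.execute("ALTER TABLE log ADD COLUMN debug TEXT")
db.execute("ALTER TABLE log ADD COLUMN duration_ms INTEGER")

columns = [row[1] for row in db.execute("PRAGMA table_info(log)")]
print(columns)
```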
Example output:

```
% llm logs
[
  {
    "id": 435,
    "model": "gpt-3.5-turbo",
    "timestamp": "2023-06-16 07:46:45.781006",
    "prompt": "say one duration",
    "system": null,
    "response": "1 hour",
    "chat_id": null,
    "debug": "{\"model\": \"gpt-3.5-turbo-0301\"}",
    "duration_ms": 820
  },
  {
    "id": 434,
    "model": "gpt-3.5-turbo",
    "timestamp": "2023-06-16 07:46:42.106479",
    "prompt": "say one duration",
    "system": null,
    "response": "One hour.",
    "chat_id": null,
    "debug": "{\"model\": \"gpt-3.5-turbo-0301\", \"usage\": {\"prompt_tokens\": 11, \"completion_tokens\": 3, \"total_tokens\": 14}}",
    "duration_ms": 1364
  },
```
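Since `debug` is stored as a JSON string, the model that was actually used can be pulled back out with SQLite's built-in `json_extract()`. A sketch against a minimal stand-in table (not the real llm schema):

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, model TEXT, debug TEXT)")
db.execute(
    "INSERT INTO log (model, debug) VALUES (?, ?)",
    ("gpt-3.5-turbo", json.dumps({"model": "gpt-3.5-turbo-0301"})),
)

# Requested model vs. the model actually used
row = db.execute(
    "SELECT model, json_extract(debug, '$.model') FROM log"
).fetchone()
print(row)  # ('gpt-3.5-turbo', 'gpt-3.5-turbo-0301')
```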
Updated schema: https://llm.datasette.io/en/latest/logging.html#sql-schema
Right now I'm just recording the model that was requested, e.g. `gpt-3.5-turbo`, in the `model` column. But... it turns out the response from OpenAI includes this - `"model": "gpt-3.5-turbo-0301"` - and there are meaningful differences between those model versions, e.g. the latest is `gpt-3.5-turbo-0613` but you have to opt into it.

I'd like to record the model that was actually used. Not sure how best to put this in the schema though, since it may only make sense for OpenAI models.