Proposal Summary:
Currently, Opik supports tracking token counts and associated costs for standard LLM calls made via track_openai. In my project, however, OpenAI LLM calls are made through a raw HTTP POST request rather than the OpenAI client library, so automatic token and cost tracking is not available in the UI. This feature request is to enable automatic token and cost tracking for HTTP POST-based LLM calls in the Opik UI. If this functionality already exists, I would appreciate guidance on implementing it; if not, I kindly request that this feature be considered for development.
Code example of how I call the LLM:
# Raw HTTP POST to the LLM endpoint (no OpenAI client library involved)
async with session.post(
    url, headers=system_info.headers, data=payload, params=params, timeout=300
) as response:
    response_data = await response.read()
    response_json = json.loads(response_data)
Motivation:
Problem Statement: The current limitation prevents teams using HTTP POST requests for LLM calls from benefiting from automatic token and cost tracking in the Opik UI.
Current Workaround: Manually calculating tokens and costs using external tools or custom scripts, which is error-prone and inefficient.
Benefits: This feature would enhance transparency and monitoring capabilities, streamline workflows, and align HTTP POST-based LLM integrations with the standard API integrations in terms of analytics and reporting.
Thank you for considering this request. Please let me know if you need any additional information or if I can assist further in providing context or testing potential solutions.
This is actually possible today! Let me update the docs and get back to you. Essentially, you need to specify the model and provider fields when logging a span.
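For reference, here is a minimal sketch of that approach using the Opik SDK's track decorator and opik_context.update_current_span. It assumes the endpoint returns OpenAI-style token counts in a "usage" object in the response body, and the model name and function signature are placeholders, not taken from the original report:

import json
import aiohttp
from opik import track, opik_context

@track(type="llm")
async def call_llm(session: aiohttp.ClientSession, url: str, payload: bytes) -> dict:
    # Raw HTTP POST, as in the original snippet.
    async with session.post(url, data=payload, timeout=300) as response:
        response_json = json.loads(await response.read())

    # Setting model and provider (plus usage) on the span lets Opik compute cost.
    # The usage keys assume an OpenAI-style "usage" object in the response body.
    usage = response_json.get("usage", {})
    opik_context.update_current_span(
        model="gpt-4o",  # placeholder: use the model your endpoint actually serves
        provider="openai",
        usage={
            "prompt_tokens": usage.get("prompt_tokens", 0),
            "completion_tokens": usage.get("completion_tokens", 0),
            "total_tokens": usage.get("total_tokens", 0),
        },
    )
    return response_json

With model, provider, and usage set on the span, the token counts and estimated cost should then appear in the Opik UI just as they do for the track_openai integration.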