Is your feature request related to a problem? Please describe.
The contents endpoint of the detector API has generally been a quick way for users to try out detectors. This endpoint is currently integrated with text generation, allowing detection on unary or streaming text generation through the orchestrator, and integration with chat completions through this orchestrator endpoint is in progress.
It would therefore be beneficial to consider how some LLMs could be used as detectors in a workflow with LLM generation, whether on individual chat completion messages [input to chat completions], on choice messages [output of chat completions], or on plain text generation input/output.
Describe the solution you'd like
Most of the current adapter-supported model classes, like Granite Guardian or Llama Guard, take in chat history. We want to first explore how these model classes can be used to analyze general text generation input/output.
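One possible direction, sketched below as a hypothetical example (the function names and detector interface here are illustrative assumptions, not the actual adapter API): plain text generation input/output could be framed as a minimal single-message chat history, so that chat-history-oriented model classes like Granite Guardian or Llama Guard can score it without a real conversation.

```python
# Hypothetical sketch: adapting a chat-history-based detector so it can
# analyze plain text generation input/output. All names here are
# illustrative assumptions, not the real adapter interface.

def wrap_text_as_chat(text: str, is_output: bool = False) -> list[dict]:
    """Frame plain generation input/output as a minimal chat history.

    Generation input is treated as a single "user" message; generation
    output as a single "assistant" message.
    """
    role = "assistant" if is_output else "user"
    return [{"role": role, "content": text}]


def detect_on_messages(messages: list[dict], detector) -> list[dict]:
    """Run a per-message detector over each message in a chat history,
    returning one detection result per message."""
    return [
        {"message_index": i, "detection": detector(msg["content"])}
        for i, msg in enumerate(messages)
    ]


# Toy detector standing in for a real guardrail model call.
def toy_detector(text: str) -> dict:
    flagged = "attack" in text.lower()
    return {"flagged": flagged, "score": 0.9 if flagged else 0.1}


# Text generation input scored as if it were a one-message chat.
results = detect_on_messages(
    wrap_text_as_chat("How do I attack a server?"), toy_detector
)
```

The same wrapper could be applied to chat completion inputs (per user message) or to choice messages (per assistant output), which is one way the contents-style and chat-completion-style flows might share a detector implementation.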