feat: Added support for Claude 3+ Chat API in Bedrock #2870
base: main
Conversation
thanks for the PR @RyanKadri! A few tiny suggestions but you also have merge conflicts so I didn't do a thorough review yet.
```diff
  const chatCompletionMessage = new LlmChatCompletionMessage({
    agent,
    segment,
    bedrockCommand,
    bedrockResponse,
    isResponse: true,
-   index: index + 1,
+   index: index,
```
```diff
- index: index,
+ index,
```
this is a little confusing. what about splitting vars so the prompt index can be `promptIndex` and then in here you have `completionIndex`, then this index is `promptIndex + completionIndex`
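To illustrate the naming suggestion above, here is a rough sketch (not the PR's actual code; the `assignIndexes` helper and message shapes are hypothetical) of keeping separate `promptIndex` and `completionIndex` counters, so completion events are numbered after the prompt events:

```javascript
// Hypothetical sketch: prompt messages and completion messages each get
// their own counter, and a completion's overall event index is
// promptMessages.length + completionIndex so it continues after the prompts.
function assignIndexes(promptMessages, completionMessages) {
  const events = []
  promptMessages.forEach((msg, promptIndex) => {
    events.push({ content: msg.content, role: msg.role, index: promptIndex })
  })
  completionMessages.forEach((msg, completionIndex) => {
    events.push({
      content: msg.content,
      role: msg.role,
      isResponse: true,
      index: promptMessages.length + completionIndex
    })
  })
  return events
}
```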
```js
  content: contextMessage.content,
  role: contextMessage.role,
  bedrockResponse,
  index: index,
```
```diff
- index: index,
+ index,
```
please run
Sounds good 👍. I have a couple other changes incoming for another unit test scenario I want to cover
Force-pushed from 23c37dc to c31dace
(Looking into unit tests)
Description

The current version of the Bedrock instrumentation is mostly geared toward supporting the Claude Text Completions API but may not split messages as expected in multi-turn conversations with the Messages API.

The Text Completions API supports a single prompt message, but the Messages API lets you provide multiple distinct messages (context and prompt) to the model. Currently the agent combines those separate context messages into a single prompt, but it should probably follow the pattern we use in OpenAI instrumentation of splitting them into multiple `LlmChatCompletionMessage` events (still one `LlmChatCompletionSummary` per API call).

To make things more complicated, the Messages API also allows messages to contain sub-chunks to support multi-modal use cases (see the Anthropic docs for how this works). For example, one conceptual multi-modal message might be `What is this animal <picture of an elephant> and where does it live?`. For these cases, if a single message has multiple chunks, this PR attempts to join them into one string representation with reasonable placeholders for the non-text components. Further down the road, the AI Monitoring team needs to do a bit of work to see if we can represent those non-text chunks as placeholders in a string. For now, though, we just want something to show.

How to Test
The AI Monitoring sample app is running this version of the instrumentation for Anthropic models. The AI Monitoring team can help you out with where to see the data it produces.
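For reference, the chunk-joining behavior described above could look roughly like the following sketch (the `stringifyMessageContent` helper and the `<image>` placeholder text are illustrative assumptions, not the agent's actual output; the chunk shape follows Anthropic's content-block format of `{ type: 'text', text }` vs. other typed blocks):

```javascript
// Hypothetical sketch: flatten a multi-modal Messages API message into a
// single string, substituting a placeholder for each non-text chunk.
function stringifyMessageContent(content) {
  if (typeof content === 'string') {
    return content // plain text message, nothing to join
  }
  return content
    .map((chunk) => (chunk.type === 'text' ? chunk.text : `<${chunk.type}>`))
    .join('')
}
```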
Related Issues
N/A