Update supported_llm.mdx #1219
base: master
Conversation
The current example of Ollama usage doesn't work and needs to be updated.
👍 Looks good to me! Reviewed everything up to 83269f7 in 45 seconds
More details
- Looked at 45 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 1 drafted comment based on config settings
1. docs/faq/supported_llms/supported_llm.mdx:28
- Draft comment: The model name should be consistent with the JavaScript example. Consider using model="llama3.1" for consistency.
- Reason this comment was not posted:
Decided after close inspection that this draft comment was likely wrong and/or not actionable:
The difference in model names could be intentional - ":8b" likely specifies a specific model size variant. Making them match could actually be incorrect if they're meant to use different variants. Without deeper knowledge of Ollama's model naming conventions and requirements, suggesting a change could be misleading.
I might be underestimating the importance of consistency in documentation examples. Having different model names might confuse readers.
While consistency is valuable in docs, we can't assume the difference is a mistake - it could be intentionally showing different model variants. Making them match could give incorrect information.
Delete the comment. Without being certain about Ollama's model naming requirements, suggesting to change the model name could lead to incorrect information.
Code Review Summary

The changes in this PR improve the documentation by updating the Ollama integration example to use the latest OpenAI client pattern. Overall, the changes are well-structured and necessary.

Positive Aspects:
✅ Updates to modern OpenAI client initialization

Suggestions for Improvement:

Overall Assessment:
This is a good quality update that improves the documentation by providing a working, modern example of Ollama integration. The changes align with current best practices and make the code more maintainable.

Rating: 8/10 - Good quality update with minor suggestions for improvement.
Important
Update Python example for Ollama in supported_llm.mdx to fix non-working code.

- Add from openai import OpenAI in supported_llm.mdx.
- Change openai.base_url and openai.api_key to OpenAI client initialization.
- Update tools retrieval to use Action.GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER.
- Add result = toolset.handle_tool_calls(response) and print(result) to handle and display tool call results.

This description was created by for 83269f7. It will automatically update as commits are pushed.