Updated documentation for models section #1567
base: master
Conversation
Thanks @1sarthakbhardwaj! PR naming needs to follow the standard here: https://github.com/camel-ai/camel/blob/master/CONTRIBUTING.md#pull-request-item-stage
Left some comments below.
## 2. Supported Model Platforms in CAMEL

CAMEL supports a wide range of models, including [OpenAI’s GPT series](https://platform.openai.com/docs/models), [Meta’s Llama models](https://www.llama.com/), [DeepSeek's R1](https://www.deepseek.com/), and more. The table below lists all supported model platforms:
For DeepSeek, not only R1 is supported.
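A minimal sketch of what broader DeepSeek support could look like, assuming CAMEL's `ModelFactory.create` with `ModelPlatformType.DEEPSEEK`; the specific `ModelType` members used here are assumptions, not values taken from this PR:

```python
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

# DeepSeek support is not limited to R1: both the reasoner (R1) and the
# chat model can be created through the same factory call.
# DEEPSEEK_REASONER / DEEPSEEK_CHAT are assumed enum members.
reasoner_model = ModelFactory.create(
    model_platform=ModelPlatformType.DEEPSEEK,
    model_type=ModelType.DEEPSEEK_REASONER,
)
chat_model = ModelFactory.create(
    model_platform=ModelPlatformType.DEEPSEEK,
    model_type=ModelType.DEEPSEEK_CHAT,
)
```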
| **SAMBA** | Meta-Llama-3.1-8B-Instruct, Meta-Llama-3.1-70B-Instruct, Meta-Llama-3.1-405B-Instruct |
| **SGLANG** | meta-llama/Meta-Llama-3.1-8B-Instruct, meta-llama/Meta-Llama-3.1-70B-Instruct, meta-llama/Meta-Llama-3.1-405B-Instruct, meta-llama/Llama-3.2-1B-Instruct, mistralai/Mistral-Nemo-Instruct-2407, mistralai/Mistral-7B-Instruct-v0.3, Qwen/Qwen2.5-7B-Instruct, Qwen/Qwen2.5-32B-Instruct, Qwen/Qwen2.5-72B-Instruct |
The list is not comprehensive: some models are not included under the enum list, but users can still call them by passing a string value directly to model_type; see the sketch below.
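A minimal sketch of that usage, assuming `ModelFactory.create` accepts a plain string for `model_type` when a model is missing from the `ModelType` enum; the model name and local SGLang URL below are placeholders:

```python
from camel.models import ModelFactory
from camel.types import ModelPlatformType

# A model not listed in the ModelType enum can still be used by passing
# the provider's model identifier directly as a string.
model = ModelFactory.create(
    model_platform=ModelPlatformType.SGLANG,
    model_type="Qwen/Qwen2.5-14B-Instruct",  # placeholder model name
    url="http://127.0.0.1:30000/v1",  # assumed local SGLang endpoint
)
```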
- OpenAI GPT-4O Mini: Fast and efficient.
- SambaNova’s Llama 405B: High capacity but slower response.
- Local Inference: SGLang reached a peak of 220.98 tokens per second, compared to vLLM’s 107.2 tokens per second.
Please keep the original Key Insight part. We didn't aim to compare the speed of different model platforms; the insight highlighted the relationship between model size and inference time.
Description
Describe your changes in detail.
Descriptive Headings & Structure:
The headings now clearly outline the content hierarchy (e.g., "Supported Model Platforms in CAMEL" and "How to Use Models via API Calls"), which helps both search engines and users navigate the document.
Improved Readability:
The language has been made more concise and action-oriented.
Instructions are laid out in clear, step-by-step formats.
The use of bullet points and tables makes it easier to scan key information quickly.
Enhanced Visual Elements:
The image now includes descriptive alt text, which improves accessibility and image SEO.
Call-to-Action & Next Steps:
Added prompts like “Explore the Code” and a “Next Steps” section that guide users to further explore our documentation and related content.
Motivation and Context
Why is this change required? What problem does it solve?
If it fixes an open issue, please link to the issue here.
You can use the syntax `close #15213` if this solves the issue #15213.

Types of changes
What types of changes does your code introduce? Put an `x` in all the boxes that apply:

Implemented Tasks
Checklist
Go over all the following points, and put an `x` in all the boxes that apply. If you are unsure about any of these, don't hesitate to ask. We are here to help!