
Updated documentation for models section #1567

Open · wants to merge 1 commit into master
Conversation

1sarthakbhardwaj (Collaborator) commented:

Description

Describe your changes in detail.

Inline Links:
Added inline links to external resources such as OpenAI’s GPT series, Meta’s Llama models, and DeepSeek's R1. This enriches the content for readers and also improves our SEO by linking to authoritative sources.

Descriptive Headings & Structure:
The headings now clearly outline the content hierarchy (e.g., "Supported Model Platforms in CAMEL" and "How to Use Models via API Calls"), which helps both search engines and users navigate the document.

Improved Readability:

  • The language has been made more concise and action-oriented.
  • Instructions are laid out in clear, step-by-step formats.
  • Bullet points and tables make it easier to scan key information quickly.

Enhanced Visual Elements:
The image now includes descriptive alt text, which improves accessibility and image SEO.

Call-to-Action & Next Steps:
Added prompts like “Explore the Code” and a “Next Steps” section that guide users to further explore our documentation and related content.

Motivation and Context

Why is this change required? What problem does it solve?
If it fixes an open issue, please link to the issue here.
You can use the syntax close #15213 if this solves the issue #15213

  • I have raised an issue to propose this change (required for new features and bug fixes)

Types of changes

What types of changes does your code introduce? Put an x in all the boxes that apply:

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds core functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)
  • Example (update in the example folder)

Implemented Tasks

  • Subtask 1
  • Subtask 2
  • Subtask 3

Checklist

Go over all the following points, and put an x in all the boxes that apply.
If you are unsure about any of these, don't hesitate to ask. We are here to help!

  • I have read the CONTRIBUTION guide. (required)
  • My change requires a change to the documentation.
  • I have updated the tests accordingly. (required for a bug fix or a new feature)
  • I have updated the documentation accordingly.

@Wendong-Fan (Member) left a comment:


Thanks @1sarthakbhardwaj! The PR name needs to follow the standard here: https://github.com/camel-ai/camel/blob/master/CONTRIBUTING.md#pull-request-item-stage

Left some comments below


## 2. Supported Model Platforms in CAMEL

CAMEL supports a wide range of models, including [OpenAI’s GPT series](https://platform.openai.com/docs/models), [Meta’s Llama models](https://www.llama.com/), [DeepSeek's R1](https://www.deepseek.com/), and more. The table below lists all supported model platforms:
For DeepSeek, R1 is not the only supported model.

Comment on lines +28 to +29
| **SAMBA** | Meta-Llama-3.1-8B-Instruct, Meta-Llama-3.1-70B-Instruct, Meta-Llama-3.1-405B-Instruct |
| **SGLANG** | meta-llama/Meta-Llama-3.1-8B-Instruct, meta-llama/Meta-Llama-3.1-70B-Instruct, meta-llama/Meta-Llama-3.1-405B-Instruct, meta-llama/Llama-3.2-1B-Instruct, mistralai/Mistral-Nemo-Instruct-2407, mistralai/Mistral-7B-Instruct-v0.3, Qwen/Qwen2.5-7B-Instruct, Qwen/Qwen2.5-32B-Instruct, Qwen/Qwen2.5-72B-Instruct |
The list isn't comprehensive: some models aren't listed in the enum, but users can still call them by passing a string value directly to model_type.
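
For anyone following this thread, here is a minimal sketch of what that could look like, assuming CAMEL's `ModelFactory.create` API and the SGLANG platform from the table above; the model string and config below are illustrative only, not an official listing:

```python
# Illustrative sketch only: calling a model that is not a ModelType enum member
# by passing its name as a plain string to `model_type`.
from camel.models import ModelFactory
from camel.types import ModelPlatformType

model = ModelFactory.create(
    model_platform=ModelPlatformType.SGLANG,   # platform from the table above (assumed enum member)
    model_type="Qwen/Qwen2.5-7B-Instruct",     # plain string instead of a ModelType enum member
    model_config_dict={"temperature": 0.0},    # assumed optional config
)
```

The string should be forwarded to the backend as-is, so any model the platform can serve is usable even without a corresponding enum entry.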

Comment on lines +210 to +212
- OpenAI GPT-4O Mini: Fast and efficient.
- SambaNova’s Llama 405B: High capacity but slower response.
- Local Inference: SGLang reached a peak of 220.98 tokens per second, compared to vLLM’s 107.2 tokens per second.
Please keep the original Key Insight part. We didn't aim to compare the speed of different model platforms; the insight highlighted the relationship between model size and inference time.

Wendong-Fan added the documentation and enhancement labels on Feb 9, 2025
Wendong-Fan added this to the Sprint 22 milestone on Feb 9, 2025
2 participants