
[Performance, Hardware] MoE tuning on AMD MI300x GPUs #1554

Merged 1 commit into sgl-project:main on Oct 2, 2024

Conversation

kkHuang-amd (Contributor) commented on Oct 2, 2024:

Motivation

Optimize MoE kernel performance on the AMD MI300X platform.

Modifications

Add a tuned MoE kernel configuration file for the MI300X (an illustrative sketch of the format follows below).

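For context, the fused MoE Triton kernel in SGLang (following the vLLM convention) selects its launch parameters from a per-device JSON table keyed by token batch size. The sketch below is illustrative only: the key names follow that common config convention, but the values, file contents, and selection helper are placeholders, not what this PR actually adds.

```python
# Illustrative sketch of a fused-MoE tuning table: batch size M -> Triton launch
# parameters. All numbers below are placeholders, not the tuned MI300X values.
example_config = {
    "1":    {"BLOCK_SIZE_M": 16,  "BLOCK_SIZE_N": 32,  "BLOCK_SIZE_K": 128,
             "GROUP_SIZE_M": 1,  "num_warps": 4, "num_stages": 2},
    "64":   {"BLOCK_SIZE_M": 32,  "BLOCK_SIZE_N": 64,  "BLOCK_SIZE_K": 128,
             "GROUP_SIZE_M": 8,  "num_warps": 8, "num_stages": 2},
    "4096": {"BLOCK_SIZE_M": 128, "BLOCK_SIZE_N": 128, "BLOCK_SIZE_K": 64,
             "GROUP_SIZE_M": 16, "num_warps": 8, "num_stages": 2},
}

def pick_config(config: dict, m: int) -> dict:
    """Simplified stand-in for the kernel's lookup: use the entry tuned for the
    batch size closest to m."""
    key = min(config, key=lambda k: abs(int(k) - m))
    return config[key]

print(pick_config(example_config, m=100))
```
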
Checklist

  • Format your code according to the Contributor Guide.
  • Add unit tests as outlined in the Contributor Guide.
  • Update documentation as needed, including docstrings or example tutorials.

HaiShaw (Collaborator) left a review comment:

@kkHuang-amd LGTM. Thanks for the great work!

HaiShaw (Collaborator) commented on Oct 2, 2024:

@merrymercy @Ying1123 Would you please have a review?

merrymercy merged commit 8cdc76f into sgl-project:main on Oct 2, 2024
1 of 10 checks passed
merrymercy (Contributor) commented:
@kkHuang-amd Thanks for the contribution. Can you also take a look at the attention kernels in https://github.com/sgl-project/sglang/tree/main/python/sglang/srt/layers/attention/triton_ops?

HaiShaw (Collaborator) commented on Oct 2, 2024:

@merrymercy My preliminary parameter search for attention/triton_ops did not yield much better results than the current settings. I hope @kkHuang-amd finds it otherwise.
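For readers unfamiliar with this kind of parameter search, here is a minimal, generic sketch of the approach: sweep Triton launch parameters over candidate values and keep the fastest. It uses a trivial vector-add kernel as a stand-in; the actual attention kernels in triton_ops have more knobs (tile shapes, num_stages, etc.), and the parameter ranges and timings here are illustrative, not the search described in this thread.

```python
import itertools
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Toy kernel standing in for the real attention kernels.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def bench(block_size: int, num_warps: int, n: int = 1 << 24) -> float:
    x = torch.rand(n, device="cuda")
    y = torch.rand(n, device="cuda")
    out = torch.empty_like(x)
    grid = (triton.cdiv(n, block_size),)
    fn = lambda: add_kernel[grid](x, y, out, n,
                                  BLOCK_SIZE=block_size, num_warps=num_warps)
    return triton.testing.do_bench(fn)  # median runtime in ms

# Sweep candidate launch parameters and report the fastest combination.
results = {}
for block_size, num_warps in itertools.product([512, 1024, 2048], [2, 4, 8]):
    results[(block_size, num_warps)] = bench(block_size, num_warps)

best = min(results, key=results.get)
print("fastest (BLOCK_SIZE, num_warps):", best, "->", results[best], "ms")
```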
