
[FEA] Does it support quantization matrix-mul? #2044

Open
bianxuxuxu opened this issue Jan 17, 2025 · 0 comments

The mixed-dtype GEMM example supports upcasting from a narrower (fewer bits) type to a wider (more bits) type, but I need a quantization GEMM that goes the other way, from a wider type to a narrower one.
For example, for an fp16 x fp8 matmul, the fp16 operand must first be quantized to fp8 (fp16 / quant_scale, where quant_scale is provided), and then an fp8 x fp8 GEMM is performed. How can this be done?
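
For reference, a minimal sketch of the first step being asked about, assuming CUDA's `cuda_fp8.h` conversion types (this is not any particular CUTLASS API, and `quantize_fp16_to_fp8` is a hypothetical helper name): an elementwise kernel that quantizes the fp16 operand to fp8 e4m3 with the provided scale, after which both e4m3 operands could be fed to any fp8 x fp8 GEMM path.

```cuda
// Hypothetical standalone quantization pass (sketch only): for each element,
// out[i] = fp8_e4m3(in[i] / quant_scale). In a fused solution this conversion
// would live in the GEMM prologue/mainloop rather than a separate kernel.
#include <cuda_fp16.h>
#include <cuda_fp8.h>

__global__ void quantize_fp16_to_fp8(const __half* __restrict__ in,
                                     __nv_fp8_e4m3* __restrict__ out,
                                     float quant_scale,
                                     int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Divide by the provided scale, then convert (round-to-nearest) to e4m3.
        float v = __half2float(in[i]) / quant_scale;
        out[i] = __nv_fp8_e4m3(v);
    }
}
```

The open question, presumably, is whether this divide-and-convert step can be fused into the GEMM itself instead of paying for a separate pass over the fp16 operand.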
