
[Kernel] add triton fused moe kernel for gptq/awq #706
Triggered via pull request on January 26, 2025, 02:16
Status: Failure
Total duration: 4m 33s

Workflow: pre-commit.yml (on: pull_request)

Annotations: 2 errors
Ruff (E501): vllm/model_executor/layers/quantization/moe_wna16.py#L174
vllm/model_executor/layers/quantization/moe_wna16.py:174:81: E501 Line too long (87 > 80)
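
E501 flags a line longer than Ruff's configured 80-character limit; the usual fix is to wrap the statement across lines. The log does not show the content of moe_wna16.py line 174, so the snippet below is a hypothetical illustration of the kind of wrap that clears the error, not the actual offending line:

    import torch

    # Hypothetical over-length statement (87 chars on one line):
    # w = torch.nn.Parameter(torch.empty(num_experts, hidden_size, dtype=torch.int32), requires_grad=False)

    # Wrapped form that stays within the 80-character limit and satisfies E501:
    num_experts, hidden_size = 8, 4096
    w = torch.nn.Parameter(
        torch.empty(num_experts, hidden_size, dtype=torch.int32),
        requires_grad=False,
    )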
pre-commit
Process completed with exit code 1.
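
The exit code 1 here is the pre-commit hook propagating the Ruff failure above. Assuming the repository's standard pre-commit setup, the same failure can usually be reproduced locally before pushing with pre-commit run --all-files (or, if the hook id is ruff, scoped with pre-commit run ruff --all-files).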