https://github.com/Dao-AILab/flash-attention/blob/main/flash_attn/modules/mha.py#L461

Hi experts, I see that the flash-attention MHA implementation doesn't support a key_padding_mask. If we want to support it, does flash-attention have an API for this, or how can we achieve it with flash-attention?
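For what it's worth, the usual workaround is to remove the padded tokens before calling the kernel and scatter the outputs back afterwards: flash-attention's varlen kernels take `cu_seqlens`/`max_seqlen` instead of a `key_padding_mask`, and `flash_attn.bert_padding` ships `unpad_input`/`pad_input` helpers for the conversion. Below is a minimal sketch of that pattern; the shapes are illustrative, and the `[:4]` slice is a hedge because `unpad_input`'s return arity has changed across flash-attn releases.

```python
import torch
from flash_attn import flash_attn_varlen_qkvpacked_func
from flash_attn.bert_padding import unpad_input, pad_input

batch, seqlen, nheads, headdim = 2, 128, 8, 64
dtype, device = torch.float16, "cuda"

# Packed qkv in the padded (batch, seqlen, 3, nheads, headdim) layout, plus a
# boolean key_padding_mask: True = real token, False = padding.
qkv = torch.randn(batch, seqlen, 3, nheads, headdim, dtype=dtype, device=device)
key_padding_mask = torch.zeros(batch, seqlen, dtype=torch.bool, device=device)
key_padding_mask[0, :100] = True
key_padding_mask[1, :64] = True

# Drop padded positions. cu_seqlens holds the cumulative sequence lengths the
# varlen kernel uses to locate sequence boundaries. Note: newer flash-attn
# releases return a 5th value from unpad_input, hence the [:4].
qkv_unpad, indices, cu_seqlens, max_seqlen = unpad_input(
    qkv.reshape(batch, seqlen, -1), key_padding_mask
)[:4]
qkv_unpad = qkv_unpad.reshape(-1, 3, nheads, headdim)

# Attention over the packed tokens only; padding never enters the kernel.
out_unpad = flash_attn_varlen_qkvpacked_func(
    qkv_unpad, cu_seqlens, max_seqlen, dropout_p=0.0, causal=False
)  # (total_tokens, nheads, headdim)

# Scatter results back to the padded (batch, seqlen, nheads, headdim) layout.
out = pad_input(out_unpad.reshape(-1, nheads * headdim), indices, batch, seqlen)
out = out.reshape(batch, seqlen, nheads, headdim)
```

As far as I can tell from its docstring, the MHA module at the line linked above follows the same convention: `key_padding_mask` is only honored on the non-flash path, while the flash path expects pre-unpadded input with `cu_seqlens`/`max_seqlen`, so unpadding outside the module as above appears to be the intended route.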