
Custom matmul attention #213

Closed · wants to merge 5 commits

Conversation

@ngc92 (Contributor) commented Apr 22, 2024

My own implementation of (lower-triangular) matrix multiplication.
It is not as efficient per element as cuBLAS, but since the causal mask means we only need to calculate half as many numbers, it is a net win overall.

We cannot get rid of the permute yet, because v is still needed later in the kernel. I will look into replacing the second matmul too, so we can capitalize on not needing permutes (this might require changes to the backward pass as well).
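The savings come from computing only the lower triangle of the attention scores q @ kᵀ. A minimal CPU sketch in plain C (illustrative only; the PR implements this as a CUDA kernel, and the function name here is hypothetical):

```c
#include <stddef.h>

// Illustrative CPU analogue of a causal (lower-triangular) QK^T matmul.
// q, k: (T, hs) row-major; att: (T, T) row-major.
// Row t only attends to positions t2 <= t, so we skip the upper
// triangle entirely -- roughly half the multiply-adds of a full
// T x T matmul, which is where the net win over a dense call comes from.
void causal_qk_matmul(const float *q, const float *k, float *att,
                      int T, int hs) {
    for (int t = 0; t < T; t++) {
        for (int t2 = 0; t2 <= t; t2++) {   // only the lower triangle
            float dot = 0.0f;
            for (int i = 0; i < hs; i++) {
                dot += q[t * hs + i] * k[t2 * hs + i];
            }
            att[t * T + t2] = dot;
        }
        // entries with t2 > t are never written; the causal mask
        // would have zeroed them out anyway
    }
}
```

A dense library call would compute all T*T entries and then mask; skipping the upper triangle trades per-element efficiency for doing only the work the causal attention actually needs.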

@ngc92 force-pushed the custom-matmul-attention branch 2 times, most recently from 156fe48 to 3fb586c on April 22, 2024 at 11:45
@ngc92 force-pushed the custom-matmul-attention branch from 64686b6 to b43b84b on April 22, 2024 at 17:27
@ngc92 (Contributor, Author) commented Apr 27, 2024

This was never intended to actually be merged; it was just meant to demonstrate what the custom matmul would look like. Now that it is full of conflicts, let's keep the number of open PRs at a reasonable level.

@ngc92 ngc92 closed this Apr 27, 2024