Is it possible for your team to implement xformers.ops.memory_efficient_attention? #24

Open
radna0 opened this issue Jul 11, 2024 · 5 comments

radna0 commented Jul 11, 2024

No description provided.

@iclementine (Collaborator)

Thank you. I think the main difference from our implementation of flash_attn is that it takes an extra input, the attention bias. It may take a while for us to add this feature.
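
For reference, a minimal sketch (shapes and names are illustrative, not from this repo) of the xformers call being requested; the extra `attn_bias` tensor is added to the attention scores before the softmax:

```python
import torch
import xformers.ops as xops

# Illustrative shapes: batch, sequence length, heads, head dim.
B, M, H, K = 2, 1024, 8, 64
q = torch.randn(B, M, H, K, device="cuda", dtype=torch.float16)
k = torch.randn(B, M, H, K, device="cuda", dtype=torch.float16)
v = torch.randn(B, M, H, K, device="cuda", dtype=torch.float16)

# Additive attention bias, applied to the [B, H, M, M] score matrix.
bias = torch.randn(B, H, M, M, device="cuda", dtype=torch.float16)

out = xops.memory_efficient_attention(q, k, v, attn_bias=bias)  # [B, M, H, K]
```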

@iclementine iclementine self-assigned this Jul 15, 2024

radna0 commented Jul 15, 2024

Thank you @iclementine! Will it take a long time for you to implement this? I'm trying to run this on AMD GPUs and have had some success with the HIP backend of Triton. Do you think it's possible to run on both Nvidia and AMD? What about the performance difference?

@iclementine (Collaborator)

I don't have an AMD GPU. There may be some issues running it with Triton's other backends (some configs exceeding resource limits, some passes not supported, etc.). If you have modifications that make it run on AMD GPUs, please let us know. Thank you.

It would take me about 1~2 weeks to implement this, given my current plans.
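
As a hedged sketch of the kind of modification that helps on other backends (the config values and the `IS_HIP` flag here are hypothetical, not from this repo's kernels): one can detect a ROCm build of PyTorch and prune the larger autotune configs that tend to exceed shared-memory or register limits.

```python
import torch
import triton

# torch.version.hip is None on CUDA builds and set on ROCm builds.
IS_HIP = torch.version.hip is not None

# Keep a conservative config everywhere; only offer the larger tiles on CUDA.
configs = [
    triton.Config({"BLOCK_M": 64, "BLOCK_N": 64}, num_warps=4, num_stages=2),
]
if not IS_HIP:
    configs.append(
        triton.Config({"BLOCK_M": 128, "BLOCK_N": 128}, num_warps=8, num_stages=3)
    )
```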


radna0 commented Aug 4, 2024

Hi @iclementine. Were you able to implement it? I will check on the configuration for AMD GPUs.

@iclementine (Collaborator)

Sorry about that. I have been working on other projects and will be occupied for a while.
