[Bug]: Prefix Caching raises error #273
Comments
I made a pull request a few weeks ago, before flat PA was ported to habana_main. I am currently rebasing my PR onto the current habana_main. You will be able to use prefix caching once this PR is merged. Please take a look if you are interested.
You are a godsend. The code works well. However, I have noticed there's an error with the cache when lazy mode is disabled.
Output:
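For reference, a minimal sketch of the configuration described above, since the script that produced the (missing) output was not preserved in this thread. On Gaudi, lazy mode is controlled via the `PT_HPU_LAZY_MODE` environment variable; the model name here is illustrative:

```python
import os

# PT_HPU_LAZY_MODE=0 disables HPU lazy mode; it must be set before
# the Habana PyTorch bridge is initialized (i.e., before importing vLLM).
os.environ["PT_HPU_LAZY_MODE"] = "0"

from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-2-7b-hf",  # illustrative model choice
    enable_prefix_caching=True,        # the feature under discussion
)
outputs = llm.generate(["Hello, world"], SamplingParams(max_tokens=16))
```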
Thank you for letting me know; I haven't checked the case where lazy mode is disabled yet. I'll check whether the error persists after the rebase and fix it if possible. Also, please note that my PR may not work without enforce_eager=True, as I haven't checked for HPU Graph compatibility either.
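A hedged sketch of the suggested workaround: `enforce_eager` and `enable_prefix_caching` are standard vLLM engine arguments, and the model name is again illustrative.

```python
from vllm import LLM

# enforce_eager=True skips graph capture, matching the commenter's
# caveat that HPU Graph compatibility has not been verified.
llm = LLM(
    model="meta-llama/Llama-2-7b-hf",  # illustrative
    enable_prefix_caching=True,
    enforce_eager=True,
)
```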
Fixed in #162
Your current environment
🐛 Describe the bug
vLLM's generate raises an AttributeError if I enable prefix caching. Is it not supported?
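A minimal repro sketch, assuming a standard vLLM setup (the original code and model were not preserved in this thread, so the model name and prompt are illustrative):

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-2-7b-hf",  # illustrative; original model unknown
    enable_prefix_caching=True,        # the setting that triggers the reported error
)
out = llm.generate(["What is the capital of France?"],
                   SamplingParams(max_tokens=32))
print(out[0].outputs[0].text)
```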
Output: