Context Shifting #5265
Comments
Everything depends on https://github.com/abetlen/llama-cpp-python. Hm... I see there are some relevant functions: https://github.com/abetlen/llama-cpp-python/blob/359ae736432cae5cdba50b011839d277a4b1ec8d/llama_cpp/llama.py#L450
I feel like this is working, or at least already in there? If you hit "regenerate," the processing time is a tiny fraction of the previous generation's, which makes me think something about this is already working and we just need to figure out how to leverage it better.
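My guess, sketched below with purely illustrative names (this is not llama-cpp-python's actual API): the fast "regenerate" comes from longest-common-prefix reuse of the cached prompt, which is a different mechanism from context shifting and breaks down exactly when the context overflows.

```python
# Illustrative sketch only: prefix reuse explains fast regenerates,
# but not sessions that have hit the context limit.
def reusable_prefix(cached_tokens: list[int], new_tokens: list[int]) -> int:
    """Count leading tokens shared with the cache; those can skip evaluation."""
    n = 0
    for a, b in zip(cached_tokens, new_tokens):
        if a != b:
            break
        n += 1
    return n

# "Regenerate" resends an identical prompt, so the shared prefix is nearly
# the whole context and almost nothing is reprocessed. Once the context is
# full and the oldest turns are trimmed from the *front*, the shared prefix
# collapses to the system prompt, and everything after it is re-evaluated;
# KV-cache shifting is what would avoid that re-evaluation.
```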
It shouldn't be automatic, since there are potential side effects, and I don't see a setting for it yet. It was on a branch. Any update?
Implemented here: #5669
#4588 was closed as stale. As soon as you hit context limits, being able to toggle this on would be very nice. Right now, when doing longer sessions, I end up switching to KoboldCpp.
Below are the previous ticket's contents:
About 10 days ago, KoboldCpp added a feature called Context Shifting, which is supposed to greatly reduce reprocessing. Here is their official description of the feature:
NEW FEATURE: Context Shifting (A.K.A. EvenSmarterContext) - This feature utilizes KV cache shifting to automatically remove old tokens from context and add new ones without requiring any reprocessing. So long as you use no memory/fixed memory and don't use world info, you should be able to avoid almost all reprocessing between consecutive generations even at max context. This does not consume any additional context space, making it superior to SmartContext.
Any chance this gets added to Ooba as well?
Additional Context
Reddit thread: https://www.reddit.com/r/LocalLLaMA/comments/17ni4hm/koboldcpp_v148_context_shifting_massively_reduced/
llama.cpp pull: ggerganov/llama.cpp#3228
kobold.cpp 1.48.1 release: https://github.com/LostRuins/koboldcpp/releases/tag/v1.48.1
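For reference, here is a minimal sketch of the shifting step itself, modeled on how llama.cpp's main example uses the KV-cache API from the pull request above. It assumes llama-cpp-python's low-level bindings expose llama_kv_cache_seq_rm and llama_kv_cache_seq_shift (the latter was renamed llama_kv_cache_seq_add in later llama.cpp releases) and that you have the raw llama_context pointer; exact names vary by version, so treat this as a sketch, not a drop-in implementation.

```python
# Hedged sketch of the context-shifting step from ggerganov/llama.cpp#3228,
# expressed against llama-cpp-python's low-level bindings. Assumes the
# bindings expose llama_kv_cache_seq_rm / llama_kv_cache_seq_shift;
# verify the names against your installed version.
import llama_cpp

def shift_context(ctx, n_past: int, n_keep: int) -> int:
    """Free room in a full KV cache without reprocessing.

    ctx    -- raw llama_context pointer
    n_past -- number of tokens currently in the cache
    n_keep -- tokens to pin at the front (e.g. the system prompt)
    Returns the new n_past after discarding half of the evictable tokens.
    """
    seq_id = 0  # single-sequence chat
    n_discard = (n_past - n_keep) // 2  # heuristic used by llama.cpp's main example

    # Evict the oldest movable tokens: positions [n_keep, n_keep + n_discard)
    llama_cpp.llama_kv_cache_seq_rm(ctx, seq_id, n_keep, n_keep + n_discard)

    # Slide the surviving tail left so positions stay contiguous; llama.cpp
    # re-applies the positional encoding internally, which is what makes
    # this far cheaper than re-evaluating the tokens.
    llama_cpp.llama_kv_cache_seq_shift(
        ctx, seq_id, n_keep + n_discard, n_past, -n_discard
    )
    return n_past - n_discard
```

After the shift, decoding continues from position n_past - n_discard, so only the newly appended tokens need to be evaluated.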