
Context Shifting #5265

Closed
aarongerber opened this issue Jan 14, 2024 · 4 comments
Labels: enhancement (New feature or request)

@aarongerber

#4588 was closed as stale. Being able to toggle this on once you hit the context limit would be very nice; right now, during longer sessions, I end up switching to KoboldCpp.

Below is the content of the previous ticket:
About 10 days ago, KoboldCpp added a feature called Context Shifting, which is supposed to greatly reduce reprocessing. Here is their official description of the feature:

NEW FEATURE: Context Shifting (A.K.A. EvenSmarterContext) - This feature utilizes KV cache shifting to automatically remove old tokens from context and add new ones without requiring any reprocessing. So long as you use no memory/fixed memory and don't use world info, you should be able to avoid almost all reprocessing between consecutive generations even at max context. This does not consume any additional context space, making it superior to SmartContext.

Any chance this gets added to Ooba as well?

Additional Context

Reddit thread: https://www.reddit.com/r/LocalLLaMA/comments/17ni4hm/koboldcpp_v148_context_shifting_massively_reduced/
llama.cpp pull: ggerganov/llama.cpp#3228
KoboldCpp 1.48.1 release: https://github.com/LostRuins/koboldcpp/releases/tag/v1.48.1
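To make the mechanism concrete, here is a small conceptual sketch of what KV-cache shifting buys at the token level. This is not the llama.cpp or KoboldCpp implementation; the function names and structure are purely illustrative:

```python
def shared_prefix_len(cached, new):
    """Length of the longest common prefix of two token lists."""
    n = 0
    for a, b in zip(cached, new):
        if a != b:
            break
        n += 1
    return n


def tokens_to_evaluate(cached, new):
    """Return the tokens that still need a forward pass, given an
    already-evaluated context `cached` and the next prompt `new`.

    Plain prefix caching only helps while `new` extends `cached`.
    Context shifting additionally covers the case where the oldest
    tokens were trimmed from the front: the KV entries of the
    surviving tokens are moved to their new positions (llama.cpp
    re-rotates their RoPE embeddings) instead of being recomputed.
    """
    k = shared_prefix_len(cached, new)
    if k == len(cached):
        return new[k:]  # pure append: evaluate only the new tail

    # Front truncation: look for the survivors of `cached` at the
    # start of `new`.
    for drop in range(1, len(cached)):
        kept = len(cached) - drop
        if cached[drop:] == new[:kept]:
            # A real implementation would shift the cache entries for
            # positions [drop, len(cached)) back by `drop` here, then
            # evaluate only new[kept:].
            return new[kept:]

    return new  # no usable overlap: full reprocessing
```

This also explains the "no memory/fixed memory, no world info" caveat in the quote above: those features inject text into the middle of the prompt, so the new prompt is neither an extension nor a front-truncation of the cached one, and the overlap search fails.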

aarongerber added the enhancement (New feature or request) label on Jan 14, 2024
@berkut1
Contributor

berkut1 commented Jan 15, 2024

@zaqhack

zaqhack commented Jan 17, 2024

I feel like this is at least partially working already. If you hit "regenerate," the processing time is a tiny fraction of the previous generation's, which makes me think some of this is already in place and we just need to figure out how to leverage it better.
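The regenerate speedup is most likely plain prefix caching rather than context shifting: on regenerate the prompt is identical to what was already evaluated, so there is almost nothing left to process. Context shifting targets the different case where old messages scroll off the front once the context is full. In terms of the hypothetical sketch above:

```python
cache = [1, 2, 3, 4, 5, 6]  # tokens already evaluated

# Regenerate: the same prompt is resent -> prefix cache hit, nothing to do.
assert tokens_to_evaluate(cache, [1, 2, 3, 4, 5, 6]) == []

# Normal next turn: only the appended tokens are evaluated.
assert tokens_to_evaluate(cache, [1, 2, 3, 4, 5, 6, 7, 8]) == [7, 8]

# Context limit hit: the two oldest tokens are trimmed, new ones appended.
# Prefix caching alone would reprocess everything; with shifting, only
# [7, 8] need a forward pass.
assert tokens_to_evaluate(cache, [3, 4, 5, 6, 7, 8]) == [7, 8]
```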

@aarongerber
Author

It shouldn't be automatic since there are potential side effects, and I don't see a setting for it yet. It was on a branch. Any update?

@oobabooga
Owner

Implemented here: #5669
