implement 3.0 api #91
Comments
Update: we need to finish the 2.x commands, the bitops commands, and the WATCH/UNWATCH commands.
Let's start with WATCH & UNWATCH. How Redis does it: the whole management is really simple.
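For context, here is a minimal sketch of Redis-style WATCH bookkeeping, assuming the usual two-sided scheme: the client keeps a watched-key list, each db keeps a key-to-watchers map, and any write marks the watchers dirty. All names here are illustrative, not actual Redis or Dragonfly identifiers.

```python
class Client:
    def __init__(self):
        self.watched_keys = []   # keys this client is watching
        self.dirty_cas = False   # set when any watched key is modified

class Db:
    def __init__(self):
        self.data = {}
        self.watched = {}        # key -> set of watching clients

    def watch(self, client, key):
        self.watched.setdefault(key, set()).add(client)
        client.watched_keys.append(key)

    def unwatch(self, client):
        for key in client.watched_keys:
            self.watched.get(key, set()).discard(client)
        client.watched_keys.clear()

    def set(self, key, value):
        self.data[key] = value
        # "touch" the key: invalidate every client watching it
        for c in self.watched.get(key, ()):
            c.dirty_cas = True

    def exec_allowed(self, client):
        # EXEC aborts if any watched key changed; it always clears the watch list
        ok = not client.dirty_cas
        self.unwatch(client)
        client.dirty_cas = False
        return ok
```

A write to a watched key flips the client's dirty flag, so the later EXEC is a single boolean check rather than a key-by-key scan.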
Some more thoughts
What do you think?
I do think it's a minor feature. You can see what we currently store inside a db. The only complexity I see is storing connections directly inside a DbTable. The simpler solution would be that WATCH indeed registers some keys for tracking in the appropriate DbTables, and then during EXEC we do an additional transactional "hop" that queries the DbTables about the state of those keys.
Thinking about it a bit more: you will have to clean up the DbTable's watched table upon connection closure, i.e. there will be some interaction between a connection's lifetime and the DbTable's state. So you can still store a pointer to connections inside a DbTable, but you can only reference thread-safe structures there or, alternatively, just use it as a descriptor without dereferencing it.
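A sketch of that shape, assuming the DbTable keys its watched table by an opaque connection descriptor (a plain int here) so nothing is ever dereferenced, EXEC's extra "hop" queries and unregisters, and connection closure purges the descriptor. Identifiers are hypothetical, not Dragonfly source names.

```python
class DbTable:
    def __init__(self):
        self.data = {}
        self.watched = {}     # key -> set of connection descriptors (ints)
        self.failed = set()   # descriptors whose watched keys were modified

    def watch(self, conn_id, key):
        self.watched.setdefault(key, set()).add(conn_id)

    def write(self, key, value):
        self.data[key] = value
        # mark every descriptor watching this key as failed
        self.failed |= self.watched.get(key, set())

    def exec_hop(self, conn_id):
        # the extra transactional "hop": query table state, then unregister
        ok = conn_id not in self.failed
        self.on_close(conn_id)
        self.failed.discard(conn_id)
        return ok

    def on_close(self, conn_id):
        # connection-lifetime interaction: purge the descriptor on closure
        for conns in self.watched.values():
            conns.discard(conn_id)
```

Because the table only ever stores and compares descriptors, a connection that dies mid-watch costs one cleanup pass and never a dangling pointer.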
The purpose of the DbTable's watched table is that EXEC doesn't have to query its own keys. However, it still has to unregister all of its keys (it does so on the first watch fail). What's more, EXEC directly checks the expirations of all of them 🤔 So in the end it has to query all keys at least once. It seems like with a few watched keys, the extra table doesn't buy much.
Sure, I thought of adding some atomic flag or making the fast-fail path cheap.
Let's forget about fast-fail and the complexity of going over all the keys for now. Assume that the watched list is short. Are you saying that we can store all the info in the connection and be done with it?
There is obviously no way for it
Yes. If we use simple version codes, we might miss a corner case. Besides, Redis doesn't even bother to use a map for the connection's data, and watch checks are O(k). Or is there some issue I'm not seeing?
So thread T1 holds a connection C that has a watchmap with a key K.
Forget multi-threading: how will a single-threaded system know to search for C?
There is no double-sided dependency. The key just cares for itself, but it stores a timestamp with its own last update time (we'll have to add it at some point to support OBJECT IDLETIME). WATCH just stores the timestamp TK of some key K at the time of the call in C, meaning "I still expect K to have timestamp TK when I start EXEC". If TK changed since WATCH, then the key was modified. So the values in C don't change; they serve as a kind of snapshot to compare against on EXEC.
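The timestamp-snapshot scheme above can be sketched in a few lines: every key stores its last-update time, WATCH copies that value into the connection, and EXEC compares in O(k) with no back-pointer from key to connection. A monotonic counter stands in for a real clock; all names are illustrative.

```python
import itertools

_clock = itertools.count(1)   # stand-in for a monotonic update timestamp

class Store:
    def __init__(self):
        self.vals = {}
        self.stamp = {}       # key -> "timestamp" of last modification

    def set(self, key, value):
        self.vals[key] = value
        self.stamp[key] = next(_clock)

class Connection:
    def __init__(self):
        self.snapshot = {}    # key -> stamp observed at WATCH time

    def watch(self, store, key):
        # "I still expect this key to have this stamp when I start EXEC"
        self.snapshot[key] = store.stamp.get(key, 0)

    def exec_ok(self, store):
        # do the keys still carry the stamps we saw at WATCH time?
        ok = all(store.stamp.get(k, 0) == s for k, s in self.snapshot.items())
        self.snapshot.clear()
        return ok
```

Nothing in the store ever has to know that C exists, which is exactly what resolves the "how does the system find C" question above.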
So you suggest storing the last-modified timestamp for every entry in DF inside the DbTable? Or just for the watched keys inside the DbTable?
Yes, for all. We can't store them selectively without query overhead.
So my problem with this is that for a pretty minor feature we need to add a space overhead of 8 bytes per key. Therefore, I think trading implementation complexity for reducing the memory overhead is well worth it. Regarding the
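The space overhead in question is easy to put in numbers: an extra 8-byte timestamp per entry scales linearly with the keyspace (the key count below is a made-up example, not a Dragonfly figure).

```python
def watch_overhead_bytes(num_keys, stamp_size=8):
    # one fixed-size last-modified timestamp per entry, nothing else changes
    return num_keys * stamp_size

print(watch_overhead_bytes(100_000_000))  # 800000000 -> ~800 MB for 100M keys
```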
Oh, okay. That answers all of my questions above. If Dragonfly doesn't intend to support IDLETIME, then adding memory overhead just for WATCH isn't worth it.
Closing as done. Will open a dedicated task for TOUCH.
If we look at commands.json:

```shell
jq -c 'to_entries[] | select(.value.since | test("^3\\..*"))' commands.json | grep -v '"cluster"' | grep -v '"geo"' | less
```

we will see that besides some debugging/configuration/bitops commands, only the `TOUCH` command needs to be implemented. Dragonfly implements its cache a bit differently from Redis, but it still has the notion of "upgrading" an entry to a higher priority. It could be that in dragonfly, `touch` and `exists` are almost identical; we need to see if there are any differences in the output or actual behavior.