ft: ARSN-388 implement GapSet (caching of listing gaps) #2211
Conversation
Hello jonathan-gramain, my role is to assist you with the merge of this pull request. Status report is not available.

Request integration branches: waiting for integration branch creation to be requested by the user. To request integration branches, please comment on this pull request with the following command: […] Alternatively, the […]
Force-pushed from b478488 to 0066a29 (Compare)
lib/algos/cache/GapSet.ts (outdated)

    weight: 0,
    };
    // there may be an existing gap starting with 'lastKey': delete it first
    this._gaps.delete(gap);
Don't we risk deleting some data from the structure here? What if the new gap inserted actually spans multiple known chained gaps? e.g.
NewGap: +1234+
GapSet: +123+456+789+
We would end up deleting the second gap in the GapSet and then go on to lose keys 5 and 6.
You're right, I will add a lookup on the chained gap, and if it exists, return it instead of deleting it (it may still be merged with following gaps by setGap).
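A minimal sketch of that idea, with hypothetical names and a plain Map instead of the ordered structure GapSet actually uses: look up a "chained" gap starting at the new gap's lastKey and return it for later merging, instead of deleting it.

```typescript
// Hypothetical sketch of the "lookup instead of delete" idea;
// not the actual GapSet code from this PR.
type Gap = { firstKey: string; lastKey: string; weight: number };

function lookupOrCreateGapSketch(
    gapsByFirstKey: Map<string, Gap>,
    firstKey: string,
    lastKey: string,
): Gap {
    // If a chained gap already starts at 'lastKey', return it instead of
    // deleting it: setGap() can still extend it and merge it with the gaps
    // that follow, so no covered keys are lost.
    const chained = gapsByFirstKey.get(lastKey);
    if (chained) {
        return chained;
    }
    const gap: Gap = { firstKey, lastKey, weight: 0 };
    gapsByFirstKey.set(firstKey, gap);
    return gap;
}
```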
Thanks for the feedback @fredmnl !
I updated the PR with new commits that should address your comments.
One question on parallelism: I'm still new to Node, so maybe all potential race conditions are handled by the single-threaded nature of Node. Could we have the following situation, for example: […]

Basically, a concurrent list and PUT operation will not update the data store in the right order. Is this possible?
Yes, you're totally correct, there is a race condition that we need to address in some way. I started thinking about this yesterday and about potential solutions to it.

Updating the cache after every listed key would not be enough, as we can always have updates on or between listed keys (i.e. in the inclusive range between two listed keys).

My current idea is that the […]

Another idea can be to track all listings in progress, and push invalidation to each listing in progress when […]

The funny thing is originally I thought of an API for […]
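Purely for illustration, one possible shape of the "track all listings in progress and push invalidation to them" idea; every name below is hypothetical, and this is not the approach ultimately taken in #2213.

```typescript
// Illustrative sketch only; names are hypothetical and this is not the
// solution adopted in PR #2213.
type ListingHandle = {
    invalidatedKeys: Set<string>;
};

class ListingTracker {
    private activeListings = new Set<ListingHandle>();

    startListing(): ListingHandle {
        const handle: ListingHandle = { invalidatedKeys: new Set() };
        this.activeListings.add(handle);
        return handle;
    }

    endListing(handle: ListingHandle): void {
        this.activeListings.delete(handle);
    }

    // Called on every concurrent PUT/DELETE: each listing still in progress
    // is told that this key may break a gap it is about to cache.
    onKeyUpdate(key: string): void {
        for (const handle of this.activeListings) {
            handle.invalidatedKeys.add(key);
        }
    }
}
```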
So I came up with another approach to ensure atomicity; this PR should explain it: #2213
Force-pushed from 720440e to 94e9ed6 (Compare)
    mergedWeightSum += weightToMerge;
    }
    // merge 'nextGap' into 'curGap'
    curGap.lastKey = nextGap.lastKey;
One thing that's still bugging me is that we are checking in _lookupOrCreateGap if there is a gap straddling newGap.firstKey, and we are handling it depending on the weights. Yet, when we come to the gap that may be straddling newGap.lastKey, we don't perform the same check. If I'm correct, we could be extending a gap to an arbitrarily large weight if we are only extending it toward lower values (a workload that would delete objects with ever-decreasing keys).

If all of this is correct, the logic could be summarized as:

- look for a gap straddling newGap.firstKey and make a decision as to whether we want to extend it, or split;
- look for a gap straddling newGap.lastKey and make a decision as to whether we want to extend it, or split (which may depend on the previous decision);
- with the boundaries handled, remove all of the gaps that are fully contained within newGap and insert newGap.

What do you think?
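To make the last step concrete, here is a minimal sketch assuming a plain array of gaps sorted by firstKey rather than the ordered structure GapSet actually uses; the boundary handling of the first two steps is only outlined in comments.

```typescript
// Hypothetical sketch of step 3 of the logic proposed above;
// not the actual GapSet implementation.
type Gap = { firstKey: string; lastKey: string; weight: number };

function insertNewGap(gaps: Gap[], newGap: Gap): Gap[] {
    // Steps 1 and 2 (not shown): handle a gap straddling newGap.firstKey,
    // then one straddling newGap.lastKey, deciding for each whether to
    // extend or split based on the resulting weight.
    //
    // Step 3: drop every gap fully contained within newGap, then insert
    // newGap itself, keeping the array sorted by firstKey.
    const kept = gaps.filter(
        g => !(g.firstKey >= newGap.firstKey && g.lastKey <= newGap.lastKey));
    kept.push(newGap);
    kept.sort((a, b) =>
        a.firstKey < b.firstKey ? -1 : a.firstKey > b.firstKey ? 1 : 0);
    return kept;
}
```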
There is indeed the issue that you mentioned, it's probably what I observed too during my testing.
I am reworking this and should be able to finish on Monday, will let you know when it's done! I think the logic that you describe is good, there is some complexity in checking all cases and setting the weights accordingly but it should be achievable.
I reworked this in a new commit
Force-pushed from 851d2bf to 3fdcc47 (Compare)
Force-pushed from bf36885 to d7ba8ea (Compare)
Force-pushed from d7ba8ea to 6d6f186 (Compare)
/approve
Conflict

A conflict has been raised during the creation of integration branch w/8.1/feature/ARSN-388-gapSet. I have not created the integration branch.

Here are the steps to resolve this conflict:

    $ git fetch
    $ git checkout -B w/8.1/feature/ARSN-388-gapSet origin/development/8.1
    $ git merge origin/feature/ARSN-388-gapSet
    $ # <intense conflict resolution>
    $ git commit
    $ git push -u origin w/8.1/feature/ARSN-388-gapSet

The following options are set: approve
In the queue

The changeset has received all authorizations and has been added to the queue. The changeset will be merged in: […]

The following branches will NOT be impacted: […]

There is no action required on your side. You will be notified here once the changeset has been merged.

IMPORTANT: Please do not attempt to modify this pull request.

If you need this pull request to be removed from the queue, please contact a […]

The following options are set: approve
I have successfully merged the changeset of this pull request.

The following branches have NOT changed: […]

Please check the status of the associated issue ARSN-388. Goodbye jonathan-gramain.
The GapSet class is intended for caching listing "gaps", which are contiguous series of current delete markers in buckets, although the semantics can allow for other uses in the future.

The end goal is to increase the performance of listings on V0 buckets when a lot of delete markers are present, as a temporary solution until buckets are migrated to V1 format.

This data structure is intended to be used by a GapCache instance, which implements specific caching semantics (to ensure consistency with respect to DB updates, for example).
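As a rough usage sketch: only setGap is named in this conversation, so the lookup method name, the signatures, and the shapes below are assumptions rather than the actual Arsenal API. The idea is that a listing on a V0 bucket records the runs of delete markers it walks through, and later listings can skip over them.

```typescript
// Hypothetical usage sketch; method names other than setGap, and all
// signatures, are assumptions rather than the actual Arsenal API.
interface CachedGap { firstKey: string; lastKey: string; weight: number }

interface GapSetLike {
    // Record a contiguous run of delete markers observed during a listing;
    // 'weight' approximates the number of keys covered by the gap.
    setGap(firstKey: string, lastKey: string, weight: number): CachedGap;
    // Return a cached gap containing 'key', if any.
    lookupGap(key: string): CachedGap | null;
}

// During a listing, jump past a cached gap instead of iterating over every
// delete marker it covers.
function nextKeyToList(gaps: GapSetLike, currentKey: string): string {
    const gap = gaps.lookupGap(currentKey);
    return gap ? gap.lastKey : currentKey;
}
```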