This repository has been archived by the owner on Nov 14, 2024. It is now read-only.

[Named min timestamp leases] Acquire single lease per transaction-shaped batch #7385

Merged 1 commit into develop on Oct 21, 2024

Conversation


@ergo14 ergo14 commented Oct 21, 2024

General

Before this PR:

After this PR:
API changes for timestamp leases.
==COMMIT_MSG==
==COMMIT_MSG==

Priority:
P1
Concerns / possible downsides (what feedback would you like?):
See PR comments
Is documentation needed?:

Compatibility

Does this PR create any API breaks (e.g. at the Java or HTTP layers) - if so, do we have compatibility?:
Yes, though the API is currently unused, so there is no compatibility impact.
Does this PR change the persisted format of any data - if so, do we have forward and backward compatibility?:
No.
The code in this PR may be part of a blue-green deploy. Can upgrades from previous versions safely coexist? (Consider restarts of blue or green nodes.):
Yes
Does this PR rely on statements being true about other products at a deployment - if so, do we have correct product dependencies on these products (or other ways of verifying that these statements are true)?:
No
Does this PR need a schema migration?
No

Testing and Correctness

What, if any, assumptions are made about the current state of the world? If they change over time, how will we find out?:
N/A
What was existing testing like? What have you done to improve it?:
Existing tests were updated to cover the new behavior.
If this PR contains complex concurrent or asynchronous code, is it correct? The onus is on the PR writer to demonstrate this.:
N/A
If this PR involves acquiring locks or other shared resources, how do we ensure that these are always released?:
See PR comments

Execution

How would I tell this PR works in production? (Metrics, logs, etc.):
No-op (the API is unused in production).
Has the safety of all log arguments been decided correctly?:
Yes
Will this change significantly affect our spending on metrics or logs?:
No
How would I tell that this PR does not work in production? (monitors, etc.):
No-op (the API is unused in production).
If this PR does not work as expected, how do I fix that state? Would rollback be straightforward?:
Rollback; it would be straightforward.
If the above plan is more complex than “recall and rollback”, please tag the support PoC here (if it is the end of the week, tag both the current and next PoC):

Scale

Would this PR be expected to pose a risk at scale? Think of the shopping product at our largest stack.:
No
Would this PR be expected to perform a large number of database calls, and/or expensive database calls (e.g., row range scans, concurrent CAS)?:
No
Would this PR ever, with time and scale, become the wrong thing to do - and if so, how would we know that we need to do something differently?:
No

Development Process

Where should we start reviewing?:

If this PR is in excess of 500 lines excluding versions lock-files, why does it not make sense to split it?:

Please tag any other people who should be aware of this PR:
@jeremyk-91
@raiju

TimestampLeaseName timestampName, UUID requestId, long timestamp) {
AsyncLock lock = lockManager.getNamedTimestampLock(timestampName, timestamp);
return lockAcquirer.acquireLocks(requestId, OrderedLocks.fromSingleLock(lock), TimeLimit.zero());
UUID requestId, Set<TimestampLeaseName> timestampNames, long timestamp) {
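The hunk above acquires one named-timestamp lock per batch, with a zero wait time. As a rough analogue (hypothetical names, not the AtlasDB API), the idea can be sketched as one lock keyed by the (lease name, timestamp) pair, acquired with try-lock semantics so the caller never blocks:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch, not the AtlasDB implementation: one lock per
// (leaseName, timestamp) pair, acquired with zero wait, mirroring
// TimeLimit.zero() in the hunk above.
final class NamedTimestampLocks {
    private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    // Returns the lock if it was acquired immediately, empty otherwise.
    // The caller is responsible for unlocking the returned lock.
    Optional<ReentrantLock> tryAcquire(String leaseName, long timestamp) {
        ReentrantLock lock = locks.computeIfAbsent(leaseName + "/" + timestamp, key -> new ReentrantLock());
        return lock.tryLock() ? Optional.of(lock) : Optional.empty();
    }
}
```

Because the real lock descriptor incorporates a fresh timestamp, distinct requests map to distinct keys, which is why zero-wait acquisition is expected to succeed (see the contention discussion later in this review).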
ergo14 (author):

This is a set because the ordering doesn't matter.

In HeldLocks, the list is just used to be iterated on in lock/unlock operations. And, the lock descriptors are returned as a set.

Contributor:

Ah, looks like we thought about the same thing. I agree it doesn't matter, but it definitely reads weird. So maybe to allay these concerns, add an explicit comment OR handle the ordering.

@ergo14 ergo14 marked this pull request as ready for review October 21, 2024 07:58
@ergo14 ergo14 changed the title Acquire single timestamp lease per transaction-shaped batch [Named min timestamp Leases] Acquire single lease per transaction-shaped batch Oct 21, 2024
TimestampLeaseRequests:
alias: map<TimestampLeaseName, TimestampLeaseRequest>
fields:
requestsId: RequestId
Contributor:

requestId

Contributor:

requestId! 😅

ergo14 (author):

Fixed in one of the PRs that merged; might be worth going over the final yml API synchronously.

fields:
requestsId: RequestId
numFreshTimestamps: map<TimestampLeaseName, integer>
TimestampLeasesRequest:
Contributor:

Dunno if we should do something like:

  • MultiClientAcquireTimestampLeasesRequest
  • NamespaceAcquireTimestampLeasesRequest
  • AcquireTimestampLeasesRequest

I think this sets the right hierarchy of Multi-Client -> Namespace -> Single request

@@ -177,9 +179,11 @@ private AsyncResult<HeldLocks> acquireImmutableTimestampLock(UUID requestId, lon
}

private AsyncResult<HeldLocks> acquireNamedTimestampLockInternal(
@jkozlowski (Contributor) commented Oct 21, 2024:

acquireNamedTimestampLocksInternal

In general, let's make sure you do a naming-alignment pass once we're done with all of the changes. I think we've iterated quite hard, so the whole chain is out of whack.

.get();
long minLeased = lockService.getMinLeasedTimestamp(timestampName).orElse(timestamp);

Contributor:

Let's add a comment here that it's crucial that the timestamps here are AFTER the fresh timestamp we're locking on.
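A sketch of the invariant this comment asks to document (hypothetical names, not the real timelock code): the fresh timestamp is registered as locked before the minimum leased timestamp is read, so the returned minimum can never race ahead of an in-flight lease, and an empty tracker falls back to the fresh timestamp itself, mirroring `orElse(timestamp)` above:

```java
import java.util.Optional;
import java.util.concurrent.ConcurrentSkipListSet;

// Hypothetical sketch: lock (register) the fresh timestamp FIRST, only then
// read the minimum, so the minimum never exceeds any timestamp still held.
final class MinLeasedTracker {
    private final ConcurrentSkipListSet<Long> lockedTimestamps = new ConcurrentSkipListSet<>();

    long lockAndGetMinLeased(long freshTimestamp) {
        lockedTimestamps.add(freshTimestamp);       // 1. lock first
        return minLeased().orElse(freshTimestamp);  // 2. then read the min
    }

    void unlock(long timestamp) {
        lockedTimestamps.remove(timestamp);
    }

    private Optional<Long> minLeased() {
        return lockedTimestamps.isEmpty() ? Optional.empty() : Optional.of(lockedTimestamps.first());
    }
}
```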

@jkozlowski (Contributor) left a review:

Just some naming shenanigans, and cleanup

@bulldozer-bot bulldozer-bot bot merged commit aa8665f into develop Oct 21, 2024
21 checks passed
@ergo14 (author) commented Oct 21, 2024:

I put all the FLUPs (follow-ups) in #7386, which I will merge soon.

@ergo14 ergo14 changed the title [Named min timestamp Leases] Acquire single lease per transaction-shaped batch [Named min timestamp leases] Acquire single lease per transaction-shaped batch Oct 22, 2024
@jeremyk-91 (Contributor) left a review:

Directionally in sync! Bunch of implementation concerns, and the Lease(s?)Response(s?) naming is tricky but we can decide that a bit more synchronously probably (and no need to block on that)


.collect(Collectors.toList());
}

private static void assertThatTimestampsIsStrictlyWithinInvocationInterval(
Contributor:

pedantry: timestamps are strictly within.

Though 👍 on use of allSatisfy - it gives better error messages than going forEach on the timestamps :)

List<AsyncLock> locks = timestampNames.stream()
.map(name -> lockManager.getNamedTimestampLock(name, timestamp))
.collect(Collectors.toList());
return lockAcquirer.acquireLocks(requestId, OrderedLocks.fromOrderedList(locks), TimeLimit.zero());
@jeremyk-91 (Contributor) commented Oct 29, 2024:

Agree this works as written because the lock descriptor has a fresh timestamp appended, and so there should never be contention. A comment here is IMO required though: if the locks provided here actually had contention this is not OK because of the possibility of deadlock.

(Curious if there's more context elsewhere!)
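The standard remedy, if these locks could ever be contended, is a globally consistent acquisition order: when every request acquires its locks in the same total order, no two requests can hold-and-wait on each other in a cycle. A minimal sketch (hypothetical, not the AtlasDB code) of deriving that order from the lease names:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch of deadlock avoidance by consistent lock ordering:
// regardless of the order lease names arrive in, every request acquires
// their locks in the same (here: natural) order.
final class LockOrdering {
    static List<String> acquisitionOrder(Set<String> leaseNames) {
        return leaseNames.stream().sorted(Comparator.naturalOrder()).collect(Collectors.toList());
    }
}
```

As the comment notes, this is only needed if contention is possible; with a fresh timestamp baked into each descriptor, the locks here should never be contended.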

AsyncTimelockService service, TimestampLeaseName timestampName, TimestampLeaseRequest request) {
return service.acquireTimestampLease(timestampName, request.getRequestId(), request.getNumFreshTimestamps());
private static ListenableFuture<TimestampLeaseResponses> acquireTimestampLease(
AsyncTimelockService service, RequestId requestsId, Map<TimestampLeaseName, Integer> numFreshTimestamps) {
Contributor:

pedantry: requestId

@@ -46,7 +46,7 @@ private RemotingMultiClientTimestampLeaseServiceAdapter(RemotingTimestampLeaseSe

ListenableFuture<MultiClientTimestampLeaseResponse> acquireTimestampLeases(
MultiClientTimestampLeaseRequest requests, @Nullable RequestContext context) {
Map<Namespace, ListenableFuture<TimestampLeaseResponses>> futures = KeyedStream.stream(requests.get())
Map<Namespace, ListenableFuture<TimestampLeasesResponse>> futures = KeyedStream.stream(requests.get())
Contributor:

nice 👍

@@ -337,6 +338,17 @@ public void logState() {
lockService.getLockWatchingService().logState();
}

private TimestampLeaseResponse getMinLeasedAndFreshTimestamps(
TimestampLeaseName timestampName, int numFreshTimestamps, long lockedTimestamp) {
Contributor:

Is this the right factoring? There's value in calling getFreshTimestamps once for the N lease names and then portioning the timestamps out across the things that want them, after having gotten the min leased / locked timestamps for each of the namespaces, in that we save a bunch of volatile reads and writes. Normally I wouldn't care as much, but this is timelock...
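The batching suggested above could look roughly like this (a hypothetical sketch, not the timelock implementation): fetch one contiguous block of fresh timestamps sized to the total demand, then hand each lease name a contiguous sub-range:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the suggested batching: one fresh-timestamp fetch
// for all lease names, portioned out as contiguous sub-ranges, instead of
// one fetch per lease name.
final class FreshTimestampBatcher {
    // Maps each lease name to the inclusive start of its sub-range, given a
    // single block starting at blockStart sized to the sum of all requests.
    static Map<String, Long> portion(Map<String, Integer> numFreshTimestamps, long blockStart) {
        Map<String, Long> starts = new LinkedHashMap<>();
        long next = blockStart;
        for (Map.Entry<String, Integer> entry : numFreshTimestamps.entrySet()) {
            starts.put(entry.getKey(), next);
            next += entry.getValue();
        }
        return starts;
    }
}
```

With requests {a: 3, b: 2} and a block starting at 100, "a" would receive timestamps starting at 100 and "b" starting at 103, from a single call to the timestamp source.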

MultiClientTimestampLeaseResponse:
alias: map<Namespace, TimestampLeaseResponses>
alias: map<Namespace, TimestampLeasesResponse>
Contributor:

oof, yeah this is a bit tricky to follow. We have LeasesResponse having a bunch of LeaseResponses. Also, I do think LeasesResponse should probably be the internal and this one should be LeasesResponses. Though if you changed the naming later would be curious to see what you ended up with.
