[Named min timestamp leases] Acquire single lease per transaction-shaped batch #7385
Conversation
timelock-impl/src/main/java/com/palantir/atlasdb/timelock/AsyncTimelockServiceImpl.java (resolved)
```diff
-            TimestampLeaseName timestampName, UUID requestId, long timestamp) {
-        AsyncLock lock = lockManager.getNamedTimestampLock(timestampName, timestamp);
-        return lockAcquirer.acquireLocks(requestId, OrderedLocks.fromSingleLock(lock), TimeLimit.zero());
+            UUID requestId, Set<TimestampLeaseName> timestampNames, long timestamp) {
```
This is a set because the ordering doesn't matter. In HeldLocks, the list is only iterated over in lock/unlock operations, and the lock descriptors are returned as a set.
Ah, looks like we thought about the same thing. I agree it doesn't matter, but it definitely reads oddly. So, to allay these concerns, maybe add an explicit comment OR handle the ordering (a sketch of the latter is below).
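If you went the "handle the ordering" route, a minimal sketch could look like the following. It assumes `TimestampLeaseName.get()` returns the underlying string name, which is an assumption here, not the PR's confirmed API:

```java
// Sketch: make the acquisition order deterministic by sorting lease names
// before building the lock list. TimestampLeaseName::get is assumed to
// return the underlying string name (hypothetical accessor).
List<AsyncLock> locks = timestampNames.stream()
        .sorted(Comparator.comparing(TimestampLeaseName::get))
        .map(name -> lockManager.getNamedTimestampLock(name, timestamp))
        .collect(Collectors.toList());
return lockAcquirer.acquireLocks(requestId, OrderedLocks.fromOrderedList(locks), TimeLimit.zero());
```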
```yaml
TimestampLeaseRequests:
  alias: map<TimestampLeaseName, TimestampLeaseRequest>
  fields:
    requestsId: RequestId
```
requestId
requestId! 😅
Fixed in one of the PRs that merged. Might be worth going through the final YML API synchronously.
```yaml
  fields:
    requestsId: RequestId
    numFreshTimestamps: map<TimestampLeaseName, integer>
TimestampLeasesRequest:
```
Dunno if we should do something like:
- MultiClientAcquireTimestampLeasesRequest
- NamespaceAcquireTimestampLeasesRequest
- AcquireTimestampLeasesRequest
I think this sets the right hierarchy of Multi-Client -> Namespace -> Single request.
timelock-impl/src/main/java/com/palantir/atlasdb/timelock/lock/AsyncLockService.java (resolved)
```diff
@@ -177,9 +179,11 @@ private AsyncResult<HeldLocks> acquireImmutableTimestampLock(UUID requestId, lon
     }
 
+    private AsyncResult<HeldLocks> acquireNamedTimestampLockInternal(
```
acquireNamedTimestampLocksInternal
In general, let's make sure you do a naming-alignment pass once we're done with all of the changes. I think we've iterated quite hard, so the whole chain is out of whack.
timelock-impl/src/main/java/com/palantir/atlasdb/timelock/AsyncTimelockServiceImpl.java (resolved)
```java
                .get();
        long minLeased = lockService.getMinLeasedTimestamp(timestampName).orElse(timestamp);
```
Let's add a comment here that it's crucial that the timestamps here are AFTER the fresh timestamp we're locking on.
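For illustration, a sketch of how that comment might read in context; the exact wording of the invariant is an interpretation of the review note, not the PR's final text:

```java
// IMPORTANT: the fresh timestamps handed out below must be drawn AFTER the
// fresh timestamp we locked on; otherwise the min leased timestamp could
// move past timestamps we are about to return to the caller.
long minLeased = lockService.getMinLeasedTimestamp(timestampName).orElse(timestamp);
```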
Just some naming shenanigans, and cleanup. I put all the FLUPs in #7386, which I will merge soon.
Directionally in sync! Bunch of implementation concerns, and the Lease(s?)Response(s?) naming is tricky, but we can probably decide that a bit more synchronously (and no need to block on that).
```java
                .collect(Collectors.toList());
    }

    private static void assertThatTimestampsIsStrictlyWithinInvocationInterval(
```
Pedantry: timestamps *are* strictly within (plural). Though 👍 on the use of allSatisfy; it gives better error messages than going forEach on the timestamps :)
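For context, a minimal AssertJ sketch of the allSatisfy pattern being praised; the class name and interval bounds here are illustrative, not the PR's actual test code:

```java
import static org.assertj.core.api.Assertions.assertThat;

import java.util.List;

final class TimestampIntervalAssertions {
    private TimestampIntervalAssertions() {}

    // allSatisfy reports every violating timestamp in its failure message,
    // whereas a forEach over individual asserts stops at the first failure.
    static void assertThatTimestampsAreStrictlyWithinInvocationInterval(
            List<Long> timestamps, long beforeInvocation, long afterInvocation) {
        assertThat(timestamps)
                .allSatisfy(timestamp ->
                        assertThat(timestamp).isStrictlyBetween(beforeInvocation, afterInvocation));
    }
}
```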
timelock-impl/src/main/java/com/palantir/atlasdb/timelock/lock/AsyncLockService.java (resolved)
```java
        List<AsyncLock> locks = timestampNames.stream()
                .map(name -> lockManager.getNamedTimestampLock(name, timestamp))
                .collect(Collectors.toList());
        return lockAcquirer.acquireLocks(requestId, OrderedLocks.fromOrderedList(locks), TimeLimit.zero());
```
Agree this works as written because the lock descriptor has a fresh timestamp appended, and so there should never be contention. A comment here is IMO required though: if the locks provided here actually had contention this is not OK because of the possibility of deadlock.
(Curious if there's more context elsewhere!)
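A sketch of the comment being asked for, under the review's stated assumption that each descriptor embeds a fresh timestamp and therefore the locks never contend:

```java
// These locks are unique per request: each descriptor has a fresh timestamp
// appended, so two requests never contend on the same lock and acquisition
// order cannot cause deadlock. If contention were ever possible here,
// acquiring in arbitrary stream order would risk deadlock and we would need
// a canonical (e.g. sorted) ordering instead.
List<AsyncLock> locks = timestampNames.stream()
        .map(name -> lockManager.getNamedTimestampLock(name, timestamp))
        .collect(Collectors.toList());
```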
```diff
-            AsyncTimelockService service, TimestampLeaseName timestampName, TimestampLeaseRequest request) {
-        return service.acquireTimestampLease(timestampName, request.getRequestId(), request.getNumFreshTimestamps());
+    private static ListenableFuture<TimestampLeaseResponses> acquireTimestampLease(
+            AsyncTimelockService service, RequestId requestsId, Map<TimestampLeaseName, Integer> numFreshTimestamps) {
```
pedantry: requestId
```diff
@@ -46,7 +46,7 @@ private RemotingMultiClientTimestampLeaseServiceAdapter(RemotingTimestampLeaseSe
 
     ListenableFuture<MultiClientTimestampLeaseResponse> acquireTimestampLeases(
             MultiClientTimestampLeaseRequest requests, @Nullable RequestContext context) {
-        Map<Namespace, ListenableFuture<TimestampLeaseResponses>> futures = KeyedStream.stream(requests.get())
+        Map<Namespace, ListenableFuture<TimestampLeasesResponse>> futures = KeyedStream.stream(requests.get())
```
nice 👍
```diff
@@ -337,6 +338,17 @@ public void logState() {
         lockService.getLockWatchingService().logState();
     }
 
+    private TimestampLeaseResponse getMinLeasedAndFreshTimestamps(
+            TimestampLeaseName timestampName, int numFreshTimestamps, long lockedTimestamp) {
```
Is this the right factoring? There's value in calling getFreshTimestamps once for the N lease names and then portioning the timestamps out across the things that want them, after having gotten the min leased / locked timestamps for each of the namespaces: that way we save a bunch of volatile reads and writes. Normally I wouldn't care as much, but this is timelock... (rough sketch below)
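A rough sketch of the batched shape being suggested. The names here (timestampService.getFreshTimestamps, TimestampLeaseResponse.of, the helper itself) are assumptions for illustration, not the PR's actual API:

```java
// Hypothetical sketch: one fresh-timestamp call for all lease names, then
// portion the contiguous range out across the per-name requests.
private Map<TimestampLeaseName, TimestampLeaseResponse> getMinLeasedAndFreshTimestampsBatched(
        Map<TimestampLeaseName, Integer> numFreshTimestamps, long lockedTimestamp) {
    int totalRequested =
            numFreshTimestamps.values().stream().mapToInt(Integer::intValue).sum();
    // Single interaction with the timestamp service instead of one per name.
    // NB: a real implementation would have to handle getFreshTimestamps
    // returning fewer timestamps than requested.
    TimestampRange range = timestampService.getFreshTimestamps(totalRequested);

    Map<TimestampLeaseName, TimestampLeaseResponse> responses = new HashMap<>();
    long nextTimestamp = range.getLowerBound();
    for (Map.Entry<TimestampLeaseName, Integer> entry : numFreshTimestamps.entrySet()) {
        long minLeased = lockService
                .getMinLeasedTimestamp(entry.getKey())
                .orElse(lockedTimestamp);
        // TimestampLeaseResponse.of(minLeased, start, count) is a stand-in
        // for however the real response type is constructed.
        responses.put(
                entry.getKey(),
                TimestampLeaseResponse.of(minLeased, nextTimestamp, entry.getValue()));
        nextTimestamp += entry.getValue();
    }
    return responses;
}
```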
timelock-impl/src/main/java/com/palantir/atlasdb/timelock/AsyncTimelockServiceImpl.java (resolved)
```diff
 MultiClientTimestampLeaseResponse:
-  alias: map<Namespace, TimestampLeaseResponses>
+  alias: map<Namespace, TimestampLeasesResponse>
```
oof, yeah this is a bit tricky to follow. We have LeasesResponse containing a bunch of LeaseResponses. Also, I do think LeasesResponse should probably be the internal one, and this one should be LeasesResponses. Though if you change the naming later, I'd be curious to see what you end up with.
General
Before this PR:
After this PR:
==COMMIT_MSG==
API changes for timestamp leases.
==COMMIT_MSG==
Priority:
P1
Concerns / possible downsides (what feedback would you like?):
See PR comments
Is documentation needed?:
Compatibility
Does this PR create any API breaks (e.g. at the Java or HTTP layers) - if so, do we have compatibility?:
Yes, but the API is unused.
Does this PR change the persisted format of any data - if so, do we have forward and backward compatibility?:
No.
The code in this PR may be part of a blue-green deploy. Can upgrades from previous versions safely coexist? (Consider restarts of blue or green nodes.):
Yes
Does this PR rely on statements being true about other products at a deployment - if so, do we have correct product dependencies on these products (or other ways of verifying that these statements are true)?:
No
Does this PR need a schema migration?
No
Testing and Correctness
What, if any, assumptions are made about the current state of the world? If they change over time, how will we find out?:
N/A
What was existing testing like? What have you done to improve it?:
Changed
If this PR contains complex concurrent or asynchronous code, is it correct? The onus is on the PR writer to demonstrate this.:
N/A
If this PR involves acquiring locks or other shared resources, how do we ensure that these are always released?:
See PR comments
Execution
How would I tell this PR works in production? (Metrics, logs, etc.):
No-op.
Has the safety of all log arguments been decided correctly?:
Yes
Will this change significantly affect our spending on metrics or logs?:
No
How would I tell that this PR does not work in production? (monitors, etc.):
No-op.
If this PR does not work as expected, how do I fix that state? Would rollback be straightforward?:
Rollback
If the above plan is more complex than “recall and rollback”, please tag the support PoC here (if it is the end of the week, tag both the current and next PoC):
Scale
Would this PR be expected to pose a risk at scale? Think of the shopping product at our largest stack.:
No
Would this PR be expected to perform a large number of database calls, and/or expensive database calls (e.g., row range scans, concurrent CAS)?:
No
Would this PR ever, with time and scale, become the wrong thing to do - and if so, how would we know that we need to do something differently?:
No
Development Process
Where should we start reviewing?:
If this PR is in excess of 500 lines excluding versions lock-files, why does it not make sense to split it?:
Please tag any other people who should be aware of this PR:
@jeremyk-91
@raiju