Add ListenableAsyncCloseable#onClosing() #2430
Conversation
@@ -44,6 +44,9 @@ private CloseUtils() {
     * @param closingStarted a {@link CountDownLatch} to notify
     */
    static void onGracefulClosureStarted(ConnectionContext cc, CountDownLatch closingStarted) {
        // cc.onClosing() will trigger on the leading edge of closure, which may be when the user calls closeAsync().
this was an interesting observation... if we needed to differentiate between "any trigger for closing" (which is what `onClosing()` is) and "transport/protocol is closing" we could add another method in the future, but currently this seems to be mostly a concern in tests.
Force-pushed from b04a0c5 to 0c1eeef
Motivation: It is useful to get notified on the leading edge when closing begins. This may drive decisions like no longer using a connection or resource that is being closed and is not suitable for reuse. Modifications: - Use existing transport mechanisms to drive onClosing for connections. - Remove the NettyConnectionContext override of onClosing and defer to the base class.
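For illustration, here is a minimal usage sketch of the new method (not code from this PR; `ConnectionTracker` is a hypothetical caller-side class, and it assumes the `Completable` operators `whenOnComplete`/`subscribe` from `io.servicetalk.concurrent.api`):

```java
import io.servicetalk.transport.api.ConnectionContext;

import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical caller-side helper: mark a connection as unusable as soon as
// its closing sequence begins, so pooling/reuse logic can stop handing it out.
final class ConnectionTracker {
    private final AtomicBoolean closing = new AtomicBoolean();

    ConnectionTracker(ConnectionContext ctx) {
        // onClosing() completes on the leading edge of closure, i.e. before
        // onClose(), which only completes once the close has finished.
        ctx.onClosing()
           .whenOnComplete(() -> closing.set(true))
           .subscribe();
    }

    boolean isReusable() {
        return !closing.get();
    }
}
```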
Force-pushed from 0c1eeef to cfd0bf0
Thanks for catching other cases where we missed overriding a default method!
Everything looks good. However, there are a couple more cases that need an override for onClosing() (see the sketch after this list):
- AbstractNettyIoExecutor
- ClientFactory in generated gRPC code
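A minimal delegating override along those lines might look like this (a sketch only; `delegate` stands for whatever `ListenableAsyncCloseable` the wrapper already forwards to):

```java
// Sketch: forward onClosing() to the wrapped instance, mirroring how the
// other ListenableAsyncCloseable methods are typically delegated.
@Override
public Completable onClosing() {
    return delegate.onClosing();
}
```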
Also, please apply this patch:
Index: servicetalk-transport-netty-internal/src/main/java/io/servicetalk/transport/netty/internal/NettyChannelListenableAsyncCloseable.java
===================================================================
diff --git a/servicetalk-transport-netty-internal/src/main/java/io/servicetalk/transport/netty/internal/NettyChannelListenableAsyncCloseable.java b/servicetalk-transport-netty-internal/src/main/java/io/servicetalk/transport/netty/internal/NettyChannelListenableAsyncCloseable.java
--- a/servicetalk-transport-netty-internal/src/main/java/io/servicetalk/transport/netty/internal/NettyChannelListenableAsyncCloseable.java (revision cfd0bf0cbf7d299d87076e714c32ec2c55ad1da9)
+++ b/servicetalk-transport-netty-internal/src/main/java/io/servicetalk/transport/netty/internal/NettyChannelListenableAsyncCloseable.java (date 1669842880284)
@@ -101,6 +101,7 @@
*
* @return a {@link Completable} that notifies when the connection has begun its closing sequence.
*/
+ @Override
public final Completable onClosing() {
return fromSource(onClosing);
}
@@ -162,7 +163,7 @@
return onClose;
}
- final Completable onCloseNoOffload() {
+ private Completable onCloseNoOffload() {
return onCloseNoOffload;
}
@@ -118,6 +128,11 @@ public boolean isIoThreadSupported() {
        return delegate.isIoThreadSupported();
    }

    @Override
    public BooleanSupplier shouldOffloadSupplier() {
🎖
return Completable.defer(() -> {
    maxConcurrencyProcessor.onNext(ZERO_MAX_CONCURRENCY_EVENT);
    maxConcurrencyProcessor.onComplete();
    return parentContext.closeAsync().shareContextOnSubscribe();
Great catch :)
This was intentional as there isn't a reliable leading edge trigger for
Thanks!
Motivation: #2430 moved the `onClosing()` method from the internal `NettyConnectionContext` to the public `ListenableAsyncCloseable` interface. While moving a method upward in the class hierarchy should be a binary compatible change, let's preserve it for 0.42.x to be extra cautious.
….onClosing()` (#2473) Motivation: #2430 adds the `ListenableAsyncCloseable.onClosing()` method. Its implementations require allocation of a `CompletableProcessor`. Because we wrap the channel's `EventLoop` with `IoExecutor` on every new connection and every new HTTP/2 stream, this caused a significant increase in allocations and GC pressure. As a result, latency increased for all HTTP/2 and gRPC use-cases. Modifications: - Add `ExecutionContextUtils` that adapts `EventLoop` to the `IoExecutor` API for `ExecutionContext` and caches its value in a `FastThreadLocal` to reduce allocations; - Make sure that HTTP/1.x and parent HTTP/2 connections use `ExecutionContextUtils`; - For each new HTTP/2 stream (child channel), take the execution context from the parent channel because the child channel uses the same `EventLoop` as its parent; - Add a new `DefaultNettyConnection.initChannel` overload that takes an already constructed `ExecutionContext`; - Adjust existing tests for the new API; Result: Significantly reduced allocation rate and fewer GC pauses for HTTP/2 and gRPC.
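A rough sketch of the per-thread caching idea described in that follow-up; the class and method names below are hypothetical and not the actual `ExecutionContextUtils` implementation:

```java
import io.netty.channel.EventLoop;
import io.netty.util.concurrent.FastThreadLocal;

// Hypothetical sketch: adapt the current EventLoop to an executor wrapper once
// per event-loop thread and cache it in a Netty FastThreadLocal, so each new
// connection or HTTP/2 stream on that thread reuses the wrapper instead of
// allocating a new one.
final class EventLoopAdapterCache {
    private static final FastThreadLocal<IoExecutorAdapter> CACHED = new FastThreadLocal<>();

    private EventLoopAdapterCache() {
    }

    static IoExecutorAdapter adapterFor(EventLoop loop) {
        IoExecutorAdapter adapter = CACHED.get();
        if (adapter == null || adapter.loop != loop) {
            adapter = new IoExecutorAdapter(loop); // allocate at most once per thread
            CACHED.set(adapter);
        }
        return adapter;
    }

    // Placeholder standing in for the real EventLoop -> IoExecutor adaptation.
    static final class IoExecutorAdapter {
        final EventLoop loop;

        IoExecutorAdapter(EventLoop loop) {
            this.loop = loop;
        }
    }
}
```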