Add ListenableAsyncCloseable#onClosing() #2430

Merged
merged 3 commits into apple:main on Dec 1, 2022

Conversation

Scottmitch
Member

Motivation:
It is useful to get notified on the leading edge when closing begins. This may drive decisions like no longer using a connection or resources that are being closed and are not suitable for reuse.

Modifications:

  • Use existing transport mechanisms to drive onClosing for connections.
  • Remove NettyConnectionContext override of onClosing and defer to the base class.
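The leading-edge idea can be sketched with a simplified stand-in, not ServiceTalk's actual API: `ClosableConnection` and its `CompletableFuture`-based `onClosing()` are hypothetical analogs of a `ListenableAsyncCloseable` whose `onClosing()` `Completable` fires as soon as closure begins, letting callers stop reusing the connection before teardown completes.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

// Simplified stand-in for a connection exposing onClosing(): the future
// completes on the leading edge of closure, before teardown finishes.
class ClosableConnection {
    private final CompletableFuture<Void> onClosing = new CompletableFuture<>();
    private final AtomicBoolean usable = new AtomicBoolean(true);

    ClosableConnection() {
        // Stop handing out this connection as soon as closing begins.
        onClosing.thenRun(() -> usable.set(false));
    }

    public CompletableFuture<Void> onClosing() {
        return onClosing;
    }

    public boolean isUsable() {
        return usable.get();
    }

    public void closeAsync() {
        // The leading-edge signal fires first; actual teardown would follow.
        onClosing.complete(null);
    }
}
```

A pool could check `isUsable()` before handing the connection out, so a closing connection is never selected for a new request.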

@@ -44,6 +44,9 @@ private CloseUtils() {
* @param closingStarted a {@link CountDownLatch} to notify
*/
static void onGracefulClosureStarted(ConnectionContext cc, CountDownLatch closingStarted) {
// cc.onClosing() will trigger on the leading edge of closure, which may be when the user calls closeAsync().
Member Author


this was an interesting observation... if we needed to differentiate between "any trigger for closing" (which is what onClosing is) and "transport/protocol is closing" we could add another method in the future, but currently this seems to be mostly a concern in tests.
Member

@idelpivnitskiy idelpivnitskiy left a comment


Thanks for catching other cases where we missed overriding a default method!
Everything looks good. However, there are a couple more cases that need an override for onClosing():

  • AbstractNettyIoExecutor
  • ClientFactory in generated gRPC code

Also, please apply this patch:

diff --git a/servicetalk-transport-netty-internal/src/main/java/io/servicetalk/transport/netty/internal/NettyChannelListenableAsyncCloseable.java b/servicetalk-transport-netty-internal/src/main/java/io/servicetalk/transport/netty/internal/NettyChannelListenableAsyncCloseable.java
--- a/servicetalk-transport-netty-internal/src/main/java/io/servicetalk/transport/netty/internal/NettyChannelListenableAsyncCloseable.java	(revision cfd0bf0cbf7d299d87076e714c32ec2c55ad1da9)
+++ b/servicetalk-transport-netty-internal/src/main/java/io/servicetalk/transport/netty/internal/NettyChannelListenableAsyncCloseable.java	(date 1669842880284)
@@ -101,6 +101,7 @@
      *
      * @return a {@link Completable} that notifies when the connection has begun its closing sequence.
      */
+    @Override
     public final Completable onClosing() {
         return fromSource(onClosing);
     }
@@ -162,7 +163,7 @@
         return onClose;
     }
 
-    final Completable onCloseNoOffload() {
+    private Completable onCloseNoOffload() {
         return onCloseNoOffload;
     }

@@ -118,6 +128,11 @@ public boolean isIoThreadSupported() {
return delegate.isIoThreadSupported();
}

@Override
public BooleanSupplier shouldOffloadSupplier() {
Member


🎖

return Completable.defer(() -> {
    // Advertise zero max concurrency so no new requests are dispatched to this
    // connection, complete the event stream, then close the parent context.
    maxConcurrencyProcessor.onNext(ZERO_MAX_CONCURRENCY_EVENT);
    maxConcurrencyProcessor.onComplete();
    return parentContext.closeAsync().shareContextOnSubscribe();
});
Member


Great catch :)

@Scottmitch
Member Author

Scottmitch commented Dec 1, 2022

AbstractNettyIoExecutor

This was intentional, as there isn't a reliable leading-edge trigger for EventLoopGroup. I will add some state in this class to provide a best-effort leading-edge notification though.
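A best-effort leading-edge signal of the kind described can be sketched as follows; `OnClosingState` and `tryStartClosing` are hypothetical names, not ServiceTalk API. The idea is an atomic guard so the `onClosing` signal completes at most once, whichever trigger (user-initiated close or transport close) arrives first.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

// Best-effort leading-edge notification: complete the onClosing signal
// exactly once, for the first closing trigger that arrives.
class OnClosingState {
    private final AtomicBoolean closing = new AtomicBoolean();
    private final CompletableFuture<Void> onClosing = new CompletableFuture<>();

    public CompletableFuture<Void> onClosing() {
        return onClosing;
    }

    // Returns true only for the first caller that starts closing;
    // subsequent triggers are no-ops.
    public boolean tryStartClosing() {
        if (closing.compareAndSet(false, true)) {
            onClosing.complete(null);
            return true;
        }
        return false;
    }
}
```

The compare-and-set makes the notification idempotent, which matters when several code paths (graceful close, hard close, transport failure) can all race to start closure.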

@Scottmitch Scottmitch merged commit 557f530 into apple:main Dec 1, 2022
@Scottmitch Scottmitch deleted the onclosing branch December 1, 2022 02:26
@idelpivnitskiy
Member

Thanks!

idelpivnitskiy added a commit to idelpivnitskiy/servicetalk that referenced this pull request Dec 13, 2022
Motivation:

apple#2430 moved `onClosing()` method from internal `NettyConnectionContext`
to public `ListenableAsyncCloseable` interface. While moving a method
upward in the class hierarchy should be a binary compatible change,
let's preserve it in 0.42.x to be extra cautious.
idelpivnitskiy added a commit that referenced this pull request Dec 13, 2022
idelpivnitskiy added a commit to idelpivnitskiy/servicetalk that referenced this pull request Dec 23, 2022
…onClosing()

Motivation:

apple#2430 adds `ListenableAsyncCloseable.onClosing()` method. Its implementations
require allocation of a `CompletableProcessor`.
Because we wrap the channel's `EventLoop` with an `IoExecutor` on every new
connection and every new HTTP/2 stream, it caused a significant increase in
allocations and GC pressure. As a result, latency increased for all HTTP/2 and
gRPC use-cases.

Modifications:

- Add `ExecutionContextUtils` that adapts `EventLoop` to the `IoExecutor` API
for `ExecutionContext` and caches the value in a `FastThreadLocal` to reduce
allocations;
- Make sure that HTTP/1.x and parent HTTP/2 connections use
`ExecutionContextUtils`;
- For each new HTTP/2 stream (child channel) take execution context from the
parent channel because child channel uses the same `EventLoop` as its parent
channel;
- Add a new `DefaultNettyConnection.initChannel` overload that takes already
constructed `ExecutionContext`;
- Adjust existing tests for new API;

Result:

Significantly reduced allocation rate and less GC pauses for HTTP/2 and gRPC.
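The caching pattern described in the commit can be sketched with plain `java.lang.ThreadLocal` as an analog for Netty's `FastThreadLocal`; `AdapterCache` and `Adapter` are hypothetical names standing in for the `EventLoop`-to-`IoExecutor` wrapper. Since each channel is bound to one `EventLoop` thread, one cached wrapper per thread replaces one allocation per connection or stream.

```java
// Per-thread cache of an adapter object, analogous to caching the
// EventLoop-to-IoExecutor wrapper: the wrapper is allocated once per
// thread instead of once per connection/stream.
class AdapterCache {
    // Hypothetical adapter; stands in for the IoExecutor wrapper.
    static final class Adapter { }

    private static final ThreadLocal<Adapter> CACHE =
            ThreadLocal.withInitial(Adapter::new);

    // All calls from the same thread observe the same cached instance.
    public static Adapter adapterForCurrentThread() {
        return CACHE.get();
    }
}
```

This trade-off is safe only because the cached object is effectively immutable and scoped to the event-loop thread that created it.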
idelpivnitskiy added a commit that referenced this pull request Jan 5, 2023
….onClosing()` (#2473)
idelpivnitskiy added a commit that referenced this pull request Jan 24, 2023
….onClosing()` (#2473)