Add a connection pool manager #1189
Conversation
cc @fabianfett
Motivation: In grpc#1176 we added a per-event-loop connection pool. In order to make a client which uses these pools we need a way to manage them. This PR adds a connection pool manager.

Modifications:
- Add a pool manager state machine which tracks connection pools and per-pool state, such as the number of streams available and reserved for each pool.
- Add a pool manager which wraps the state machine.
- Add pool manager state machine tests; the pool manager itself isn't tested here but will be tested indirectly in a later PR (when a client is added to wrap the pool manager).

Result: We can manage connection pools.
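For illustration, a minimal sketch of the shape described above. The names (`PerPoolState`, `reserveStream(preferring:)`) are hypothetical, `ObjectIdentifier` stands in for the real `EventLoopID` key, and `NIOLock` stands in for whatever lock the real manager uses; the actual grpc-swift types differ:

```swift
import NIOConcurrencyHelpers

// Hypothetical per-pool bookkeeping: total stream capacity versus the
// number of streams callers have already reserved.
struct PerPoolState {
  var streamCapacity = 0
  var reservedStreams = 0
  var availableStreams: Int { streamCapacity - reservedStreams }
}

// A value-type state machine: pick a pool with spare capacity and record
// the reservation. Keyed by ObjectIdentifier purely for illustration.
struct PoolManagerStateMachine {
  var perPoolState: [ObjectIdentifier: PerPoolState] = [:]

  mutating func reserveStream(preferring preferred: ObjectIdentifier?) -> ObjectIdentifier? {
    // Prefer the caller's event loop if its pool has headroom.
    if let preferred = preferred,
       let state = perPoolState[preferred],
       state.availableStreams > 0 {
      perPoolState[preferred]!.reservedStreams += 1
      return preferred
    }
    // Otherwise fall back to the pool with the most available streams.
    guard let best = perPoolState.max(by: { $0.value.availableStreams < $1.value.availableStreams }),
          best.value.availableStreams > 0 else { return nil }
    perPoolState[best.key]!.reservedStreams += 1
    return best.key
  }
}

// The manager wraps the state machine and serialises access with a lock.
final class PoolManager {
  private let lock = NIOLock()
  private var state = PoolManagerStateMachine()

  func reserveStream(preferring preferred: ObjectIdentifier?) -> ObjectIdentifier? {
    self.lock.withLock {
      self.state.reserveStream(preferring: preferred)
    }
  }
}
```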
Some nits and room for one discussion.
```swift
internal func increaseStreamCapacity(by delta: Int, for pool: ConnectionPool) {
  self.lock.withLockVoid {
    self.state.increaseStreamCapacity(by: delta, for: pool)
  }
}
```
Could a pool's stream capacity also be decreased?
Yes, a negative `delta` is acceptable here. I'll change this to `changeStreamCapacity`.
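Sketching what that rename might look like, mirroring the existing method (the renamed state-machine method is assumed here):

```swift
// Sketch of the rename: one signed delta covers both growth and shrinkage.
internal func changeStreamCapacity(by delta: Int, for pool: ConnectionPool) {
  self.lock.withLockVoid {
    // A negative `delta` shrinks the capacity, e.g. when a connection's
    // HTTP/2 settings reduce the maximum number of concurrent streams.
    self.state.changeStreamCapacity(by: delta, for: pool)
  }
}
```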
Sources/GRPC/ConnectionPool/PoolManagerStateMachine+PerPoolState.swift
```swift
// The state machine stores the per-pool state keyed by the pool's `EventLoopID` and tells
// the pool manager which pool to use/operate on via the pool's index in `self.pools`.
let poolKeys = self.pools.enumerated().map { poolIndex, pool in
```
A minor defensiveness note: `poolIndex` is an index only if the stored data type is `Array`. Otherwise, `enumerated` produces offsets, not indices. It may be better to rename this, or to provide an alternative implementation that actually uses indices rather than offsets.
Great point; I appreciate the nitty comments like this.
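For anyone following along, a small standalone example of the offset/index distinction, with `zip(indices, collection)` as the index-correct alternative:

```swift
let slice = [10, 20, 30][1...]  // An ArraySlice whose indices are 1 and 2.

for (offset, element) in slice.enumerated() {
  // Prints "0 20" then "1 30": offsets start at zero, so using `offset`
  // to subscript `slice` would trap for offset == 0.
  print(offset, element)
}

// Index-correct alternative that works for any Collection:
for (index, element) in zip(slice.indices, slice) {
  print(index, element)  // Prints "1 20" then "2 30".
}
```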
```swift
) -> PooledStreamChannel {
  let reservation = self.lock.withLock {
    self.state.reserveStream(
      preferringPoolWithEventLoopID: preferredEventLoop.map { EventLoopID($0) }
```
Can we lift this `preferredEventLoop.map` call out of the lock? It's nice to do as little as possible while holding that lock.
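i.e. something along these lines, using the same names as the snippet above:

```swift
// Compute the preferred pool's key before taking the lock; only the
// state-machine call needs the lock held.
let preferredEventLoopID = preferredEventLoop.map { EventLoopID($0) }
let reservation = self.lock.withLock {
  self.state.reserveStream(preferringPoolWithEventLoopID: preferredEventLoopID)
}
```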
```swift
switch reservation {
case let .success(poolIndex):
  let channel = self.pools[poolIndex.value].makeStream(
```
`self.pools` is a `var`. Presumably we need to hold a lock here.
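One possible shape, sketched with a hypothetical `Reservation` type for the state machine's return value: look up the pool inside the same critical section that made the reservation, so `self.pools` is never read unsynchronised.

```swift
// Hypothetical restructuring: resolve the pool while the lock is held,
// so the `var` array is never read without synchronisation.
let (reservation, pool): (Reservation, ConnectionPool?) = self.lock.withLock {
  let reservation = self.state.reserveStream(
    preferringPoolWithEventLoopID: preferredEventLoopID
  )
  if case let .success(poolIndex) = reservation {
    return (reservation, self.pools[poolIndex.value])
  }
  return (reservation, nil)
}
```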
```swift
case .shutdownPools:
  // Clear out the pools; we need to shut them down.
  let pools = self.pools
  self.pools.removeAll()
```
Suggest using `keepingCapacity: true` to avoid `Array` thinking we want to free the memory.
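i.e., in the snippet above:

```swift
let pools = self.pools
// Keep the existing buffer rather than hinting that it should be freed.
self.pools.removeAll(keepingCapacity: true)
```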
Cool.