feat(server): Support limiting the number of open connections. #1670
Changes from all commits
In the listener implementation, the PR adds an admin flag accessor pair between `AwaitDispatches` and `PreShutdown`:

```diff
@@ -202,6 +202,14 @@ bool Listener::AwaitDispatches(absl::Duration timeout,
   return false;
 }

+bool Listener::IsAdminInterface() const {
+  return is_admin_;
+}
+
+void Listener::SetAdminInterface(bool is_admin) {
+  is_admin_ = is_admin;
+}
+
 void Listener::PreShutdown() {
   // Iterate on all connections and allow them to finish their commands for
   // a short period.
```
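Since the flag is plain state on the listener, the intended wiring is simple. Below is a minimal sketch of how the listener bound to the admin port might be tagged; the helper name, the `dfly` namespace, and the header path are assumptions, not code from this PR:

```cpp
#include <cassert>

#include "facade/dragonfly_listener.h"  // assumed header path

// Illustrative setup helper -- not part of the diff. The listener serving
// the admin port is tagged so connection-limit logic can exempt it.
void TagListeners(dfly::Listener* main_listener, dfly::Listener* admin_listener) {
  admin_listener->SetAdminInterface();  // is_admin defaults to true
  // Regular listeners keep is_admin_ == false and stay subject to maxclients.
  assert(!main_listener->IsAdminInterface());
  assert(admin_listener->IsAdminInterface());
}
```

With the default argument, call sites only need to touch the admin listener; every other listener keeps the default `false`.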
The connection-limit hook itself writes a RESP error to the rejected socket:

```diff
@@ -267,6 +275,10 @@ void Listener::OnConnectionClose(util::Connection* conn) {
   }
 }

+void Listener::OnMaxConnectionsReached(util::FiberSocketBase* sock) {
+  sock->Write(io::Buffer("-ERR max number of clients reached\r\n"));
+}
+
 // We can limit number of threads handling dragonfly connections.
 ProactorBase* Listener::PickConnectionProactor(util::FiberSocketBase* sock) {
   util::ProactorPool* pp = pool();
```

Review thread on `OnMaxConnectionsReached` (marked resolved by royjacobson):

Reviewer: How will this be handled by the client?

Author (royjacobson): The clients I tested (python, redis-cli) seemed to handle it fine. Redis does behave a bit differently: the socket doesn't close immediately, and the message is delayed until the first command. But then I tried to DDoS Redis by opening a bunch of connections and leaving them open, and apparently it will start doing what we do. And yeah, the framework is closing the connection :)
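To make the behavior discussed above concrete, here is a standalone probe — a sketch assuming a POSIX system and a saturated dragonfly instance on 127.0.0.1:6379 — showing what a client sees on the wire: the RESP error arrives unsolicited, then the server closes the socket:

```cpp
// Standalone probe (illustrative): connect while the server is at maxclients
// and print whatever arrives, without sending a command first.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#include <cstdio>

int main() {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_port = htons(6379);  // assumed port
  inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
  if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0) return 1;

  char buf[128];
  // The error is pushed immediately, before the client sends anything.
  ssize_t n = read(fd, buf, sizeof(buf) - 1);
  if (n > 0) {
    buf[n] = '\0';
    std::fputs(buf, stdout);  // expected: -ERR max number of clients reached
  }
  // A zero-byte read confirms the server closed the connection afterwards.
  n = read(fd, buf, sizeof(buf));
  std::printf("server closed: %s\n", n == 0 ? "yes" : "no");
  close(fd);
  return 0;
}
```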
The corresponding header declarations:

```diff
@@ -35,12 +35,16 @@ class Listener : public util::ListenerInterface {
   bool AwaitDispatches(absl::Duration timeout,
                        const std::function<bool(util::Connection*)>& filter);

+  bool IsAdminInterface() const;
+  void SetAdminInterface(bool is_admin = true);
+
  private:
   util::Connection* NewConnection(ProactorBase* proactor) final;
   ProactorBase* PickConnectionProactor(util::FiberSocketBase* sock) final;

   void OnConnectionStart(util::Connection* conn) final;
   void OnConnectionClose(util::Connection* conn) final;
+  void OnMaxConnectionsReached(util::FiberSocketBase* sock) final;
   void PreAcceptLoop(ProactorBase* pb) final;

   void PreShutdown() final;

@@ -58,6 +62,8 @@ class Listener : public util::ListenerInterface {

   std::atomic_uint32_t next_id_{0};

+  bool is_admin_ = false;
+
   uint32_t conn_cnt_{0};
   uint32_t min_cnt_thread_id_{0};
   int32_t min_cnt_{0};
```

Review thread on lines +38 to +39 (`IsAdminInterface` / `SetAdminInterface`):

Reviewer: We already have `Connection::IsAdmin()`; could we maybe consolidate them to use the same logic? (Either inspect the port, or explicitly set it like here.)

Author (royjacobson): Oooo, I did NOT expect the implementation of …
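The consolidation the reviewer suggests could plausibly delegate the connection-level check to the listener flag. A sketch follows; `owner()` is an assumed name for the connection-to-listener accessor, and the real member may differ:

```cpp
#include "facade/dragonfly_listener.h"  // assumed header path

// Hypothetical consolidation sketch: derive admin-ness from the listener
// flag instead of re-inspecting the local port in Connection::IsAdmin().
// `owner()` is an assumed accessor returning the listener that accepted
// the connection.
bool IsAdminConnection(const util::Connection* conn) {
  auto* listener = static_cast<const dfly::Listener*>(conn->owner());
  return listener->IsAdminInterface();
}
```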
The PR also adds a pytest regression test as a new 27-line file:

```python
import pytest
import redis
from redis.asyncio import Redis as RedisClient
from .utility import *
from . import DflyStartException


async def test_maxclients(df_factory):
    # Needs some authentication
    server = df_factory.create(port=1111, maxclients=1, admin_port=1112)
    server.start()

    async with server.client() as client1:
        assert [b"maxclients", b"1"] == await client1.execute_command("CONFIG GET maxclients")

        with pytest.raises(redis.exceptions.ConnectionError):
            async with server.client() as client2:
                await client2.get("test")

        # Check that admin connections are not limited.
        async with RedisClient(port=server.admin_port) as admin_client:
            await admin_client.get("test")

        await client1.execute_command("CONFIG SET maxclients 3")

        assert [b"maxclients", b"3"] == await client1.execute_command("CONFIG GET maxclients")
        async with server.client() as client2:
            await client2.get("test")
```
Review thread on the `IsAdminInterface()` design:

Reviewer: Can we add an `IsAdminInterface()` API to `ListenerInterface`? I realize it's perhaps not clean, but having a downcast is messy and can lead to issues down the road.

Reply: `IsAdminInterface` is dragonfly specific. On the other side, `facade::Connection` is always issued by `facade::Listener`.

Author (royjacobson): I'm open to any suggestions for removing hardcoded downcasts :) We could even template-ize `Listener` to have the facade `Listener` return its own connection.

Reply: My personal opinion is that we should avoid casts if there is a reasonable design choice that allows it. And if not, we can have them and not feel bad about it. Specifically, any virtual inheritance implies the possibility of downcasts, and that's fine.

Author (royjacobson): I don't like the cast either. If we start doing it too much, I think we should find a better way, but for now this was the shortest way I found to do this :)
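For reference, the "template-ize" idea floated above might look roughly like the sketch below. This is purely illustrative, not what the PR implements: the base-class shape is simplified, and `MakeConnection`/`OnTypedConnectionStart` are invented names:

```cpp
// Purely illustrative sketch of the "template-ize" alternative: a typed
// listener base performs the downcast once, in one place, and hands derived
// hooks the concrete connection type. The real util::ListenerInterface is
// more involved than this.
template <typename ConnectionT>
class TypedListener : public util::ListenerInterface {
 protected:
  util::Connection* NewConnection(ProactorBase* proactor) final {
    return MakeConnection(proactor);  // covariance keeps this cast-free
  }

  void OnConnectionStart(util::Connection* conn) final {
    // The single, centralized downcast: safe because every connection on
    // this listener was produced by MakeConnection() above.
    OnTypedConnectionStart(static_cast<ConnectionT*>(conn));
  }

  virtual ConnectionT* MakeConnection(ProactorBase* proactor) = 0;
  virtual void OnTypedConnectionStart(ConnectionT* conn) = 0;
};
```

The trade-off matches the thread: the cast does not disappear, but it moves from scattered call sites into one audited location.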