
feat(server): support for monitor command. This include some code #412

Closed
wants to merge 1 commit

Conversation

boazsade
Contributor

refactor for the request message to dispatch async. (#344)

Signed-off-by: Boaz Sade [email protected]

Supporting the monitor command.
This is still WIP:
Need to add support for ADMIN-level commands.
Need to correct the formatting of the output.

  • Monitor command is handled.
  • Refactor the async message dispatch to support different types of messages.

Collaborator

@romange left a comment


I liked how you structured the code, but I think it currently has memory leaks.

Also, please remove "This include some code" from the commit title :)

bool replica_conn : 1;
bool authenticated : 1;
bool force_dispatch : 1; // whether we should route all requests to the dispatch fiber.
bool monitor : 1 = false; // when the connection is monitor do not support most commands other
Collaborator

I feel uncomfortable initializing bit-fields here; also, it's not consistent with the rest of the fields.

Contributor Author

changed
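
A sketch of what the change presumably looks like (the enclosing type name here is hypothetical; only the flag fields come from the excerpt above): drop the in-class "= false" initializer and set every bit in one constructor, consistent with the other fields.

// Hypothetical enclosing type; the point is that all flag bits are
// initialized together in one place instead of via a lone in-class
// initializer on a single field.
struct ConnectionFlags {
  ConnectionFlags()
      : replica_conn(false), authenticated(false), force_dispatch(false),
        monitor(false) {}

  bool replica_conn : 1;
  bool authenticated : 1;
  bool force_dispatch : 1;  // route all requests to the dispatch fiber
  bool monitor : 1;         // connection is in MONITOR mode
};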

owner()->SendMonitorMsg(msg, borrows);
}

void ConnectionContext::MonitorCmd() {
Collaborator

I think the right place to implement MonitorCmd is inside the main service.

Contributor Author

moved

src/server/conn_context.cc (outdated, resolved)
src/facade/dragonfly_connection.cc (outdated, resolved)
src/facade/dragonfly_connection.cc (outdated, resolved)
// Used as custom deleter for Request object
struct Connection::RequestDeleter {
void operator()(Request* req) const {
mi_free(req);
Collaborator

I like the idea of a custom deleter, but I am pretty sure you have a memory leak here.

Contributor Author

I checked for memory leaks and didn't find any. Any pointer that was allocated was also released (or at least the call to release it was made; if mi_free doesn't really free it, then we would have a memory leak here), but this is just the same as the old code. There is a single allocation of memory for the req itself. I can add an explicit call to the dtor, but it's trivial.

Collaborator
@romange Oct 20, 2022

It's not trivial at all. It has non-POD members that themselves take memory; for example, the storage member is leaked for requests that take more than 120 bytes. If I am correct, this could easily be proven by sending pipelined traffic to the server.

Contributor Author

This is why I added the explicit call to the dtor in the code, so that if we have some object with a non-trivial dtor it would be called. Right now the code does the same as it did before, only we don't need to call it explicitly since it is done in the deleter of the unique_ptr.
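
To make the fix concrete, here is a minimal sketch of the deleter the thread converges on (assuming the Request block is allocated with mi_malloc and constructed with placement new; everything beyond the names in the excerpt above is illustrative):

// Custom deleter for unique_ptr<Request>. The explicit destructor call
// releases non-POD members (e.g. a heap-backed storage buffer), which
// mi_free alone would leak; mi_free then returns the raw block to mimalloc.
struct Connection::RequestDeleter {
  void operator()(Request* req) const {
    req->~Request();  // destroy non-trivial members first
    mi_free(req);     // free the raw allocation
  }
};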

src/facade/dragonfly_connection.cc (outdated, resolved)
@@ -116,11 +125,16 @@ class Connection : public util::Connection {
std::unique_ptr<ConnectionContext> cc_;

struct Request;
struct DispatchOperations;
Collaborator

Unless I missed anything, DispatchOperations and DispatchCleanup could reside in the cc file and do not have to be part of the connection class.

Contributor Author

I need them there because they require access to the private part of the enclosing class; I tried otherwise and failed to compile it. They are not really part of the interface, since you cannot use them from the outside, and they are in the private part of the class.

Collaborator

OK, I missed that.
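
For the record, a rough sketch of why the declaration must stay inside the class even though the definition can live in the cc file (illustrative, not the PR's exact code): member types are granted access to the enclosing class's private members.

// connection.h
class Connection {
  // ...
 private:
  struct DispatchOperations;  // declared here, defined in the cc file
  std::unique_ptr<ConnectionContext> cc_;
};

// dragonfly_connection.cc
struct Connection::DispatchOperations {
  void operator()(Connection& conn) const {
    auto* ctx = conn.cc_.get();  // OK: member types may access privates
    // ... dispatch using ctx ...
  }
};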

@boazsade force-pushed the support-monitor-command branch from 6b75e5d to b15b564 on October 20, 2022 at 14:34
@@ -19,6 +19,10 @@
typedef struct ssl_ctx_st SSL_CTX;
typedef struct mi_heap_s mi_heap_t;

namespace dfly {
class ConnectionState;
Collaborator

This creates a circular linking dependency.
The way the code is designed, "server" depends on "core" and "facade"; "facade" cannot depend on the server code. I bet the "ok_backend" target does not build anymore.

Contributor Author

So I need to make sure that I do not use server code from inside facade; I will make the required changes for that.
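
One common way to fix such an inversion (a hypothetical sketch, not necessarily what the PR ended up doing) is to let facade own the abstraction and have the server layer plug into it, so the dependency arrow keeps pointing from server to facade:

// facade/monitor_sink.h (hypothetical file and names)
#include <functional>
#include <string_view>

namespace facade {
// Server code registers this at startup; facade only calls through it
// and never names a server-side type such as dfly::ConnectionState.
using MonitorSink = std::function<void(std::string_view msg)>;
void SetMonitorSink(MonitorSink sink);
}  // namespace facade

// server/main_service.cc: server already links against facade, so
// registering the callback there adds no new dependency edge:
//   facade::SetMonitorSink([](std::string_view msg) { /* fan out */ });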

@boazsade force-pushed the support-monitor-command branch from b15b564 to 96f90ec on October 21, 2022 at 05:39
@boazsade marked this pull request as ready for review on October 21, 2022 at 05:40
@@ -263,21 +391,30 @@ void Connection::RegisterOnBreak(BreakerCb breaker_cb) {
breaker_cb_ = breaker_cb;
}

void Connection::SendMsgVecAsync(const PubMessage& pub_msg, fibers_ext::BlockingCounter bc) {
void Connection::SendMonitorMsg(std::string_view monitor_msg,
util::fibers_ext::BlockingCounter bc) {
Collaborator

Why do you pass "bc" all the way from MonitorCmd to here?

Contributor Author

This is the same way we do it in the pub/sub support: it syncs all the connections in use to the one location where the message is consumed. Maybe we don't need this in this case? Let's talk about this directly.
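
For context, the pub/sub-style fan-out works roughly like the sketch below (assuming BlockingCounter behaves as a countdown latch with Dec()/Wait() semantics; DispatchMonitorMessage and the monitors list are illustrative names). Threading the counter through lets the caller keep the message's backing storage alive until every target connection has queued its copy.

// Hypothetical fan-out, modeled on the pub/sub path. The counter is
// sized to the number of monitoring connections; each async send
// decrements it once the message is safely queued, and Wait() blocks
// the calling fiber until then, so msg may reference caller-owned memory.
void DispatchMonitorMessage(std::string_view msg,
                            const std::vector<Connection*>& monitors) {
  util::fibers_ext::BlockingCounter bc(monitors.size());
  for (Connection* conn : monitors) {
    conn->SendMonitorMsg(msg, bc);  // calls bc.Dec() after queueing
  }
  bc.Wait();  // msg's storage can be released once this returns
}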

…now support monitor type as well. (dragonflydb#344)

feat(server): monitor command, add command filters

feat(server): support for monitor command. Support for auth not showing sensitive data (dragonflydb#344)

feat(server): monitor command output format supports non-printable chars as well

feat(server): monitor command - move monitor handling after verify and run other command, add cleanup when the connection is closed (dragonflydb#344)

feat(server): monitor command - disable by default (dragonflydb#344)

Signed-off-by: Boaz Sade <[email protected]>
@boazsade force-pushed the support-monitor-command branch from 96f90ec to 0e7e837 on October 21, 2022 at 09:30
@boazsade closed this on Oct 23, 2022
@boazsade
Contributor Author

I will create two PRs for this: one about the refactor of the connection's message dispatching, and a second about the monitor command.
