
Add idle check to connection cleaner and rework when it runs #60

Closed · wants to merge 2 commits

Conversation

@Sytten (Collaborator) commented Jun 23, 2021

Fixes #58

So this is somewhat of a big change:

  • The connection cleaner now checks the idle lifetime (time since a connection was last used). The current lazy-only approach is not sufficient for production systems that may receive a spike in demand but want to avoid holding useless connections once demand drops.
  • The connection cleaner is now started on pool launch if either the max idle lifetime or the max lifetime is set. Previously it was only started when the lifetime was changed via the set method, which didn't really make sense.
  • Added a method on the pool to dynamically change the max idle lifetime.
  • The cleaner now lives much longer and only stops if both the max lifetime and the max idle lifetime are None. Previously it would stop when the number of open connections reached 0 and could only restart if the lifetime was changed, meaning it would stop on reduced (or no) demand but would not start again later.

I added tests to reflect the new behaviour but let me know if you want more.
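For illustration, a minimal sketch of the kind of idle-lifetime check described above, assuming hypothetical bookkeeping fields (last_used, created_at); this is not the PR's actual code.

```rust
use std::time::{Duration, Instant};

// Hypothetical per-connection bookkeeping; field names are illustrative only.
struct IdleConn {
    last_used: Instant,
    created_at: Instant,
}

// Decide whether a connection should be reaped, given optional limits.
fn should_reap(
    conn: &IdleConn,
    max_idle_lifetime: Option<Duration>,
    max_lifetime: Option<Duration>,
) -> bool {
    let idle_expired = max_idle_lifetime
        .map(|limit| conn.last_used.elapsed() >= limit)
        .unwrap_or(false);
    let lifetime_expired = max_lifetime
        .map(|limit| conn.created_at.elapsed() >= limit)
        .unwrap_or(false);
    idle_expired || lifetime_expired
}
```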

@Sytten (Collaborator, Author) commented on this code:

if let Some(lifetime) = max_idle_lifetime {
    match internals.config.max_idle_lifetime {
        Some(prev) if lifetime < prev && internals.cleaner_ch.is_some() => {
            // FIXME

Not sure what you had in mind for the result here, @importcjj.
Returning a Result would break the API for set_conn_max_lifetime.

@Sytten (Collaborator, Author) commented Jun 23, 2021

Hmm, there seems to be a race condition (most likely unrelated to the current changes) in https://travis-ci.com/github/importcjj/mobc/jobs/518297137#L1515.
This would be consistent with the report we got from a user. I will investigate.

@Sytten (Collaborator, Author) commented Jun 23, 2021

There should probably be a way to avoid locking the whole pool internals while the connection cleaner runs, since that can take time; ideas are appreciated.

@Amirault commented Jul 1, 2021

> There should probably be a way to avoid locking the whole pool internals while the connection cleaner runs, since that can take time; ideas are appreciated.

Great PR, when will it be available?

@Sytten (Collaborator, Author) commented Jul 1, 2021

I mean, it all depends on @importcjj.
I still feel the reaper is locking the pool for too long.

@importcjj (Owner)

Sorry for the delay, I will check it soon.

@importcjj (Owner) commented Jul 4, 2021

> I mean, it all depends on @importcjj.
> I still feel the reaper is locking the pool for too long.

I think it is necessary to limit the execution time and the number of connections traversed when cleaning connections.

For example:

  • Only check the first N connections.
  • When the time limit is reached, just return.
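For illustration, a minimal sketch of what such a bounded cleaning pass could look like; this is not code from this PR, and MAX_CHECKED, TIME_BUDGET, bounded_clean, and should_reap are hypothetical names.

```rust
use std::time::{Duration, Instant};

const MAX_CHECKED: usize = 64;                          // illustrative cap on connections examined per pass
const TIME_BUDGET: Duration = Duration::from_millis(5); // illustrative time budget per pass

// One cleaning pass over at most MAX_CHECKED connections, aborting early once
// TIME_BUDGET has elapsed. `idle` and `should_reap` stand in for the pool's
// real idle list and expiry check.
fn bounded_clean<T>(idle: &mut Vec<T>, should_reap: impl Fn(&T) -> bool) {
    let started = Instant::now();
    let mut checked = 0;
    let mut i = 0;
    while i < idle.len() && checked < MAX_CHECKED && started.elapsed() < TIME_BUDGET {
        if should_reap(&idle[i]) {
            // Dropped here; a real pool would also close the connection properly.
            idle.swap_remove(i);
        } else {
            i += 1;
        }
        checked += 1;
    }
}
```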

@Sytten (Collaborator, Author) commented Jul 4, 2021

We could do two things:

  1. Skip the cleaning pass if there are pending requests for connections.
  2. Release the lock for the drop and only reacquire it if we need to put a connection back in the pool (see the sketch below).

Checking the elapsed time should be fairly fast even if you have many connections. I fear that if we don't check them all, we will miss some regularly.

Also, I checked the sqlx pool and they managed to remove the lock by using a lock-free, vector-like structure. Might be worth a look eventually.
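As an illustration of point 2 above, here is a minimal sketch (not code from this PR) of holding the mutex only while splitting off expired connections and dropping them after the lock is released; the IdleConn type and all names are hypothetical.

```rust
use std::sync::Mutex;
use std::time::{Duration, Instant};

// Hypothetical idle-connection record; the real pool state is richer than this.
struct IdleConn {
    last_used: Instant,
}

// Sketch of option 2: hold the mutex only long enough to split off the expired
// connections, then drop them (which may be slow) after the lock is released.
fn reap_without_holding_lock(idle: &Mutex<Vec<IdleConn>>, max_idle: Duration) {
    let expired: Vec<IdleConn> = {
        let mut guard = idle.lock().unwrap();
        let mut expired = Vec::new();
        let mut i = 0;
        while i < guard.len() {
            if guard[i].last_used.elapsed() >= max_idle {
                expired.push(guard.swap_remove(i));
            } else {
                i += 1;
            }
        }
        expired
    }; // mutex guard dropped here

    // Closing/dropping the connections happens outside the critical section.
    drop(expired);
}
```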

@importcjj (Owner)

Actually, I have long wanted to rebuild this connection pool. I want to remove all the locks and use a single-threaded event loop, just like Redis. But I doubt how much improvement the new architecture can give; it needs to be tested.

@Amirault commented Jul 7, 2021

> Actually, I have long wanted to rebuild this connection pool. I want to remove all the locks and use a single-threaded event loop, just like Redis. But I doubt how much improvement the new architecture can give; it needs to be tested.

Hello, isn't connection pooling already a solved problem in the tech world? It seems weird to reinvent the wheel, no? What's your take on that?

@importcjj (Owner)

If you can bring something new, it's worth doing.

@Amirault

> If you can bring something new, it's worth doing.

Nope, but waiting for it :P

@Sytten (Collaborator, Author) commented Jul 12, 2021

Unsure what the next steps are here; if we merge this PR it will behave better in general, but it will lock the mutex for a long time.
Should I refactor the closing of connections so it doesn't require a lock?

@Amirault

> If you can bring something new, it's worth doing.

Have you taken a look at https://github.com/vincit/tarn.js? What do you think about this one?

@importcjj (Owner)

> Unsure what the next steps are here; if we merge this PR it will behave better in general, but it will lock the mutex for a long time.
> Should I refactor the closing of connections so it doesn't require a lock?

I wrote a simple prototype using the event loop, and with a simple test I think it has much better performance than the current one.

The pseudo-code below is the core code of the prototype:

// ## pool backend event loop

select {
    req = conn_requests_channel.next() => {
        // 1. register the request into the pending requests list;
        // 2. check and notify the conn spawner to create a new connection if necessary.
    }
    conn = conns_new_or_recycle.next() => {
        // 1. put the connection into the idle conns list;
        // 2. pop and respond to pending requests;
    }
    event = events.next() => {
        // 1. change configs
        // 2. update stats
        // 3. etc.
    }
}

// ## conn spawner loop

for _ in new_conn_reqs {
    // 1. make a new connection
    // 2. send the connection to the channel conns_new_or_recycle
}

// ## pool frontend

async fn get() -> conn {
    // make a get request and send it to the conn_requests_channel.
    // try to get the conn from a response channel.
    // do a health check.
}

https://github.com/importcjj/mobc2-prototype/blob/c92bdbb6df894ee607382d28bc4a782f1effe67b/src/lib.rs#L123
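For readers following along, a hedged, self-contained sketch of the same single-task event-loop idea using tokio channels; this is not the linked prototype, and all type, channel, and function names (Conn, backend_loop, conn_requests, etc.) are illustrative assumptions.

```rust
use tokio::sync::{mpsc, oneshot};

// A deliberately simplified stand-in for a real connection.
struct Conn(u32);

enum Event {
    SetMaxIdle(std::time::Duration),
}

// Minimal single-task "pool backend" in the spirit of the pseudo-code above:
// all state (idle list, pending requests) is owned by this one task, so no
// mutex is needed.
async fn backend_loop(
    mut conn_requests: mpsc::Receiver<oneshot::Sender<Conn>>,
    mut conns_new_or_recycled: mpsc::Receiver<Conn>,
    mut events: mpsc::Receiver<Event>,
    spawn_conn: mpsc::Sender<()>,
) {
    let mut idle: Vec<Conn> = Vec::new();
    let mut pending: Vec<oneshot::Sender<Conn>> = Vec::new();

    loop {
        tokio::select! {
            Some(reply) = conn_requests.recv() => {
                if let Some(conn) = idle.pop() {
                    let _ = reply.send(conn);          // serve from the idle list
                } else {
                    pending.push(reply);               // park the request
                    let _ = spawn_conn.send(()).await; // ask the spawner for a new conn
                }
            }
            Some(conn) = conns_new_or_recycled.recv() => {
                if let Some(reply) = pending.pop() {
                    let _ = reply.send(conn);          // hand it to a waiting request
                } else {
                    idle.push(conn);                   // otherwise keep it idle
                }
            }
            Some(event) = events.recv() => {
                match event {
                    Event::SetMaxIdle(_d) => { /* update config, adjust the cleaner, etc. */ }
                }
            }
            else => break, // all senders dropped: shut the pool down
        }
    }
}
```

Because only this task touches the idle list and the pending requests, the mutex contention discussed above disappears; the trade-off is that every operation inside the loop must stay short, which is the concern raised in the next comment.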

@Sytten (Collaborator, Author) commented Jul 14, 2021

Interesting, the challenge will be to ensure the loop is never blocked for long.
SQLx has gone in another direction, keeping the multi-threaded approach but using https://github.com/crossbeam-rs/crossbeam to store the connections.
In v2 I would split the code of lib.rs into multiple files; it is very hard to read currently.
Should we start a Discord channel or something? I would like to participate in writing it if you want.
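For reference, a minimal sketch of the lock-free storage idea mentioned above, using crossbeam's ArrayQueue; this is only an illustration, not how SQLx or mobc actually structure their pools, and the Conn and IdleStore names are hypothetical.

```rust
use crossbeam::queue::ArrayQueue;

// Simplified stand-in for a real connection.
struct Conn(u32);

// Idle connections stored in a bounded lock-free queue: checking a connection
// out or returning it never takes a mutex.
struct IdleStore {
    idle: ArrayQueue<Conn>,
}

impl IdleStore {
    fn new(capacity: usize) -> Self {
        Self { idle: ArrayQueue::new(capacity) }
    }

    fn checkout(&self) -> Option<Conn> {
        self.idle.pop()
    }

    fn put_back(&self, conn: Conn) {
        // If the queue is full, the connection is simply dropped (closed).
        let _ = self.idle.push(conn);
    }
}
```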

@importcjj (Owner) commented Jul 15, 2021

> Interesting, the challenge will be to ensure the loop is never blocked for long.
> SQLx has gone in another direction, keeping the multi-threaded approach but using https://github.com/crossbeam-rs/crossbeam to store the connections.
> In v2 I would split the code of lib.rs into multiple files; it is very hard to read currently.
> Should we start a Discord channel or something? I would like to participate in writing it if you want.

I have added you to the collaborators of this repo; you can submit your code to the 'v2-dev' branch.

@jkomyno commented Jul 17, 2023

Hi, it's been a while. I wonder what's missing from this PR?

@Sytten (Collaborator, Author) commented Jul 17, 2023

@jkomyno It's been a very long time since I was last involved in Prisma / mobc, and the code I wrote at the time probably wasn't the best. I am unsure of the current state of mobc, but at the time it was agreed that it needed a full rewrite. I suggested Prisma look at https://github.com/bikeshedder/deadpool, as I had worked with its developer to make sure it had all the required features and a better code base. I see someone else suggested that to you more recently in prisma/prisma#19215.

@garrensmith (Collaborator)

@Sytten and @jkomyno, with the changes we have made to mobc I don't think these changes are needed.
mobc now behaves very similarly to deadpool, so I don't think switching to deadpool would lead to any improvements.

@garrensmith (Collaborator)

See my comment above.

Successfully merging this pull request may close this issue: Pool should check for max idle lifetime in cleaner

5 participants