
Replace r2d2 with tokio-rusqlite #4052

Closed
link2xt opened this issue Feb 17, 2023 · 3 comments · Fixed by #4053
link2xt commented Feb 17, 2023

I discovered tokio-rusqlite via the comment in rusqlite repository. It does what @flub proposed at #4050 (review).

This will solve the r2d2 problems we have with closing the database connection, which cause #4049, #4051 and random Python test import/export failures (#4046).

link2xt commented Feb 18, 2023

tokio-rusqlite cannot easily replace the connection pool approach used in r2d2 and #4053, because it would require changing the API of Sql.

The connection pool approach relies on the fact that rusqlite::Connection is Send + !Sync: it moves ownership of the Connection to the place where it is used, with the promise that when the PooledConnection wrapper is dropped, the Connection is returned to the pool.

tokio-rusqlite keeps a single Connection in a worker thread, so the Connection is never sent anywhere and could just as well be !Send + !Sync. But to use the Connection you have to construct sendable closures of type FnOnce(&mut rusqlite::Connection) -> R + Send + 'static, which means each closure must own all of its query parameters. If you have a &str and want to use it in a query, you have to convert it to a String so it can be moved into the closure and sent to the worker thread.

With the "worker thread" approach it is even possible to distribute the work across several connections, so that long requests such as selecting a whole chat do not stall short requests such as loading the group name. However, the Sql.execute(), Sql.insert() etc. APIs will have to change, because the params argument is not Send and may contain references to the actual query parameters.

If we want to implement a truly async API for Sql, we can add a call() method similar to the one that the tokio-rusqlite connection provides, gradually port existing code to use it instead of the fake-async execute/insert/count/query_map methods, and then replace the underlying implementation with worker threads.

flub commented Feb 19, 2023

Since I wrote that comment I'm no longer sure it's an entirely correct statement; I think we might benefit from multiple connections to some degree, since we can have several connections reading at the same time. I'm not really sure how common it is for the UIs to call core APIs concurrently, but they could.

Your description of tokio-rusqlite here does not make it sound necessarily better than the pool you've implemented in #4053. To me it seems more like something that would need to be tested and measured. Are you convinced it's worth pursuing this direction further?

link2xt commented Feb 19, 2023

I would still like to merge #4055, which removes the get_conn() API and makes all users use the call() and transaction() APIs.

Ideally call() should send futures to a multi-consumer queue served by multiple worker threads, because block_in_place is a hack. But I am indeed not sure this actually improves performance compared to the current approach, and a worker-pool implementation probably requires some unsafe code, because the compiler cannot check that the worker thread will not execute the future passed to .call() past its lifetime. If someone later implements a "worker pool" crate that can execute futures with a non-'static scope, we can switch to it easily.
