SQLite Code 5 Database Is Locked #451
It's been a while, but I believe you should configure the SQLite database with `journal_mode=WAL` and a `busy_timeout`.
So from trying this just now, it definitely works with both. Just `busy_timeout` gives slower errors (i.e. it waits for the full timeout before erroring out, even if I set it to something like 60 seconds). Just `journal_mode=WAL` means all the fetches for data that should be None return None, but every second write fails. Both together work, though! Thank you!
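For reference, here is a minimal connection-configuration sketch combining both settings through sqlx's SQLite connect options. The exact method names are taken from current sqlx and may differ in the version used in this thread, so treat this as an assumption rather than the thread's verbatim code:

```rust
use std::time::Duration;
use sqlx::sqlite::{SqliteConnectOptions, SqliteJournalMode, SqlitePool};

// Sketch, assuming current sqlx APIs: WAL lets readers proceed while a
// writer is active, and busy_timeout makes SQLite wait for a lock instead
// of failing immediately with error code 5 (SQLITE_BUSY).
async fn open_pool() -> Result<SqlitePool, sqlx::Error> {
    let options = SqliteConnectOptions::new()
        .filename("test.db") // hypothetical path for illustration
        .create_if_missing(true)
        .journal_mode(SqliteJournalMode::Wal)
        .busy_timeout(Duration::from_secs(5));
    SqlitePool::connect_with(options).await
}
```

Setting the pragmas through the connect options (rather than ad-hoc `PRAGMA` statements) ensures every pooled connection gets the same configuration.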
Okay, maybe I didn't test that solution enough. The pragmas I used for the test and the modified insert code (the request function is similar) are in the git repo, which I updated with the test code I used.
This also happens with the old packet size of 4096, and throws more errors in that case.
So your data rate may simply exceed what can be written to disk; in that case lock timeouts will always happen with multiple threads. You could add back pressure when the timeouts start happening, or just reduce the load to well below the limit. If your DB is on a hard drive instead of an SSD, switching to an SSD will drastically raise the amount of data needed to overload it. If you're on macOS, the full fsyncs SQLite needs to enforce durability are (or at least were) extra expensive.

Since there can only be a single writer in SQLite, I would send all write requests to a single writer thread/task via a channel with a size limit; the inserts then get blocked automatically when the channel fills up. With WAL, only writers can block each other, so there should be no more busy errors.

There are also tricks to speed up the writes: keep any active write transactions as short as possible, and batch into fewer, bigger statements/transactions (which combines well with a single writer task). But it will still get busy if it's overloaded. If you don't need persistence, use an in-memory DB.

There's also a possibility that some bug in the Rust libraries is causing a write transaction to get stuck on an await point. That would be harder to prove.
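The single-writer-with-back-pressure idea can be sketched with the standard library alone. A bounded channel stands in for the write queue, and the actual DB insert is a placeholder (`run_single_writer` and the packet names are hypothetical, for illustration only):

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Hypothetical sketch of the pattern described above: all writes funnel
// through one writer thread, so writers never contend with each other.
fn run_single_writer(producers: usize, per_producer: usize) -> Vec<String> {
    // Bounded channel: senders block when the queue is full,
    // which is exactly the back pressure described above.
    let (tx, rx) = sync_channel::<String>(8);

    // The single writer: the only place that would touch the DB for writes.
    let writer = thread::spawn(move || {
        let mut written = Vec::new();
        for key in rx {
            // Placeholder for the real `INSERT`; batching several received
            // items into one transaction would happen here.
            written.push(key);
        }
        written
    });

    // Many producers send their writes through the channel.
    let handles: Vec<_> = (0..producers)
        .map(|p| {
            let tx = tx.clone();
            thread::spawn(move || {
                for i in 0..per_producer {
                    tx.send(format!("packet-{p}-{i}")).unwrap();
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    drop(tx); // close the channel so the writer loop ends
    writer.join().unwrap()
}

fn main() {
    let written = run_single_writer(4, 10);
    println!("wrote {} rows via a single writer", written.len());
}
```

In an async codebase the same shape works with a bounded tokio channel; the bounded capacity is what converts overload into blocked senders instead of `SQLITE_BUSY` errors.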
From my experience, you should ideally use two pools: one for writers with a max size of 1, and one for readers. SQLx should probably provide this kind of abstraction as well.
Wait, that would work? I mean, opening two pools on the same file?
Apparently it does. It would be great if SQLx provided this abstraction. Maybe even something as simple as splitting off one connection for writing from the pool would work: mark one of the connections as able to write and the others as read-only?
This would remove the potential for locking and allow much higher performance than using a single pool with a max size of 1. Opened #459
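The two-pool workaround can be sketched as a configuration fragment. The method names follow current sqlx and are an assumption for the version discussed here; the path and pool sizes are placeholders:

```rust
use sqlx::sqlite::{SqliteConnectOptions, SqliteJournalMode, SqlitePool, SqlitePoolOptions};

// Hypothetical two-pool setup on the same file: one single-connection pool
// for all writes, one larger read-only pool. With WAL, readers never block
// the writer, and a single-connection pool means the writer cannot contend
// with itself.
async fn open_pools(path: &str) -> Result<(SqlitePool, SqlitePool), sqlx::Error> {
    let write_options = SqliteConnectOptions::new()
        .filename(path)
        .create_if_missing(true)
        .journal_mode(SqliteJournalMode::Wal);
    // Readers open the same file read-only, so they can never take write locks.
    let read_options = write_options.clone().read_only(true);

    let writers = SqlitePoolOptions::new()
        .max_connections(1) // writes queue inside the pool instead of failing busy
        .connect_with(write_options)
        .await?;
    let readers = SqlitePoolOptions::new()
        .max_connections(8) // placeholder size; tune to the read load
        .connect_with(read_options)
        .await?;
    Ok((writers, readers))
}
```

Routing all `INSERT`/`UPDATE` statements through `writers` and all `SELECT`s through `readers` gives the single-writer guarantee without a dedicated writer task.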
After writing #326 (comment) I tried to build a sort of minimal shareable example that would reproduce the problem.
The problem itself is that when I try to simultaneously read and write to the database, I don't just get varying replies; I actually get an error saying the database is locked.
I would expect the database to just wait a bit longer if it is currently locked (maybe something I could check for with the timeout function from tokio), but instead I just get an SQLite error.
I'm currently using sqlx straight from the master branch:
https://github.com/apexys/sqlx_locked_mvp/blob/master/Cargo.toml
In the database code, I create a pool, a table and an index
https://github.com/apexys/sqlx_locked_mvp/blob/master/src/main.rs
and then start two tokio tasks each for inserting and requesting data. The inserts are just random data under a string key, and the requests are basically the same.
Naively, I would expect this code not to run into an error. The database is brand new, and I would be happy even if no request ever returned anything but None.
Instead, my log looks like this:
Am I doing something wrong here?
Thanks for your help!
Quick edit: I just noticed that I get a result for packets 1-3 before I even inserted them, instead of None. This is probably just the logging being late, right?