Bug Description

When a PgConnection leaves scope / is dropped, the connection is not closed and stays open. This leads to eventual exhaustion of available Postgres connections; only after around 10-15 minutes is the connection removed. Calling PgConnection::close() works as intended, though.
The documentation for Connection states that close() is the recommended way to shut a connection down, but it also states that "This method is not required for safe and consistent operation.", which contradicts the observed behavior somewhat, especially when many connections accumulate. #2249 and #1928 might be related.
Minimal Reproduction

Cargo.toml:

[package]
name = "sqlx_test"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
sqlx = { version = "0.6.2", features = ["postgres", "runtime-tokio-native-tls"] }
tokio = { version = "1.24", features = ["full"] }
main.rs:
use tokio::time::sleep;
use std::time::Duration;
use sqlx::{Connection, PgConnection};

async fn connection_count(connection: &mut PgConnection) -> i64 {
    let conn_count: (i64,) = sqlx::query_as("SELECT sum(numbackends) FROM pg_stat_database;")
        .fetch_one(connection)
        .await
        .unwrap();
    conn_count.0
}

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    static WAIT_DURATION: Duration = Duration::from_secs(300);

    let mut cc = PgConnection::connect("postgres://postgres:postgres@localhost/test").await?;

    {
        println!("num connections before: {}", connection_count(&mut cc).await);
        let _second_connection =
            PgConnection::connect("postgres://postgres:postgres@localhost/test").await?;
        println!("num connections after: {}", connection_count(&mut cc).await);
        println!("second_connection leaves scope, waiting for {WAIT_DURATION:?}");
    }

    sleep(WAIT_DURATION).await;
    println!(
        "num connections after second_connection left scope {WAIT_DURATION:?}: {}",
        connection_count(&mut cc).await
    );

    Ok(())
}
Info
SQLx version: 0.6.2
SQLx features enabled: postgres, runtime-tokio-native-tls
Database server and version: PostgreSQL 15.1 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 11.2.1_git20220219) 11.2.1 20220219, 64-bit
rustc --version: rustc 1.66.1 (90743e729 2023-01-10)
This is perhaps poorly chosen wording in the documentation, but it is working as intended. Note also that the Connection trait is implemented for a number of different databases which may behave differently, so the documentation on that method is more about advising implementations not to rely on an explicit close to function properly.
"safe and consistent operation" here primarily means "without data loss", which is ensured by Postgres' ACID guarantees as long as all transactions are committed before drop.
Dropping the PgConnection will close the socket, but only on the client side. It doesn't send a Terminate message to the server (as that may require blocking), so it's up to the keepalive timeout on the server to notice the connection is dead and close the server-side process, which is what counts against the connection limit.
Fixing this to always do a graceful close on-drop would require an AsyncDrop abstraction, as blocking in a Drop impl could deadlock the async executor.
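Until something like that exists, the practical workaround is the explicit close() already mentioned in the report, which does send Terminate and releases the server-side backend immediately. A minimal sketch, reusing the connection string and count query from the reproduction:

use sqlx::{Connection, PgConnection};

// Closing explicitly performs a graceful shutdown, so the backend stops
// counting against Postgres' connection limit right away instead of after
// the server-side timeout.
async fn open_use_close() -> Result<(), sqlx::Error> {
    let mut conn = PgConnection::connect("postgres://postgres:postgres@localhost/test").await?;
    let (backends,): (i64,) = sqlx::query_as("SELECT sum(numbackends) FROM pg_stat_database;")
        .fetch_one(&mut conn)
        .await?;
    println!("backends while open: {backends}");
    conn.close().await?; // instead of letting `conn` drop
    Ok(())
}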