goroutines and tcp errors #723
When it fails, what is in the PostgreSQL (server) logs?
Let me know if you want me to enable more detailed logging or another logging option; this is what I see in the log around the time of a pipe breaking:
Hi, I am also facing the same issue: random broken pipe errors. Is there any workaround for this?
@dharmjit See #871 (comment) for workarounds.
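(The linked workarounds are not reproduced in this thread. As an illustration only, a minimal sketch of the kind of mitigation usually suggested for stale pooled connections, using the standard database/sql pool settings; the connection string and the specific values here are assumptions, not taken from #871:)

```go
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/lib/pq"
)

func main() {
	// Hypothetical connection string for illustration.
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/mydb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Recycle connections before an intermediate proxy or the server can
	// silently drop them, so the pool never hands out a dead socket.
	db.SetConnMaxLifetime(5 * time.Minute)
	db.SetMaxIdleConns(10)
	db.SetMaxOpenConns(50)
}
```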
roylee17 added a commit to roylee17/sqlx that referenced this issue on Mar 21, 2021:
I'm seeing "broken pipe" errors when working with CRDB using sqlx. The issue seemed to be that the TCP connections were disconnected while the driver (pq) still held stale connections in its pool. It happens more often when the DB is behind a proxy; in our case, the pods were proxied by an Envoy sidecar. Other instances in the community reported similar issues and took different workarounds: sending periodic dummy queries from the app to mimic keepalive, lengthening the proxy idle timeout, or shortening the lifetime of DB connections. This has been reported and fixed by the lib/pq upstream in v1.9+: lib/pq#1013 lib/pq#723 lib/pq#897 lib/pq#870 grafana/grafana#29957
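As an aside, the "periodic dummy queries" workaround mentioned in that commit message is commonly implemented as a small keepalive goroutine. A minimal sketch follows; the interval, query, and error handling are assumptions, not taken from the commit:

```go
import (
	"context"
	"database/sql"
	"log"
	"time"
)

// keepAlive issues a trivial query at a fixed interval so that idle pool
// connections are not silently dropped by a proxy's idle timeout.
// It returns when ctx is cancelled.
func keepAlive(ctx context.Context, db *sql.DB, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if _, err := db.ExecContext(ctx, "SELECT 1"); err != nil {
				log.Printf("keepalive query failed: %v", err)
			}
		}
	}
}
```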
bonzofenix pushed a commit to cloudfoundry/app-autoscaler that referenced this issue on Sep 24, 2021:
To consume the fix for lib/pq#723, as it has the same symptom as issues we have seen on Azure DB for PostgreSQL.
Full disclosure: I may have something misconfigured with Postgres.
max_connections on the DB = 1000
My understanding is that once you get the db object from db.Open(), it is a connection pool and can safely be used inside goroutines for concurrent access.
However, the minimal example below does one of three things on my machine.
Lowering the goroutines int variable increases reliability; raising it makes failure a guarantee.
The number of records printed to the terminal before an error occurs changes each time I run a given binary.
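The reporter's original minimal example is not preserved in this excerpt. As an illustration only, a sketch of the pattern described above, with a single shared *sql.DB queried from many goroutines; the connection string, query, and goroutine count are assumptions, not the reporter's code:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"sync"

	_ "github.com/lib/pq"
)

func main() {
	goroutines := 100 // per the report, raising this makes failure more likely

	// Hypothetical connection string for illustration.
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/mydb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var wg sync.WaitGroup
	for i := 0; i < goroutines; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// Each goroutine runs a trivial query against the shared pool.
			var n int
			if err := db.QueryRow("SELECT 1").Scan(&n); err != nil {
				log.Printf("goroutine %d: %v", id, err)
				return
			}
			fmt.Printf("goroutine %d: got %d\n", id, n)
		}(i)
	}
	wg.Wait()
}
```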