I used the bug format, but this is not technically a bug; rather, it is slightly incorrect handling of database connections.
Describe the bug
A run with ert3 (config specs in "To Reproduce") hit the following error from the server hosting the database:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL: remaining connection slots are reserved for non-replication superuser connections
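To confirm that the database really is out of connection slots, the server-side counters can be checked directly. This is a hypothetical diagnostic sketch, not part of ert-storage; DATABASE_URL is an assumed environment variable holding the Postgres DSN:

```python
# Hypothetical diagnostic, not part of ert-storage: assumes direct access to the
# Postgres instance via a DSN in the DATABASE_URL environment variable.
import os
from sqlalchemy import create_engine, text

engine = create_engine(os.environ["DATABASE_URL"])

with engine.connect() as conn:
    max_conns = conn.execute(text("SHOW max_connections")).scalar()
    open_conns = conn.execute(text("SELECT count(*) FROM pg_stat_activity")).scalar()
    print(f"open connections: {open_conns} / max_connections: {max_conns}")
```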
To Reproduce
Steps to reproduce the behaviour:
1. Create an ert case with 50 realisations and provide a postgres database to ert-storage. The number of input records was about 50 and the number of output records was 4.
2. Run the case and wait for feedback.
Expected behaviour
The run is expected to complete successfully.
Additional context
From some very brief googling, I found https://stackoverflow.com/questions/11847144/heroku-psql-fatal-remaining-connection-slots-are-reserved-for-non-replication which points to the following answer on a different question: https://stackoverflow.com/questions/10419665/how-does-pgbouncer-help-to-speed-up-django/10420469#10420469
It should also be noted that not all realisations run at the same time here, as we only spin up 3 dask workers at a time.
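A minimal sketch of the pooling approach the linked answers describe, assuming a single shared SQLAlchemy engine with a bounded pool; the parameter values and the records table are illustrative, not ert-storage's actual configuration:

```python
import os
from sqlalchemy import create_engine, text

# One engine per process; every task borrows from this bounded pool instead of
# opening its own connection to Postgres, so the total is capped at
# pool_size + max_overflow per process.
engine = create_engine(
    os.environ["DATABASE_URL"],   # assumed env var, e.g. postgresql+psycopg2://...
    pool_size=5,                  # persistent connections kept in the pool
    max_overflow=5,               # extra connections allowed under burst load
    pool_timeout=30,              # wait (seconds) for a free connection instead of failing
    pool_pre_ping=True,           # discard stale connections before handing them out
)

def fetch_record_count() -> int:
    # Illustrative query; the connection is returned to the pool on exit.
    with engine.connect() as conn:
        return conn.execute(text("SELECT count(*) FROM records")).scalar()
```

An external pooler such as pgbouncer, as in the second link, enforces a similar cap across many client processes, which would also cover separate worker processes each holding their own engine.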