Connection reset on heavy load #2241
Comments
Update: but not on Arch Linux (kernel 5.4). We tried recycling workers with --max-requests and the problem is still there. We'll try with an example Django app and I'll update the post. UPDATE: I can reproduce the issue with a simple Django + django-rest-framework example app.
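The reporter's snippet is not reproduced above; a minimal sketch of such an app, assuming a single DRF view returning a ~300 KB JSON payload (all names hypothetical, dropped into a standard Django project with rest_framework in INSTALLED_APPS), could look like:

```python
# views.py -- hypothetical minimal reproduction, not the reporter's code.
from rest_framework.decorators import api_view
from rest_framework.response import Response

@api_view(["GET"])
def payload(request):
    # Roughly 300 KB of JSON, matching the GET scenario in the report
    return Response({"items": ["x" * 100] * 3000})

# urls.py -- hypothetical
# from django.urls import path
# urlpatterns = [path("payload/", payload)]
```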
We had the same issue, but with nginx in front of gunicorn. The solution was to set the …
I have the same issue: over 10000+ requests, some of them raise ConnectionResetError(10054).
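As a client-side mitigation (an assumption about what helps here, not what this commenter did), resets on reused keep-alive connections can be retried with requests/urllib3 (≥ 1.26 for allowed_methods):

```python
# Hypothetical client-side workaround: retry requests that fail with a
# connection reset (WSAECONNRESET / 10054 on Windows). This masks the
# symptom; it does not explain the server-side RSTs.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retry = Retry(
    total=3,
    connect=3,
    read=3,
    backoff_factor=0.3,
    # Retrying POST assumes the handlers are idempotent.
    allowed_methods=frozenset(["GET", "POST"]),
)
session = requests.Session()
session.mount("http://", HTTPAdapter(max_retries=retry))
session.mount("https://", HTTPAdapter(max_retries=retry))

resp = session.get("http://example.com/api/items")  # hypothetical URL
```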
I fixed it.
Recently there are just fewer errors; I can't totally fix it.
No activity for a while; closing. Feel free to create a new ticket if needed.
Original report
Under heavy load from a single source, gunicorn resets some TCP connections without logging anything.
It only happens when I query the server from an external computer and does not happen when requests come from localhost. I tried with a server at OVH.com and on a LAN with 2 computers.
Environment
Python 3.8.0
gunicorn 20.0.4
Django 2.2.9
Ubuntu Server 18.04.3 LTS
The gunicorn command:
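The exact command is not preserved above; a representative invocation for a Django app, with hypothetical module name, worker count, and bind address, might be:

```
gunicorn myproject.wsgi:application --workers 4 --bind 0.0.0.0:8000
```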
Reproducing
I'm generating thousands of requests (GET or POST) and sending 300 of them at a time. When a request responds, I send a new one.
After a few thousand requests (~2000 to ~10000), I get at least one connection reset.
I tried GET requests returning a 300 KB JSON body, POST requests running a dummy for-loop and returning a 204, and POST requests doing some DB work and returning a small JSON body.
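A load generator matching this description can be sketched as follows (the endpoint URL and request counts are assumptions):

```python
# Hypothetical load generator: submit 10000 GETs with at most 300 in
# flight, printing any connection resets. URL and counts are assumptions.
import concurrent.futures
import requests

URL = "http://server.example/api/payload/"  # hypothetical endpoint
TOTAL = 10_000
CONCURRENCY = 300

def one_request(i):
    try:
        return requests.get(URL, timeout=30).status_code
    except requests.exceptions.ConnectionError as exc:
        return f"request {i}: {exc}"

with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    futures = [pool.submit(one_request, i) for i in range(TOTAL)]
    # A new request starts as soon as a worker frees up, matching
    # "when a request responds, I send a new one".
    for fut in concurrent.futures.as_completed(futures):
        result = fut.result()
        if not isinstance(result, int):
            print(result)
```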
There is no error in the logs; we only see successful HTTP requests.
We ran tcpdump and see some TCP RST segments, but we can't tell where they come from.
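One way to narrow down the origin is to capture RST segments on both endpoints and compare the two captures; a sketch with scapy (the interface name is an assumption):

```python
# Hypothetical RST capture with scapy (run as root on both endpoints).
# An RST the client receives that the server's host never put on the
# wire points at a middlebox or the kernel rather than gunicorn itself.
from scapy.all import sniff

def show(pkt):
    print(pkt.summary())

# BPF filter: TCP segments with the RST flag set
sniff(iface="eth0", filter="tcp[tcpflags] & tcp-rst != 0", prn=show)
```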
We tried running the app in a CPU-limited Docker container to rule out networking failures caused by high CPU usage, and the problem is the same.
The error is the same through an Nginx reverse proxy.
Do you have any ideas on how I can get more information about what is happening?