Processing requests on large repos consumes a fair bit of memory (this isn't a gitea issue per se; it comes from git itself). Normally this isn't a problem unless we end up handling requests that take a very long time (see the gist for examples). When that happens, very large (in-memory) git processes pile up as we serve these requests, and with enough of them running at once we can hit OOM and potentially kill gitea.
It seems that it might be advantageous to time out these requests and return an error to the user after some reasonable amount of time, to avoid memory pressure issues. Better to error gracefully than crash.
To this end, I've been poking at adding HTTP request timeouts to gitea here: cboylan@d11d4da
I have a few questions about this. Is there a better way to handle this in gitea? Does something like this already exist? If not, will timing out the requests also kill the backend git processes? It looks like an error is only raised when data is written to a request that has already timed out, and I don't think we write anything until git has finished doing whatever it is doing.
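For concreteness, here is roughly the shape of what I mean, written from scratch rather than taken from the linked commit; `timeoutMiddleware` and the 60-second value are placeholders of mine, not existing gitea code. As far as I understand it, the git process only gets killed if the handler actually starts it from the request's context (e.g. via `exec.CommandContext`), and that's the part I'm not sure gitea does today.

```go
package main

import (
	"context"
	"net/http"
	"time"
)

// timeoutMiddleware gives every request a deadline. Work started from
// r.Context() (e.g. exec.CommandContext(r.Context(), "git", ...)) is killed
// when the deadline fires; work started without it keeps running.
func timeoutMiddleware(d time.Duration, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ctx, cancel := context.WithTimeout(r.Context(), d)
		defer cancel()
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Stand-in for a handler that shells out to git; here we just wait
		// for the deadline and report the timeout to the client.
		<-r.Context().Done()
		http.Error(w, "request timed out", http.StatusServiceUnavailable)
	})
	http.ListenAndServe("127.0.0.1:3000", timeoutMiddleware(60*time.Second, mux))
}
```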
Mostly opening this issue to start a conversation and point at some code that might illustrate things a bit (even though it is probably terrible code on my part). Please let me know what you think and whether or not this should be pursued further.
Thank you for these pointers. It does indeed appear that there is a default git timeout of 60 seconds on all git commands that are executed (certain commands override this default with longer timeouts but nothing longer than 10 minutes).
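If I'm reading that correctly, the mechanism amounts to roughly the following; this is a minimal sketch of my understanding, not gitea's actual git module, and `runGit` plus the timeout values are illustrative only.

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// Illustrative default taken from the discussion above, not gitea's real config.
const defaultGitTimeout = 60 * time.Second

// runGit runs one git command and kills the process once timeout elapses.
func runGit(dir string, timeout time.Duration, args ...string) ([]byte, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	cmd := exec.CommandContext(ctx, "git", args...)
	cmd.Dir = dir
	return cmd.CombinedOutput()
}

func main() {
	// Long-running commands would pass a larger override instead of the default.
	out, err := runGit(".", defaultGitTimeout, "rev-parse", "HEAD")
	fmt.Println(string(out), err)
}
```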
It appears I will need to look elsewhere for the cause of these multi-hour requests. Any suggestions on how one might go about getting further profiling data?
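One option I've been looking at, assuming nothing equivalent is already exposed by gitea, is the standard Go `net/http/pprof` endpoints on a local-only listener, so heap and goroutine profiles can be pulled while one of these slow requests is in flight:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
	// Then, for example: go tool pprof http://127.0.0.1:6060/debug/pprof/heap
	// or fetch /debug/pprof/goroutine while a slow request is running.
	log.Fatal(http.ListenAndServe("127.0.0.1:6060", nil))
}
```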
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs during the next 2 weeks. Thank you for your contributions.
Not sure how I would do that.
This might also be related to #491