OAuth2 token expiring during a very long discovery makes sync impossible #6814
Possible solutions:
All of that is quite some work, and maybe a bit difficult to fix in 2.5.1
I think 3 is the only approach that's not just a workaround: there could be odd situations where discovery takes a while longer, or other network activity that's broken by expiring tokens. Maybe some server admin makes the tokens expire much more frequently. But as you say it's also error-prone. There could even be multiple requests in flight that all come back with a token-expired error. Restarting requests isn't too hard (done for redirects already), but all the queuing and holding would be quite complex.
@ogoffart @ckamm Why not blindly renew the token every 10 minutes and for each new request use the current latest one? ("viel renew hilft viel")
@guruz Does indeed sound easiest if both tokens would be valid at the same time for a bit. It'd break if people set expiry to less than the client's renew interval, but that sounds acceptable.
Refresh will kill the old token. |
@DeepDiver1975 Meh. |
Are you getting back an explicit error which tells you that the token has expired? Should be ... Following the flow in the spec, refreshing happens after expiration: https://tools.ietf.org/html/rfc6749#section-1.5 Killing the old access token after refresh is a requirement per the spec, if I'm not mistaken.
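For reference, the refresh flow in RFC 6749 §6 is a single POST with `grant_type=refresh_token`. A minimal sketch of building that request body (hypothetical helper, Python for brevity; a confidential client would additionally authenticate, typically via the Authorization header):

```python
from urllib.parse import urlencode

def build_refresh_body(refresh_token: str, client_id: str) -> str:
    """Build the POST body for an RFC 6749 section 6 refresh-token grant.

    Illustration only: parameter handling varies by server, and
    confidential clients authenticate via the Authorization header
    rather than the body.
    """
    return urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,  # public clients identify themselves here
    })

print(build_refresh_body("rt-abc", "desktop-client"))
# -> grant_type=refresh_token&refresh_token=rt-abc&client_id=desktop-client
```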
The server implementation is fine. The problem is the state machine we have client-side: it needs to support several calls at the same time, and be able to resume arbitrary calls. Currently, the sync engine aborts and lets the connection validator renew the token, then the sync restarts from scratch. As I said in the first comment, we would need to queue the pending calls and re-do the calls that failed. I'll have a try implementing it, and let's see if we merge it in 2.5 or not.
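The re-do-on-failure part can be pictured like this (a sketch with hypothetical names, Python for brevity, no Qt): a call that comes back with 401 triggers one token renewal and is re-sent with the fresh token, instead of aborting the whole sync.

```python
class RetryingClient:
    """Sketch of the client-side retry logic described above
    (hypothetical interface): renew the token once on a 401 and
    re-do the failed call with the new token."""

    def __init__(self, send, refresh):
        self.send = send        # send(job, token) -> HTTP status code
        self.refresh = refresh  # refresh() -> new access token
        self.token = refresh()

    def run(self, job):
        status = self.send(job, self.token)
        if status == 401:                        # access token expired mid-sync
            self.token = self.refresh()          # renew transparently
            status = self.send(job, self.token)  # re-do the failed call
        return status

# Demo with a fake server whose first token has already expired.
tokens = iter(["expired", "fresh"])
client = RetryingClient(
    send=lambda job, token: 200 if token == "fresh" else 401,
    refresh=lambda: next(tokens),
)
print(client.run("PROPFIND /remote.php/dav"))  # -> 200, after one renewal
```

The hard part the comments below discuss is not this single retry, but holding many in-flight jobs at once while exactly one refresh runs.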
There are many pending calls, not only one. Also, outside the sync engine you might have things (e.g. selective sync folder requests, activities, etc.) that you need to fail in a nice way or retry. Maybe a whole new state for each network job? PendingAuthTokenChange?
My idea was to add a state in the http credentials. I actually hope that the jobs can be restarted. The redirection code seems to restart them, so I assume it is possible.
Most request creation already funnels through a common code path, though not all of it does, and lots of places make requests directly. Restarting is indeed already done for redirects: it works if the body isn't sequential (I don't think we ever use that) and if there's no implicit body. Implicit bodies happen for POST requests with URL query parameters: Qt will automatically move the URL args into the request body, and I didn't find a way to access that body or the URL args afterwards. We could work around that by doing it ourselves.
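That workaround could look roughly like this (a sketch, not ownCloud's actual code): encode the query arguments into an explicit, in-memory body ourselves, so a restarted request simply re-sends the stored bytes instead of depending on the HTTP layer's unrecoverable implicit body.

```python
from urllib.parse import urlencode

def prepare_restartable_post(url: str, args: dict) -> tuple:
    """Encode the arguments into an explicit body up front (hypothetical
    helper). Because we keep the bytes ourselves, the request can be
    re-sent after a token renewal; an implicit body created by the
    HTTP layer could not be recovered for a restart."""
    body = urlencode(args).encode("ascii")
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    return url, headers, body

url, headers, body = prepare_restartable_post(
    "https://example.com/ocs/v1.php/apps", {"format": "json"})
print(body)  # -> b'format=json'
```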
An OAuth2 access token is typically valid for only one hour. Before this patch, when the token timed out during a sync, the sync was aborted, and the ConnectionValidator then requested a new token so the sync could be started over. If the discovery takes longer than the OAuth2 validity period, this means the sync can never proceed, as it would always be restarted from scratch. With this patch, we try to transparently renew the OAuth2 token and restart the jobs that failed because the access token was invalid. Note that some changes were required in the GETFile job because it handled the error itself, erroring out the jobs before the renewal logic could step in. Issue #6814
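The GETFile change can be pictured like this (hypothetical names; a sketch of the idea, not the actual patch): on an invalid-token error the job must not finalize itself, so the credential layer still gets a chance to renew the token and restart the request.

```python
class Credentials:
    """Minimal stand-in for the credential layer (hypothetical)."""
    def __init__(self):
        self.to_restart = []

    def schedule_restart(self, job):
        self.to_restart.append(job)  # re-sent once the new token arrives

class GetFileJob:
    """Sketch of the fixed behaviour: defer on 401 instead of erroring."""
    def __init__(self, credentials):
        self.credentials = credentials
        self.finished = False

    def on_reply(self, status):
        if status == 401:
            # Do NOT error out the job here: defer to the credential
            # layer so the request can be restarted with a fresh token.
            self.credentials.schedule_restart(self)
            return "deferred"
        self.finished = True
        return "ok" if status == 200 else "error"

creds = Credentials()
job = GetFileJob(creds)
print(job.on_reply(401))  # -> deferred; the job stays unfinished
print(job.finished)       # -> False
```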
As reported in #6677 and https://github.com/owncloud/enterprise/issues/2880:
The discovery of huge trees can take a long time, and if it takes more than one hour, the OAuth2 token expires and the sync aborts.
The next sync will start discovery from scratch again, which will again take more than one hour, so the client will never sync.
Workaround: use selective sync to avoid syncing part of the tree, wait until this syncs successfully, then enable more directories, little by little.