Too many requests - 429 => special RetryOptionsBase which wait for the time in Retry-After-Header #59
I have the same problem. Have you solved it yet?
Hi, not really. The main problem was that the server-side application (not under our control) used 429 but gave us the wrong waiting time in the Retry-After header. The server had a strict rate limit of 5 parallel requests: if we sent 60 in parallel, 5 succeeded and the other 55 got a 429 with a Retry-After of 1 second. On the next retry we again got only 5 through, and the remaining 50 went into the round after that. The server does not tell us the remaining quota, so we cannot optimize the requests, and the values provided in the Retry-After header are not correct. So we decided to go with a ListRetry.

Furthermore, for this issue together with issue 61, we subclassed RetryClient and _RequestContext, and the only change we made was to deep-copy the FormData. We didn't open a pull request for this yet, because aiohttp explicitly raises an exception in this situation (aio-libs/aiohttp#4351), so we are not sure what the correct fix is; this one works for us for now. The downside is that the "parallel" calls don't get through fast enough and we stress the server with more retries than needed, due to the missing values in the response.

Additionally, we changed the retry behaviour for 500 codes. Our server returns 500 too often, and a retry normally doesn't fix anything, so in our case a 500 is treated as a general exception and is not retried, except when there are network issues with an EOF, in which case we do want to retry.

Another topic we are currently fighting with: our server uses OpenID Connect, which means the access_token expires after a certain time and needs to be refreshed. So normally we need to check the validity of the token before a request is sent.

Hope this helps a bit.
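As a side note on the token-refresh point above, the pre-request validity check can be sketched as a small cache that refreshes slightly before expiry. This is only an illustration: `refresh_cb` and its `(token, lifetime_seconds)` return shape are hypothetical, not any real OIDC client API.

```python
import time

class TokenHolder:
    """Sketch of the pre-request check described above: refresh the OIDC
    access_token slightly before it expires, so requests never go out stale."""

    def __init__(self, refresh_cb, leeway: float = 30.0) -> None:
        self._refresh_cb = refresh_cb  # hypothetical: returns (token, lifetime_seconds)
        self._leeway = leeway          # refresh this many seconds before expiry
        self._token = None
        self._expires_at = 0.0

    def get(self) -> str:
        # Refresh if we never fetched a token, or if it is about to expire
        if self._token is None or time.monotonic() >= self._expires_at - self._leeway:
            self._token, lifetime = self._refresh_cb()
            self._expires_at = time.monotonic() + lifetime
        return self._token

# Demo: the token is fetched once, then served from the cache
refreshes = []
def fake_refresh():
    refreshes.append(1)
    return f"token-{len(refreshes)}", 3600.0

holder = TokenHolder(fake_refresh, leeway=30.0)
first = holder.get()
second = holder.get()
```

In a real client you would call `holder.get()` right before building each request's Authorization header.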
Now you can use the response as a parameter to your RetryOptions. Please look at this commit. So you can probably create a function like this:

```python
def get_timeout(self, attempt: int, response: Optional[ClientResponse] = None) -> float:
    if response is None:
        return 5.0
    # 'Retry-After' is a string header, so convert it to a float
    return float(response.headers['Retry-After'])
```

This should fix your problem
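Building on that, here is a slightly more defensive sketch of such an options class. The class and parameter names are assumptions in the style of aiohttp_retry's `get_timeout(attempt, response)` hook, not the library's exact API; it falls back to a fixed delay when the header is missing or unparsable, and caps the wait:

```python
from typing import Optional

class RetryAfterOptions:
    """Sketch of a RetryOptions-style class whose get_timeout() honors the
    Retry-After header on 429 responses. Names are illustrative only."""

    def __init__(self, fallback: float = 5.0, max_timeout: float = 30.0) -> None:
        self.fallback = fallback        # used when no usable header exists
        self.max_timeout = max_timeout  # cap, in case the server sends a huge value

    def get_timeout(self, attempt: int, response: Optional[object] = None) -> float:
        if response is None:
            return self.fallback
        retry_after = response.headers.get('Retry-After')
        if retry_after is None:
            return self.fallback
        try:
            # Retry-After can also be an HTTP-date; this sketch only handles
            # the delay-seconds form.
            return min(float(retry_after), self.max_timeout)
        except ValueError:
            return self.fallback

# Minimal stand-in for a ClientResponse, just for demonstration
class FakeResponse:
    def __init__(self, headers: dict) -> None:
        self.headers = headers

opts = RetryAfterOptions()
wait_header = opts.get_timeout(1, FakeResponse({'Retry-After': '2'}))
wait_missing = opts.get_timeout(1, None)
```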
Hi there, @inyutin. I think you should at least bump the minor version.
@Rongronggg9 thank you!
The new release is fine. Should v2.5.6 be yanked from PyPI? It may help prevent loose version specifiers from selecting it.
@Rongronggg9 I would prefer not to do that. In my experience, if people want to update a dependency, they update it to the latest possible version.
Nope, you might have misunderstood me. I mean: if someone does NOT frequently upgrade their dependencies, a loose version specifier may keep resolving to the broken release for a long time. If someone uses loose versions for their dependencies, they more or less show their belief that upstream developers won't introduce breaking changes in patch versions. What's more, even if a downstream developer upgrades their dependencies, the old versions of their own packages are, obviously, unaffected. That's another big problem.
@Rongronggg9 you are right, this problem may exist. However, I'm still not willing to delete the version. I'm afraid someone may already have adopted 2.5.6, and by deleting it I would make things much worse. Thank you for pointing out such an important problem. I hope I will not make the same mistake in the future. But I'm really not sure that deleting the version now is a good choice.
Yanking a version is NOT the same as deleting it. Actually, PyPI prompts you not to delete a version but to yank it when you try to delete one. Once a version is yanked, pip will choose it only if a strict version specifier explicitly requests it. On the other hand, you may unyank a yanked version according to PEP 592, so, yeah, if you are still concerned about some cases, you may just give it a try. :)
@Rongronggg9 done, thank you
Hi,
one of our current servers rate-limits its API with 429 and sends the waiting time in the Retry-After header (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Retry-After).
With the Python requests module and serial requests, the retry strategy was easy: wait exactly as long as the server tells us in the header, then try again.
With async I have no idea how to implement the same strategy, because the waiting time I get in a response only applies to that one request.
Do you have any idea how to maximize the parallel calls without waiting too long?
Thx in advance
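One way the per-request waiting asked about here can be sketched with plain asyncio: each coroutine retries only its own request and sleeps for the delay its own response reported, so unthrottled requests keep running in parallel. The `send` callable and its `(status, headers, body)` return shape are assumptions for illustration, not any real client API:

```python
import asyncio

async def fetch_with_retry(send, max_attempts: int = 5):
    """Retry a single request, sleeping for the Retry-After delay on 429."""
    status, headers, body = await send()
    for _ in range(max_attempts - 1):
        if status != 429:
            break
        # Wait exactly as long as this response asked, then retry this
        # request only; other concurrent tasks are unaffected.
        await asyncio.sleep(float(headers.get('Retry-After', 1)))
        status, headers, body = await send()
    return status, body

# Demo with a stub "server" that throttles the first call
async def _demo():
    calls = 0
    async def send():
        nonlocal calls
        calls += 1
        if calls == 1:
            return 429, {'Retry-After': '0'}, None
        return 200, {}, 'ok'
    return await fetch_with_retry(send)

status, body = asyncio.run(_demo())
```

Many such coroutines could then be run together with `asyncio.gather`, each honoring only its own Retry-After.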