'NoneType' object has no attribute 'resume_reading' #1031
Comments
With this error I have no confidence that the pool hasn't been polluted. Can someone comment on how bad this is?
Hi @thehesiod. In general, what helps a lot with this kind of issue is a stand-alone reproduction snippet, with a corresponding well-known server setup. I see the nasa.gov endpoint you're targeting is (obviously) protected behind auth, so I can't reproduce, and besides we don't know what kind of server setup that endpoint is running (the bpo ticket mentions possibly "slow SSL", but I'm not sure what that means in practice). Are you able to reproduce the bug by running the reproduction snippet in https://bugs.python.org/issue36098, stripping the server code and running the client with […]?
Access is actually free: https://disc.gsfc.nasa.gov/data-access. However, I was hoping you would have a work-around, since libraries should try to avoid breaking catastrophically :) e.g. internally catching this, throwing away the connector, and raising some sort of broken-pool exception. I'm pretty sure it's triggered by this underlying asyncio issue.
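To make that concrete, here's a minimal sketch of the kind of handling I mean, assuming an httpx AsyncClient; the BrokenPoolError name and the discard-and-rebuild policy are hypothetical, not anything httpx actually provides:

```python
import httpx

class BrokenPoolError(Exception):
    """Hypothetical exception: the pool may be polluted and was discarded."""

class SelfHealingClient:
    """Wraps httpx.AsyncClient and rebuilds it when the asyncio bug fires."""

    def __init__(self) -> None:
        self._client = httpx.AsyncClient()

    async def get(self, url: str, **kwargs) -> httpx.Response:
        try:
            return await self._client.get(url, **kwargs)
        except AttributeError as exc:
            # The 'resume_reading' failure discussed in this issue: we can't
            # tell which pooled connection is bad, so discard the whole pool.
            broken, self._client = self._client, httpx.AsyncClient()
            await broken.aclose()
            raise BrokenPoolError("pool discarded; safe to retry") from exc

    async def aclose(self) -> None:
        await self._client.aclose()
```

The point is just that the caller sees one well-defined exception meaning "the pool was thrown away, it's safe to retry", instead of an assert from asyncio internals.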
Please be aware of your tone. You're admitting it's an underlying asyncio issue. Your use case is very specific and triggered only after tens of thousands of requests, yet you're passive-aggressively accusing the authors of not foreseeing it or having a work-around for it.
I'm not sure where you're inferring all that from. I'm a friendly guy, and I'm just asking whether this library can handle an issue that it could not have foreseen and that isn't its fault. I'm not saying there's anything wrong with httpx; the issue is most likely in the underlying asyncio module, and there's no way this library could have foreseen that.

Here's all I'm asking: is there a way for httpx to handle this asyncio issue so that it stays in a "good" state if asyncio throws a grenade at us (pool still valid, bad connector dropped)? Maybe this is already true, but as a client I have no way to know, since an assert is thrown from the bowels of asyncio. That's why I suggested it would be nice if this were caught and handled. As a client I can't use this library without some behavior like that, because I won't know whether the error is coming from httpx or some other library, and whether it's OK to ignore. This is basically a nice-to-have that would let me use this library until an asyncio fix comes out (if ever). Any use case can be considered "very specific"; I don't think that's a useful way to approach an issue.
@thehesiod I'm joining Yeray's comment here: reading your comments, I'm under the impression that you're expecting us to fix things for you. I don't think that's what you mean to convey, but in any case, that's not how I envision open source working. To answer your question: I personally cannot guarantee that just catching the AttributeError would leave the pool in a valid state. In the meantime I'm going to close this as an "external bug" in asyncio. Thanks all!
Reproduced here: https://repl.it/repls/PristineBonyBugs#main.py. What's funny is it doesn't reproduce locally, but it does very easily on that site.
It just uses httpx + a core asyncio server.
Another data point: this does not seem to reproduce with aiohttp: https://repl.it/@thehesiod/PristineBonyBugs-1#main.py. So there's something specific about how httpx interacts with the underlying asyncio subsystem.
Updated the testcase to not require gather, and now it happens immediately both on that site and locally.
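For anyone who can't reach the repl.it links, the shape of the testcase is roughly this; a hedged sketch, not a verbatim copy (the port, payload, and request count are arbitrary): a bare asyncio server that closes the connection after each response, plus an httpx client issuing sequential requests.

```python
import asyncio
import httpx

async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    # Minimal HTTP/1.1 handler: consume the request head, answer, close.
    while (await reader.readline()).strip():
        pass  # read request line and headers until the blank line
    writer.write(
        b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nok"
    )
    await writer.drain()
    writer.close()

async def main() -> None:
    server = await asyncio.start_server(handle, "127.0.0.1", 8080)
    async with httpx.AsyncClient() as client:
        for _ in range(10_000):  # sequential requests; no gather required
            await client.get("http://127.0.0.1:8080/")
    server.close()
    await server.wait_closed()

asyncio.run(main())
```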
Investigating this some, it seems like httpx is doing an extra read after the connection is closed, whereas aiohttp does not. So this may in fact be an httpx bug. Ideally the […]
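To illustrate the "don't read after close" idea with plain asyncio streams (this is not httpx's code, just a sketch of the guard):

```python
import asyncio

async def read_body(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> bytes:
    chunks = []
    while True:
        # Guard: once the transport is closing, don't issue another read.
        # A late read can hit asyncio internals whose transport reference
        # has already been dropped (the resume_reading failure above).
        if writer.transport.is_closing():
            break
        chunk = await reader.read(65536)
        if not chunk:  # EOF
            break
        chunks.append(chunk)
    return b"".join(chunks)
```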
So, boiling this down, I think there are two actionables:
I'm afraid of 2) potentially causing data corruption. In my particular case I think I can do some ETag validation; I'll report back if 2) doesn't work. But I need help with 1).
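For reference, the ETag validation I have in mind is roughly this; it assumes the server's ETag is the hex MD5 of the payload, which holds for many static-file servers but isn't guaranteed by HTTP:

```python
import hashlib
import httpx

def body_matches_etag(response: httpx.Response) -> bool:
    """Best-effort corruption check: compare the body's MD5 to the ETag.

    Assumes the ETag is the hex MD5 of the payload; this is common for
    static-file servers but not required by the HTTP spec.
    """
    etag = response.headers.get("etag", "").strip('"')
    if not etag:
        return True  # nothing to validate against
    return hashlib.md5(response.content).hexdigest() == etag
```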
Checklist
The bug is reproducible against the latest release and/or master.
Describe the bug
I recently tried switching to httpx because of instability with aiohttp. However, after a few tens of thousands of requests against URLs like
https://hydro1.gesdisc.eosdis.nasa.gov/data/NLDAS/NLDAS_FORA0125_H.002/2020/172/NLDAS_FORA0125_H.A20200620.1000.002.grb
with headers {'If-Modified-Since': 'Mon, 17 Aug 2009 16:20:16 GMT'} and 10 workers, I got an error stack like this:

It seems similar to what's reported in https://bugs.python.org/issue36098, so it may be an underlying asyncio bug.
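Roughly, the request pattern looks like this (a sketch: the queue-based worker layout and function names are mine, and the auth setup is omitted):

```python
import asyncio
import httpx

HEADERS = {"If-Modified-Since": "Mon, 17 Aug 2009 16:20:16 GMT"}

async def worker(client: httpx.AsyncClient, urls: "asyncio.Queue[str]") -> None:
    # Each worker pulls URLs off the shared queue until cancelled.
    while True:
        url = await urls.get()
        try:
            await client.get(url, headers=HEADERS)
        finally:
            urls.task_done()

async def fetch_all(url_list: list) -> None:
    queue: "asyncio.Queue[str]" = asyncio.Queue()
    for url in url_list:
        queue.put_nowait(url)
    async with httpx.AsyncClient() as client:
        tasks = [asyncio.create_task(worker(client, queue)) for _ in range(10)]
        await queue.join()
        for task in tasks:
            task.cancel()
```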
To reproduce
Reproduction steps are as above; it takes many, many, many requests until it happens. Please let me know how I can help, though I think there's enough information here that a testcase shouldn't be needed.
Expected behavior
An attribute error should never be triggered.
Actual behavior
An attribute error happens after tens of thousands of requests.
Debugging material
see above
Environment
Additional context