Timeout on start #2
Yup, right now, mainly out of curiosity about how well it might work in various environments (and thanks to the wonderful HackerNews API design), it just goes all-in on loading up the stories. I'm likely going to change it up a bit to either, as you suggest, do some form of pagination, or perhaps chunk up the requests and add them to each of the lists in the background.
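The chunking idea mentioned above can be sketched in a few lines; `chunked` here is an illustrative helper, not code from this repo:

```python
def chunked(item_ids: list[int], size: int) -> list[list[int]]:
    """Split a list of item IDs into fixed-size batches."""
    return [item_ids[start:start + size] for start in range(0, len(item_ids), size)]

# 500 story IDs fetched 100 at a time instead of all at once.
batches = chunked(list(range(500)), 100)
print(len(batches))  # 5
```

Each batch could then be requested and appended to the relevant list in the background, keeping the UI responsive while later batches load.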
You're welcome! Since I'm based in Ireland, I suspect the requests have to travel across the ocean, so it's slower to fetch all the data :)
Pfft, I'm in Scotland, the packets have to get past you before they get to me! :-P
@mihaitodor Are you able to check out a branch from this repo and test it for me?
@davep Of course! Which branch should I test?
Awesome, I'll ping when I have it up in a wee bit (there's zero rush to test of course). I've got an easy-to-plug-in approach I'd like to try out.
Due to the way the HackerNews API works, getting items is a bit of a pain. There's no "get me the data for all these items please"; it's "get me a list of IDs and then make me load every single one individually". To start with I just went all in on loading items up in parallel, which works fine for me but, eh, there are some situations where 500 simultaneous HTTP requests is a *bit* over the top (despite it being 2023! IKR?!?). So, in an initial effort to explore solutions for #2 without leaning into pagination, let's have a play with limiting the number of concurrent connections and see what works.
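A minimal sketch of that idea, with an `asyncio.Semaphore` capping the number of in-flight requests. The `fetch_item`/`fetch_items` names, the default of 50, and the stand-in for the HTTP call are illustrative, not the actual code from this repo:

```python
import asyncio

HN_ITEM_URL = "https://hacker-news.firebaseio.com/v0/item/{}.json"

async def fetch_item(semaphore: asyncio.Semaphore, item_id: int) -> dict:
    # Blocks here once max_concurrency requests are already in flight.
    async with semaphore:
        # Stand-in for the real request, e.g. with httpx:
        #   response = await client.get(HN_ITEM_URL.format(item_id))
        #   return response.json()
        await asyncio.sleep(0)
        return {"id": item_id}

async def fetch_items(item_ids: list[int], max_concurrency: int = 50) -> list[dict]:
    semaphore = asyncio.Semaphore(max_concurrency)
    # gather preserves input order, so results line up with the ID list.
    return await asyncio.gather(*(fetch_item(semaphore, i) for i in item_ids))

items = asyncio.run(fetch_items(list(range(500))))
print(len(items))  # 500
```

The appeal of this approach is that it leaves the "fetch everything" design intact; only the degree of parallelism changes.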
Okay, that's up now. I'd be interested to see if this makes any difference for you; feel free to muck with the max concurrency value, which is currently hard-coded at 50, if it doesn't help, just to see if it does make any difference at some level. If this seems like a reasonable approach I'll expose it as a configuration value, or environment variable, or something; but ideally with a default that is a little more reasonable than the current nuclear option. Yes, this is an attempt to not have to rework the whole design to use paging; at least for tonight. ;-)
No dice... It choked here on these requests:
LE: I'm running Python 3.11.4 if that helps. Also, I used
Damn. Thanks for trying. I was hoping that, as you suggested, the main issue was the number of concurrent requests. So it does seem to be a pure timeout issue? Would you mind doing a quick test at your end, around line 76 in
Adding
Okay, cool, thanks for that! Assuming when you say you couldn't get it to work on the

In that case, for the moment anyway, I'll go ahead with this change. There's also going to be two new values in the config file (which lives in

Also... hey, thanks for raising the issue. It's been really useful! Always a joy when someone runs into a problem and flags it up. :-)
You're welcome!
Yep!
Happy to help! I'm also a daily HN reader and I like this TUI app a lot! I'm thinking to start using it instead of the web-based one, at least for skimming across posts, which is what I usually do, although I also upvote stuff that catches my eye, so it would be nice to also support some basic interactions like that in the future 😅 Not sure if you're aware of other similar sites, but there's also https://lobste.rs, https://slashdot.org and https://tildes.net which I also browse sometimes, but HN is my usual goto place for updates. Would be cool to support some of those as well 😅 |
Oh man, I used to live on /. back in the day. Can't say I know the other two though. I'll take a look. |
v0.1.1 is up on PyPI. 🤞
Works as expected, thank you for the quick fix! 🥇
If you want a Lobsters invite, message me on Fosstodon and I'll get you sorted. |
Hey @davep, thanks for taking the time to put this project together! I really like it!
One issue is that, on startup, it tries to make 500+ HTTP calls and, since the httpx default timeout is 5 seconds, some requests end up failing and the UI just shows a rather cryptic "[Errno 8] nodename nor servname provided, or not known" error when that happens. I think this is because it tries to fetch details about all the posts that the API sends instead of just the top 30, but some pagination would be better / faster. I haven't dug enough into the code to see how it's all wired up, but, at the very least, it would help to be able to configure the AsyncClient HTTP timeout in oshit/hn/client.py.