I have to go through all the entries to find out how many there are #185
Replaced by #185 (comment), since it's less rambly.
Use cases

There are a lot of ways to aggregate the stuff to be counted. Allowing everything would complicate things a lot, so it makes sense to start from some known use cases, in order of decreasing usefulness.
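For illustration, here's roughly what the caller-facing side of those use cases could look like; the names (`get_entry_counts()`, `EntryCounts`) are assumptions for this sketch, not a committed API:

```python
from dataclasses import dataclass

@dataclass
class EntryCounts:
    # The facets from the issue: total, read, important.
    total: int
    read: int
    important: int

# Hypothetical usage (names are illustrative):
#
#   counts = reader.get_entry_counts()            # across all entries
#   counts = reader.get_entry_counts(feed=url)    # for a single feed
#
#   # "for each feed" means one call per feed -- the n+1 concern
#   # discussed under Performance below:
#   per_feed = {f.url: reader.get_entry_counts(feed=f.url)
#               for f in reader.get_feeds()}
```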
Performance

All of the above can be done by having new count methods. The "for each" use cases would require calling them many times, which may get inefficient (the n+1 query problem). Initially, we can just accept this, since for the SQLite storage it should be OK. If we ever get a storage implementation where it matters more, we can address it in #191 ("with_counts").

I initially expected some of the "facets" (e.g. read) would be relatively slow, since without indexes they have to do a full table scan. While this is true, they're not much slower than a plain count(*).

Here are some timings for a 100+ feeds, 11k entries database for various queries:
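For the SQLite storage, computing all the facets in one pass keeps this cheap even without indexes. A minimal sketch, assuming a simplified `entries` table with `feed`, `read`, and `important` columns (the real schema differs):

```python
import sqlite3

def entry_counts(db: sqlite3.Connection, feed=None):
    """Count total/read/important entries in a single table scan,
    instead of running one count(*) query per facet."""
    query = """
        SELECT
            count(*),
            coalesce(sum(read = 1), 0),
            coalesce(sum(important = 1), 0)
        FROM entries
    """
    params = ()
    if feed is not None:
        query += " WHERE feed = ?"
        params = (feed,)
    return db.execute(query, params).fetchone()

def entry_counts_by_feed(db: sqlite3.Connection):
    """One GROUP BY query for all feeds, avoiding the n+1 pattern
    of calling entry_counts() once per feed."""
    return db.execute("""
        SELECT feed, count(*), coalesce(sum(read = 1), 0)
        FROM entries
        GROUP BY feed
    """).fetchall()
```

The conditional sums work because SQLite comparisons evaluate to 0/1, and `coalesce` covers the empty-table case where `sum()` returns NULL.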
To do (I'll only do the minimum; we can add stuff later):
Some manual ("page generated in about ... seconds") web app benchmarks, using my "production" database; I picked the smallest time of 5-10 refreshes.

On my laptop:
On a t3a.nano instance:
As expected, getting counts for each feed does make /feeds slower (6-8 times).
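A minimal sketch of that measurement method (the URL is a placeholder; the actual address depends on how the web app is served):

```python
import time
import urllib.request

def min_page_time(url, tries=10):
    """Fetch a page `tries` times and keep the smallest wall-clock time,
    mirroring the "smallest time of 5-10 refreshes" approach above."""
    best = float("inf")
    for _ in range(tries):
        start = time.perf_counter()
        urllib.request.urlopen(url).read()
        best = min(best, time.perf_counter() - start)
    return best

# e.g.: min_page_time("http://localhost:8080/feeds")  # placeholder URL
```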
BTW, this took about 14h 😞
I have to go through all the entries to find out how many there are (total, read, important, per feed etc.).