It would be interesting to see memory usage plots, just as we have time plots.
The existing data has max_rss for each of the benchmark runs -- as a start, we can use that (though it's pretty coarse-grained). If that doesn't seem helpful, maybe we should investigate adding a small amount of memory tracking to the pystats and using that.
This is based on the “max_rss” support that pyperf has had for a long time, for which we already have data. This is pretty “coarse-grained” and affected by other things going on on the system. On the other hand, this is probably what really matters when most people consider “running out of memory”.
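As a rough sketch, pulling that number out of a pyperf JSON results file is just a matter of walking the benchmark metadata. The structure and key name below (`mem_max_rss`, in bytes) are assumptions modeled on pyperf's output format and worth checking against the actual files:

```python
import json

# Hypothetical pyperf-style results; in practice this would be
# json.load()ed from a benchmark results file.
sample = json.loads("""
{
  "benchmarks": [
    {"metadata": {"name": "nbody", "mem_max_rss": 24903680}},
    {"metadata": {"name": "pidigits", "mem_max_rss": 26214400}}
  ]
}
""")

def max_rss_by_benchmark(suite: dict) -> dict[str, int]:
    """Map benchmark name -> max RSS in bytes, if recorded."""
    return {
        bench["metadata"]["name"]: bench["metadata"]["mem_max_rss"]
        for bench in suite["benchmarks"]
        if "mem_max_rss" in bench["metadata"]
    }

print(max_rss_by_benchmark(sample))
```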
As shown by this A/A test, the variability in the results is only 0.16% overall, much better than the ~2% variability we see for timings.
If we need a more accurate number, we can add stats about the size of all objects allocated / deallocated (probably as part of the pystats system).
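For a sense of what finer-grained tracking looks like, the stdlib tracemalloc module already measures exactly this kind of thing for Python-level allocations (it counts only memory allocated through Python's allocators, not the full process footprint, so it is a complement to max RSS rather than a replacement):

```python
import tracemalloc

# Track Python-object allocations directly: current size and peak size
# of memory allocated through Python's allocators.
tracemalloc.start()

data = [list(range(1000)) for _ in range(100)]  # allocate something measurable

current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"current: {current / 1024:.1f} KiB, peak: {peak / 1024:.1f} KiB")
```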
We also don't have any data on Windows, since pyperf doesn't support the equivalent system call there.
The summary tables now include a memory usage change number (the last number, `Y.Zx m`):
Each benchmarking summary page includes the memory change number and a link to a plot of per-benchmark memory change against its immediate base.
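One plausible way to produce a single `Y.Zx` style number from per-benchmark max-RSS data is the geometric mean of the head/base ratios. This formula is an assumption for illustration, not necessarily the exact one the summary tables use:

```python
import math

def memory_change(base: dict[str, int], head: dict[str, int]) -> float:
    """Geometric mean of per-benchmark memory ratios (head / base).

    Only benchmarks present in both runs are compared. This is a sketch
    of the kind of aggregate behind a "1.02x" memory-change number.
    """
    ratios = [head[name] / base[name] for name in base if name in head]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical max-RSS values (bytes) for two revisions.
base = {"nbody": 24_903_680, "pidigits": 26_214_400}
head = {"nbody": 25_165_824, "pidigits": 26_214_400}
print(f"{memory_change(base, head):.2f}x")
```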
Cc: @brandtbucher (for the idea)