
implement an optional post-YAML cache of loaded sections #119

Merged (2 commits, Apr 30, 2018)

Conversation

doudou (Member) commented Apr 24, 2018

See e354392 for details and benchmarks.

doudou added 2 commits April 24, 2018 12:59
The optimizations introduced by 2694f14
changed the semantics of the normalized configurations, namely by
allowing typelib objects to be stored, to speed up both loading and
application of the configuration.

Fix the tests that this broke.
This was motivated by the need to load huge config files that define
splines. I'd rather not write a separate file that needs to be loaded
in C++. Luckily, it also significantly speeds up loading even small files.

Using the cache on a file with two sections and a single boolean
property makes loading 1.4x faster. A file with multiple huge
2000-element numerical arrays (300k lines of YAML) is sped up
8.5x, reducing a 2s delay to a respectable 300ms.
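The idea can be sketched as a content-keyed Marshal cache in front of the YAML parser. This is a minimal illustration with hypothetical names (`SectionCache`), not the PR's actual implementation, which caches the normalized per-section configuration objects:

~~~ruby
require 'digest'
require 'yaml'
require 'fileutils'

# Hypothetical sketch: cache the parsed result of a YAML file,
# keyed on a digest of the file's contents so that any edit
# invalidates the cached entry.
class SectionCache
  def initialize(cache_dir)
    @cache_dir = cache_dir
    FileUtils.mkdir_p(cache_dir)
  end

  def load(path)
    contents = File.read(path)
    key = Digest::SHA256.hexdigest(contents)
    cache_path = File.join(@cache_dir, key)
    if File.file?(cache_path)
      # Cache hit: deserialize with Marshal, skipping YAML parsing
      Marshal.load(File.binread(cache_path))
    else
      sections = YAML.safe_load(contents)
      File.binwrite(cache_path, Marshal.dump(sections))
      sections
    end
  end
end
~~~

Marshal deserialization is much cheaper than YAML parsing for large numerical arrays, which is where the 8.5x figure below comes from.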

Benchmark results follow:

Small file:

~~~
Warming up --------------------------------------
            no cache   457.000  i/100ms
               cache   660.000  i/100ms
Calculating -------------------------------------
            no cache      4.630k (± 0.6%) i/s -     23.307k in   5.035237s
               cache      6.671k (± 0.7%) i/s -     33.660k in   5.048211s
                   with 95.0% confidence

Comparison:
               cache:     6670.8 i/s
            no cache:     4630.2 i/s - 1.44x  (± 0.01) slower
                   with 95.0% confidence
~~~

Big file:

~~~
Warming up --------------------------------------
            no cache     1.000  i/100ms
               cache     1.000  i/100ms
Calculating -------------------------------------
            no cache      0.361  (± 0.8%) i/s -      2.000  in   5.537452s
               cache      3.091  (± 1.3%) i/s -     16.000  in   5.179861s
                   with 95.0% confidence

Comparison:
               cache:        3.1 i/s
            no cache:        0.4 i/s - 8.56x  (± 0.15) slower
                   with 95.0% confidence
~~~
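The output format above comes from benchmark-ips with bootstrap confidence intervals. A self-contained stdlib approximation of the same comparison, simulating a cache hit as a `Marshal.load` of a pre-parsed object (hypothetical, not the harness used for the numbers above), might look like:

~~~ruby
require 'benchmark'
require 'yaml'

# Simulate the "big file" case: a large numerical array.
yaml_text = "values: [#{(1..2000).to_a.join(', ')}]\n"
parsed = YAML.safe_load(yaml_text)
dumped = Marshal.dump(parsed)

# no cache: parse the YAML text every time
# cache:    deserialize the pre-parsed object with Marshal
no_cache = Benchmark.realtime { 100.times { YAML.safe_load(yaml_text) } }
cache    = Benchmark.realtime { 100.times { Marshal.load(dumped) } }
puts format('no cache: %.3fs, cache: %.3fs, speedup: %.1fx',
            no_cache, cache, no_cache / cache)
~~~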