Faster paths for nested parsers #1036
Conversation
I wonder why
Jitter :/. Running the benchmarks again yields this:
Which has numbers a little faster than before. I suspect that these micro-benchmarks have more jitter than the others, but none of this compensates for variations in whatever else my macOS is doing at the same time as the bench.
Yeah, I came away with similar results. I was hoping the benchmarking library itself doing a bunch of sampling runs would help alleviate this somewhat, but 🤷

I know it's common to disregard "microbenchmarks", but I have a lot of use cases that use single primitive Zod schemas, so I still personally care about the primitive performance apart from how they are composed into something like

As an aside: my dream here is to run benchmarks in CI and show diffs rather than the raw data. That would at least alleviate some of the issues with having other processes interfering with results.

Thanks for all of your great performance work @tmcw!
Co-authored-by: Scott Trinh <[email protected]>
Okay! Got another. Paths, specifically the pattern `[...ctx.path, key]`, are a performance hotspot. This PR replaces that pattern with an object, `ParseInputLazyPath`, which is a `ParseInput` but, instead of concatenating the path immediately, only creates the new array if it's needed (if there is an error). This yields a 20-30% performance boost on inputs like the realworld benchmark and the object benchmarks, and while I can't easily profile it, it should reduce memory overhead a bit too.

Before

After
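For context, here is a minimal sketch of the lazy-path idea. It is illustrative only: the type aliases and field layout are assumptions for the example, not copied from the PR, and `parent` stands in for the real parse-context type.

```ts
type ParsePathComponent = string | number;
type ParsePath = ParsePathComponent[];

// Eager version (the old hotspot): a fresh array is allocated for every
// nested key, even when parsing succeeds and the path is never read.
//   const childInput = { data: value, path: [...ctx.path, key], parent: ctx };

// Lazy version: store the parent path and the key, and only build the
// concatenated array when `path` is actually read, which in practice
// happens only while constructing an issue for a failed parse.
class ParseInputLazyPath {
  constructor(
    private readonly _path: ParsePath,
    private readonly _key: ParsePathComponent,
    readonly data: unknown,
    readonly parent: unknown // stands in for the parse context type
  ) {}

  get path(): ParsePath {
    return [...this._path, this._key];
  }
}

// Hypothetical call site inside an object parser:
//   valueSchema._parse(new ParseInputLazyPath(ctx.path, key, value, ctx));
```

The trade-off: a failing parse pays roughly the same concatenation cost as before (plus a getter call), while the common success path skips the per-key array allocation entirely.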