These benchmarks were generated by running a computation equivalent to the following in various libraries on Node 10.40.0 on a 2015 MacBook Pro:
chainFrom([1, 2, /* ... */ n])
    .map(x => x * 2)
    .filter(x => x % 5 !== 0)
    .map(x => x + 1)
    .filter(x => x % 2 === 0)
    .toArray();
The benchmark source code can be found here.
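For context, the "hand-optimized loop" entry in the results below refers to the same pipeline fused into a single plain loop. The actual baseline is in the linked benchmark source; the following is only an illustrative sketch (the `handOptimized` name is made up for this example):

```js
// Illustrative sketch of the fused, single-pass equivalent of the chain above.
// Not the exact benchmark code; see the linked source for the real baseline.
function handOptimized(input) {
    const result = [];
    for (let i = 0; i < input.length; i++) {
        const doubled = input[i] * 2;        // .map(x => x * 2)
        if (doubled % 5 === 0) continue;     // .filter(x => x % 5 !== 0)
        const incremented = doubled + 1;     // .map(x => x + 1)
        if (incremented % 2 !== 0) continue; // .filter(x => x % 2 === 0)
        result.push(incremented);
    }
    return result;
}
```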
n = 10
------
transducist x 1,092,348 ops/sec ±1.34% (89 runs sampled)
lodash (without chain) x 1,324,101 ops/sec ±0.74% (89 runs sampled)
lodash (with chain) x 362,054 ops/sec ±2.80% (83 runs sampled)
ramda x 476,214 ops/sec ±0.55% (93 runs sampled)
lazy.js x 2,652,169 ops/sec ±1.22% (90 runs sampled)
transducers.js x 1,099,321 ops/sec ±0.60% (90 runs sampled)
transducers-js x 687,221 ops/sec ±0.96% (93 runs sampled)
native array methods x 3,685,880 ops/sec ±0.45% (94 runs sampled)
hand-optimized loop x 19,250,719 ops/sec ±2.01% (89 runs sampled)
n = 100
-------
transducist x 240,872 ops/sec ±0.88% (93 runs sampled)
lodash (without chain) x 197,158 ops/sec ±0.60% (94 runs sampled)
lodash (with chain) x 142,932 ops/sec ±0.61% (94 runs sampled)
ramda x 196,973 ops/sec ±0.84% (87 runs sampled)
lazy.js x 478,335 ops/sec ±0.57% (94 runs sampled)
transducers.js x 235,804 ops/sec ±0.69% (92 runs sampled)
transducers-js x 164,375 ops/sec ±0.64% (92 runs sampled)
native array methods x 195,732 ops/sec ±0.58% (94 runs sampled)
hand-optimized loop x 2,654,405 ops/sec ±0.57% (89 runs sampled)
n = 1,000
---------
transducist x 25,987 ops/sec ±0.60% (91 runs sampled)
lodash (without chain) x 19,804 ops/sec ±0.58% (93 runs sampled)
lodash (with chain) x 18,537 ops/sec ±0.82% (88 runs sampled)
ramda x 27,283 ops/sec ±0.80% (92 runs sampled)
lazy.js x 56,104 ops/sec ±0.66% (94 runs sampled)
transducers.js x 25,595 ops/sec ±0.88% (89 runs sampled)
transducers-js x 19,098 ops/sec ±0.65% (94 runs sampled)
native array methods x 19,179 ops/sec ±0.67% (91 runs sampled)
hand-optimized loop x 247,362 ops/sec ±1.25% (90 runs sampled)
n = 10,000
----------
transducist x 2,623 ops/sec ±0.81% (94 runs sampled)
lodash (without chain) x 1,973 ops/sec ±0.54% (93 runs sampled)
lodash (with chain) x 1,954 ops/sec ±0.68% (93 runs sampled)
ramda x 2,980 ops/sec ±0.65% (93 runs sampled)
lazy.js x 5,921 ops/sec ±0.61% (93 runs sampled)
transducers.js x 2,724 ops/sec ±0.43% (94 runs sampled)
transducers-js x 1,910 ops/sec ±0.93% (93 runs sampled)
native array methods x 1,873 ops/sec ±0.92% (91 runs sampled)
hand-optimized loop x 26,402 ops/sec ±0.58% (91 runs sampled)
n = 100,000
-----------
transducist x 261 ops/sec ±0.53% (87 runs sampled)
lodash (without chain) x 141 ops/sec ±1.08% (79 runs sampled)
lodash (with chain) x 138 ops/sec ±1.11% (77 runs sampled)
ramda x 186 ops/sec ±1.17% (78 runs sampled)
lazy.js x 594 ops/sec ±0.72% (90 runs sampled)
transducers.js x 244 ops/sec ±0.52% (87 runs sampled)
transducers-js x 184 ops/sec ±0.65% (84 runs sampled)
native array methods x 33.03 ops/sec ±0.85% (57 runs sampled)
hand-optimized loop x 2,428 ops/sec ±0.34% (96 runs sampled)
- Lazy.js is by far the fastest library tested, roughly doubling the performance of its closest competitor at every size.
- After Lazy.js, Transducist's performance is competitive with or superior to that of every other library tested. In particular, it is close to Ramda and transducers.js.
- Note, however, that while Ramda has comparable performance on this benchmark task, its typical usage does not provide short-circuiting. In particular, if the task were changed to add a `.take(10)` at the end of the chain, then Transducist would complete nearly instantly while Ramda would take just as long (see the sketch after this list).
- Lodash performed surprisingly poorly, and its chained API is slower than its non-chained one at every element count.
- Native array methods are fast at low element counts, but are overtaken by chaining libraries at around 100 elements.
- Of course, the fastest of all is writing an optimized loop by hand, which is roughly 5x as fast as Lazy.js and 10x as fast as the other competitive libraries.
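To make the short-circuiting point concrete, here is a sketch of the benchmark pipeline with a `.take(10)` step appended, assuming the library is imported from the `transducist` package (the `n`, `input`, and `first10` names are just for illustration):

```js
import { chainFrom } from "transducist";

const n = 100000;
// The same [1, 2, ..., n] input used in the benchmark.
const input = Array.from({ length: n }, (_, i) => i + 1);

// Same pipeline as the benchmark, truncated to the first 10 results. Because
// the chain short-circuits, only a small prefix of `input` is processed, so
// the cost is essentially independent of n.
const first10 = chainFrom(input)
    .map(x => x * 2)
    .filter(x => x % 5 !== 0)
    .map(x => x + 1)
    .filter(x => x % 2 === 0)
    .take(10)
    .toArray();
```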