
Add a maximum allocation size when decoding #42

Merged
flavray merged 1 commit into master from u/flavr/max-allocation-bytes on Jun 24, 2018

Conversation

@flavray (Owner) commented on Jun 23, 2018:

Fixes #40

On bogus input data, decoding could allocate an insane amount of
memory (if a length field is mis-encoded as a huge number) and panic.
To avoid this behaviour, this change introduces a limit (512MB by
default) on any allocation performed while decoding data.
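
To make the guard concrete, here is a minimal sketch of the idea, with hypothetical names (`MAX_ALLOCATION_BYTES`, `read_exact_capped`) rather than the crate's actual API: validate any length read from the wire against the cap before allocating a buffer for it.

```rust
use std::io::Read;

// Hypothetical constant mirroring the 512MB default described above.
const MAX_ALLOCATION_BYTES: usize = 512 * 1024 * 1024;

/// Read `len` bytes, refusing to allocate more than the configured cap.
fn read_exact_capped<R: Read>(reader: &mut R, len: usize) -> Result<Vec<u8>, String> {
    // Reject the length *before* allocating, so a mis-encoded length
    // field can no longer trigger a gigantic allocation and a panic.
    if len > MAX_ALLOCATION_BYTES {
        return Err(format!(
            "unable to allocate {} bytes (maximum allowed: {})",
            len, MAX_ALLOCATION_BYTES
        ));
    }
    let mut buf = vec![0u8; len];
    reader.read_exact(&mut buf).map_err(|e| e.to_string())?;
    Ok(buf)
}
```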

It also returns errors when decoding bogus int/long values that would cause overflows.
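
For the int/long case, here is a sketch of a checked zigzag-varint decoder (again hypothetical, following the Avro wire format): a valid 64-bit varint fits in at most 10 bytes, so a longer run of continuation bytes, or a 10th byte with more than one significant bit, is reported as an error instead of being allowed to overflow.

```rust
use std::io::Read;

/// Decode a zigzag varint-encoded i64 from `reader`, returning an error
/// instead of silently overflowing on bogus input.
fn decode_varint_i64<R: Read>(reader: &mut R) -> Result<i64, String> {
    let mut value: u64 = 0;
    // A valid 64-bit varint occupies at most 10 bytes.
    for i in 0..10u32 {
        let mut byte = [0u8; 1];
        reader.read_exact(&mut byte).map_err(|e| e.to_string())?;
        let bits = u64::from(byte[0] & 0x7f);
        // On the 10th byte, only the lowest bit may be set; anything else
        // would overflow 64 bits.
        if i == 9 && bits > 1 {
            return Err("bogus varint: overflows a 64-bit integer".to_string());
        }
        value |= bits << (7 * i);
        if byte[0] & 0x80 == 0 {
            // Zigzag-decode the unsigned value back to a signed one.
            return Ok((value >> 1) as i64 ^ -((value & 1) as i64));
        }
    }
    Err("bogus varint: more than 10 bytes".to_string())
}
```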

Benchmarks don't seem to be impacted by the change:

New
running 14 tests
test bench_big_schema_read_100000_record   ... bench: 122,871,823 ns/iter (+/- 9,714,148)
test bench_big_schema_read_10000_record    ... bench:  12,737,304 ns/iter (+/- 2,848,853)
test bench_big_schema_read_100_record      ... bench:     134,765 ns/iter (+/- 19,624)
test bench_big_schema_read_1_record        ... bench:      16,293 ns/iter (+/- 7,916)
test bench_big_schema_write_10000_record   ... bench:   3,403,443 ns/iter (+/- 1,774,628)
test bench_big_schema_write_100_record     ... bench:      26,402 ns/iter (+/- 11,252)
test bench_big_schema_write_1_record       ... bench:       4,644 ns/iter (+/- 2,646)
test bench_file_quickstop_null             ... bench:   3,758,407 ns/iter (+/- 1,489,760)
test bench_small_schema_read_10000_record  ... bench:   1,803,131 ns/iter (+/- 211,538)
test bench_small_schema_read_100_record    ... bench:      20,728 ns/iter (+/- 16,671)
test bench_small_schema_read_1_record      ... bench:       3,730 ns/iter (+/- 2,076)
test bench_small_schema_write_10000_record ... bench:     342,308 ns/iter (+/- 148,995)
test bench_small_schema_write_100_record   ... bench:       5,753 ns/iter (+/- 2,938)
test bench_small_schema_write_1_record     ... bench:       2,418 ns/iter (+/- 1,512)

test result: ok. 0 passed; 0 failed; 0 ignored; 14 measured; 0 filtered out

     Running target/release/deps/serde_json-3308dbc1bdb10823

running 1 test
test bench_big_schema_json_read_10000_record ... bench:  43,558,406 ns/iter (+/- 8,259,033)

test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured; 0 filtered out

     Running target/release/deps/single-ed0c4c0d15cd731f

running 2 tests
test bench_big_schema_write_record   ... bench:       1,470 ns/iter (+/- 208)
test bench_small_schema_write_record ... bench:         216 ns/iter (+/- 58)

master
running 14 tests
test bench_big_schema_read_100000_record   ... bench: 128,860,834 ns/iter (+/- 14,941,030)
test bench_big_schema_read_10000_record    ... bench:  12,426,272 ns/iter (+/- 1,840,947)
test bench_big_schema_read_100_record      ... bench:     145,451 ns/iter (+/- 25,811)
test bench_big_schema_read_1_record        ... bench:      16,981 ns/iter (+/- 5,188)
test bench_big_schema_write_10000_record   ... bench:   2,682,114 ns/iter (+/- 554,021)
test bench_big_schema_write_100_record     ... bench:      29,087 ns/iter (+/- 8,408)
test bench_big_schema_write_1_record       ... bench:       4,711 ns/iter (+/- 2,229)
test bench_file_quickstop_null             ... bench:   4,171,361 ns/iter (+/- 5,748,732)
test bench_small_schema_read_10000_record  ... bench:   2,255,414 ns/iter (+/- 1,124,975)
test bench_small_schema_read_100_record    ... bench:      23,436 ns/iter (+/- 12,716)
test bench_small_schema_read_1_record      ... bench:       4,151 ns/iter (+/- 4,216)
test bench_small_schema_write_10000_record ... bench:     397,890 ns/iter (+/- 88,784)
test bench_small_schema_write_100_record   ... bench:       6,349 ns/iter (+/- 1,322)
test bench_small_schema_write_1_record     ... bench:       2,400 ns/iter (+/- 577)

test result: ok. 0 passed; 0 failed; 0 ignored; 14 measured; 0 filtered out

     Running target/release/deps/serde_json-e71f91dffbbddc9a

running 1 test
test bench_big_schema_json_read_10000_record ... bench:  41,608,440 ns/iter (+/- 16,529,868)

test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured; 0 filtered out

     Running target/release/deps/single-d0d7f7569f4fedb1

running 2 tests
test bench_big_schema_write_record   ... bench:       1,289 ns/iter (+/- 451)
test bench_small_schema_write_record ... bench:         176 ns/iter (+/- 19)

`make lint` decided to change a bunch of stuff. The changes don't look terrible :)

@flavray requested a review from poros on Jun 23, 2018
@poros (Collaborator) left a comment:

Awesome!

@flavray merged commit c7942e4 into master on Jun 24, 2018
@flavray deleted the u/flavr/max-allocation-bytes branch on Jun 24, 2018
@flavray mentioned this pull request on Jun 24, 2018