[Fuzzing] add cifuzz workflow #460
Conversation
Signed-off-by: Arjun Singh <[email protected]>
Crash happens due to #452.
This pull request sets up GitHub code scanning for this repository. Once the scans have completed and the checks have passed, the analysis results for this pull request branch will appear on this overview. Once you merge this pull request, the 'Security' tab will show more code scanning analysis results (for example, for the default branch). Depending on your configuration and choice of analysis tool, future pull requests will be annotated with code scanning analysis results. For more information about GitHub code scanning, check out the documentation.
@cpuschma What's going on? Is there some bug or something? Do I need to open a new PR? Or maybe you don't have the rights to merge workflow changes?
Update: there is a job: bug.
Trigger CI
I am not familiar with Google's fuzzing library, but I would interpret the message as an indication that with enough weird input, memory usage spikes. Correct me if I'm wrong here: this is not great, but no reason to hold back the PR and not merge it first?
RE: the memory spike - I just started up some local fuzzing of the same fuzzer, and I definitely see some large memory usage (1.9 GB) happening. I haven't looked into why yet. In my experience this is often a call to make([]byte, length).
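The exact command used for the local run isn't preserved in this thread. As a rough sketch (assuming the spike comes from the BER packet reader; the package name and fuzz target below are illustrative, not taken from the PR), a native Go fuzz harness along these lines reproduces the kind of allocation being discussed and can be run with `go test -fuzz=FuzzReadPacket -run=^$`:

```go
package fuzzrepro

import (
	"bytes"
	"testing"

	ber "github.com/go-asn1-ber/asn1-ber"
)

// FuzzReadPacket feeds arbitrary bytes to the BER packet reader. With the
// library's default MaxPacketLengthBytes (~2 GB), a crafted length header can
// trigger very large allocations, which matches the memory spike described
// above. Decode errors are expected and ignored; the interesting signals are
// panics and memory usage.
func FuzzReadPacket(f *testing.F) {
	f.Add([]byte{0x30, 0x03, 0x02, 0x01, 0x01}) // small SEQUENCE-ish seed
	f.Fuzz(func(t *testing.T, data []byte) {
		_, _ = ber.ReadPacket(bytes.NewReader(data))
	})
}
```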
I think it's best to address this in a separate PR afterwards. Let's get this merged first.
Yes, malformed input is taking a lot of memory, but in my local run everything looks fine.
Update: If this …
Pretty sure I know of a case that can spike the memory because I had to fix it elsewhere.
Line 177 in 03cc78c
This means a malicious/malformed packet can cause a 2 GB allocation. You can fix this by setting MaxPacketLengthBytes to a smaller value before fuzzing. This worked well in my fuzz testing.
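For illustration, the problematic pattern boils down to something like the following simplified sketch (not the library's actual readPacket code): the length comes from the packet header, so a malicious peer controls how much is allocated before any payload is read.

```go
package berlimit

import (
	"fmt"
	"io"
)

// readBody shows why a large maxLen is dangerous: the buffer is allocated
// from the attacker-controlled length before a single payload byte arrives.
// With maxLen at ~2 GB, one malformed packet is enough to spike memory.
func readBody(r io.Reader, length, maxLen uint64) ([]byte, error) {
	if length > maxLen {
		return nil, fmt.Errorf("ber: length %d exceeds limit %d", length, maxLen)
	}
	buf := make([]byte, length) // up-front allocation of `length` bytes
	if _, err := io.ReadFull(r, buf); err != nil {
		return nil, err
	}
	return buf, nil
}
```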
Mega lol, this is clever! |
Just noticed you added a limit in the go-asn1-ber library's fuzzing functions:

```go
func FuzzDecodePacket(f *testing.F) {
	// Set a limit on the length decoded in readPacket() since the call to
	// make([]byte, length) can allocate up to MaxPacketLengthBytes which is
	// currently 2 GB. This can cause memory related crashes when fuzzing in
	// parallel or on memory constrained devices.
	MaxPacketLengthBytes = 65536
	// ...
}
```

I'd suggest doing the same in our fuzzing. It's unfortunate that this is a package-wide setting in the ASN1 BER library. We should look into some sort of encoder/decoder structs.
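A minimal sketch of what "the same in our fuzzing" could look like on the go-ldap side (the fuzz target name and body are placeholders, not the repository's actual fuzz functions): lowering `ber.MaxPacketLengthBytes` before the fuzz loop caps the worst-case allocation for the whole run.

```go
package ldap

import (
	"testing"

	ber "github.com/go-asn1-ber/asn1-ber"
)

func FuzzDecodeSomething(f *testing.F) {
	// Cap the maximum BER packet length so a crafted length header cannot
	// force a ~2 GB allocation while fuzzing in parallel or on constrained
	// CI runners. Note that this is a package-wide variable in go-asn1-ber,
	// which is the downside discussed in this thread.
	ber.MaxPacketLengthBytes = 65536

	f.Fuzz(func(t *testing.T, data []byte) {
		// Placeholder: call the decoding path under test with `data` here.
		_ = data
	})
}
```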
Ah, yeah, I forgot I upstreamed that.
At least …
Depending on the size of the tree, 65 KB might not be sufficient. I know of at least three cases where the directory tree is huge. That said, the response is split across multiple messages in most common directory servers (DS). Unfortunately, this doesn't account for malicious servers. In addition, we can't just lower the maximum packet size, as that would be an unquantifiable risk of breaking things. I think it would be best to address this in the ASN1 BER library first by creating encoders/decoders, and then implement them in this library.
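To make the encoder/decoder idea concrete, here is a purely hypothetical design sketch (none of these names exist in go-asn1-ber today): the limit becomes per-decoder state, so fuzzers and cautious clients can lower it without affecting other users of the package.

```go
// Package bersketch illustrates a possible per-instance decoder API; the
// names and the toy record framing are assumptions, not the current
// go-asn1-ber API.
package bersketch

import (
	"encoding/binary"
	"fmt"
	"io"
)

// Decoder carries its own packet-length limit instead of reading a
// package-level variable shared by every caller in the process.
type Decoder struct {
	r        io.Reader
	maxBytes uint32
}

func NewDecoder(r io.Reader, maxBytes uint32) *Decoder {
	return &Decoder{r: r, maxBytes: maxBytes}
}

// ReadRecord reads a toy length-prefixed record (4-byte big-endian length,
// then payload) and rejects oversized lengths before allocating. Real BER
// length parsing is more involved; the point is only where the limit lives.
func (d *Decoder) ReadRecord() ([]byte, error) {
	var hdr [4]byte
	if _, err := io.ReadFull(d.r, hdr[:]); err != nil {
		return nil, err
	}
	length := binary.BigEndian.Uint32(hdr[:])
	if length > d.maxBytes {
		return nil, fmt.Errorf("record length %d exceeds limit %d", length, d.maxBytes)
	}
	buf := make([]byte, length)
	if _, err := io.ReadFull(d.r, buf); err != nil {
		return nil, err
	}
	return buf, nil
}
```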
Signed-off-by: Arjun Singh <[email protected]>
Co-authored-by: Christopher Puschmann <[email protected]>
…o-ldap#466) Parallel fuzzing and large amounts of fuzzing data can create large amounts of allocated memory and cause restricted fuzzing environments to crash (see go-ldap#460).
Add CIFuzz for PR fuzzing.