🐞 [Certain files fail to compress successfully with ZPAQ] #126
Comments
Thank you for the report. When I uncovered the bug that kept levels 1 and 2 of zpaq from working, it allowed for faster zpaq compression because all 5 zpaq levels could be used. So, the first step is to test against the commits "Fix Zpaq levels 1 and 2" and "Maximize block size for zpaq".
Hello again. Since zpaq is no longer maintained by @zpaq and the @fcorbelli version is not official, it may take some time to fix this. So, in the meantime, I recommend NOT using zpaq compression. Suggestions for a workaround or help fixing the zpaq bug are appreciated!
I am afraid that there will be no more official versions of zpaq. At least, every time I asked Dr. Mahoney for help (very kind, actually), I could not get any concrete guidance.
So after some testing, it appears that the failure occurs when a block turns out to be incompressible. THAT SAID, best practice would suggest that using a compression threshold saves both time and possible problems when the compression routine returns a block larger than the one sent! There are two possible resolutions.
Using threshold testing is not perfect, as it does not review the entire chunk of data, just the beginning of it. So, forcing data testing prior to backend compression will not work 100% of the time, especially if the incompressible data is buried in the middle somewhere! I'll push changes soon. We'll see how it goes!
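To make the threshold idea concrete, here is a minimal sketch of that kind of pre-test, assuming the LZ4 library is available; the function name, 64 KiB sample size, and 90% ratio are illustrative assumptions, not lrzip-next's actual code:

```cpp
// Sketch of a compressibility pre-test like the one discussed above.
// Assumes liblz4 (<lz4.h>); names and constants are hypothetical.
#include <lz4.h>
#include <algorithm>
#include <cstddef>
#include <vector>

// Test-compress only the first 64 KiB of a chunk. If LZ4 cannot shrink the
// sample below ~90% of its size, treat the whole chunk as incompressible and
// skip the expensive backend. As noted above, this is imperfect: a chunk that
// starts compressible but turns random later still slips through.
static bool looks_compressible(const char *chunk, size_t len)
{
    const int sample = static_cast<int>(std::min<size_t>(len, 64 * 1024));
    std::vector<char> dst(LZ4_compressBound(sample));
    const int packed = LZ4_compress_default(chunk, dst.data(), sample,
                                            static_cast<int>(dst.size()));
    return packed > 0 && packed < sample * 9 / 10;
}
```

Note that `-T` in the failing command line bypasses exactly this kind of pre-test, which is how incompressible blocks reached the zpaq backend in this report.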
I wouldn't revert zpaq compression to what it was previously; I'd rather see something different with zpaq that zpaq itself doesn't already offer (I always use zpaq, and sometimes this fork too). Compression is already good with lrzip in a lot of cases, so anything that speeds up compression with a slight sacrifice in ratio is usually the best way to go, in my opinion.
Addresses #126 reported by @Merculous. Zpaq compression would fail when the `-T` option was used on a block that was incompressible. There was some buffer overrun that caused the failure. However, debugging dead code is out of scope.

Error from lrzip-next 0.11.2 and earlier
========================================
Incompressible block
double free or corruption (!prev)
Aborted (core dumped)

Revert to calling the compress() function and let it handle calling compressBlock() incrementally as it sends one block-size block of data at a time. This could also occur if any data filter is used, since filters likewise bypass threshold testing.
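For readers unfamiliar with the libzpaq calls the commit message refers to: compress() consumes a whole input stream and carves it into blocks internally, while compressBlock() compresses a single caller-built block. Below is a minimal, self-contained sketch of the reverted call path, assuming the libzpaq 7.x API; the in-memory Reader/Writer wrappers are written for this example and are not lrzip-next code:

```cpp
// Sketch: drive libzpaq through compress() and let it handle blocking,
// rather than hand-feeding blocks to compressBlock().
#include <libzpaq.h>
#include <cstdio>
#include <cstdlib>
#include <string>

// libzpaq requires the host program to define its error handler.
void libzpaq::error(const char *msg)
{
    std::fprintf(stderr, "zpaq error: %s\n", msg);
    std::exit(1);
}

struct MemReader : libzpaq::Reader {        // hands libzpaq one byte at a time
    const std::string &src; size_t pos = 0;
    explicit MemReader(const std::string &s) : src(s) {}
    int get() override { return pos < src.size() ? (unsigned char)src[pos++] : -1; }
};

struct MemWriter : libzpaq::Writer {        // collects the compressed stream
    std::string out;
    void put(int c) override { out += static_cast<char>(c); }
};

int main()
{
    std::string chunk(1 << 20, 'x');        // stand-in for one lrzip-next block
    MemReader in(chunk);
    MemWriter out;
    libzpaq::compress(&in, &out, "1");      // method "1".."5" = zpaq levels
    std::printf("%zu -> %zu bytes\n", chunk.size(), out.out.size());
    return 0;
}
```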
lrzip-next Version
lrzip-next version 0.11.2-2-b936f23
lrzip-next command line
lrzip-next -T -z -L1 -vv "$@"
What happened?
Version 0.11.2-2-b936f23 produces corrupt files when compressing with ZPAQ.
Earlier versions compressed just fine; I believe this is an issue from the latest changes to ZPAQ itself. LZMA is the only other compression method I use, and I've never had any issues with it.
What was the expected behavior?
It should have compressed. Nothing to really add here.
Steps to reproduce
Download this file: https://drive.google.com/file/d/19VQf8fceSAFU_WewOeo8nt3Svsdj77TW/view?usp=sharing
The file attached is a tar archive of a slightly modified version of https://github.com/acidanthera/OpenCorePkg/tree/master/Utilities/macrecovery, but with the DMGs included.
It is roughly 6 GB in total; no compression is used.
Use lrzip-next -T -z -L1 -vv macrecovery.tar
OR, for a much smaller test case:
Clone my repo: https://github.com/Merculous/Jailbreak-Programs
(HEAD is only 471 MB, so this is probably a more suitable test case.)
Extract all the 7z archives, put the extracted contents into a folder, tar it, then run the same command as above.
Relevant log output
Please provide system details
OS Distro:
Ubuntu 22.04.2 LTS x86_64
Kernel Version (uname -a):
Linux server 5.15.0-75-generic #82-Ubuntu SMP Tue Jun 6 23:10:23 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
System ram (free -h):
total used free shared buff/cache available
Mem: 125Gi 1.0Gi 77Gi 22Mi 46Gi 123Gi
Swap: 8.0Gi 0.0Ki 8.0Gi
Additional Context
free(): invalid size6%
[1] 2572128 IOT instruction (core dumped) lrzip-next -T -z -L2 -vv macrecovery.tar
munmap_chunk(): invalid pointer
[1] 3147239 IOT instruction (core dumped) lrzip-next -T -z -L3 -vv macrecovery.tar
lrzip-next -T -z -L4 -vv macrecovery.tar
Write of length 33,550,336 failed - Bad address
Failed to write_buf s_buf in compthread 11
Deleting broken file macrecovery.tar.lrz
Fatal error - exiting
Here's the sha256 of macrecovery.tar: bb3f87028529f7f5c2eb730d198a2b44a207aa6702c10f4e2938cc8e11a29103
I believe levels 5-9 produce the same issue as level 4. I don't really want to wait a long time just to get the same results, so that's all I'm doing for now (I'm very tired as I'm writing this).
I hope this is enough info to start searching for bugs. Thanks in advance!
Update: I've tested commit 94891e5, compressing with the same options used for both tar archives mentioned, in this case with the tar'd contents from Jailbreak-Programs. It compressed successfully. So, I'm assuming 2f84b91 may have broken compression in certain cases.