Encode: lossless encoding broken #892
Comments
Also, even if encoding is lossy, this bug will reduce the PSNR of the resulting image.
Hi, how long has this issue been around? Has it existed for a while and only recently been spotted?
@kieranjol This issue probably lies in the guts of the codec, so I would guess it has been around pretty much forever. Nobody has really touched this part of the code for years.
We also still have broken lossless encoding in the RESTART and BYPASS modes; more details in issue #674.
And we also have broken tiled lossless encoding, as described in issue #737.
@boxerab The title of this issue is misleading. We do indeed have a bug, spotted in #891, occurring for certain combinations of image size. It should be addressed as soon as possible, and if you have any input on it, please comment in #891. But AFAIU it is not related to the lossless capability of the codec specifically. Saying that the library should not be used for lossless encoding until this is fixed seems completely overkill to me.
@detonin May I disagree? From an archival point of view this is indeed a major issue, so the recommendation to wait until it is fixed is, in my opinion, the accurate one.
I am alarmed by any issue that turns lossless into lossy; I opened this issue to get other people alarmed. Seems to be working :) Beyond the impact on archives, any loss of data in medical imaging has serious legal and patient-safety ramifications.
@retokromer I'm not saying this is not an important issue. My point is:
One solution to this issue would be to use fewer decomposition levels, during encoding, for any tile whose dimension is smaller than 2^(number of wavelet decompositions), and to write a COD marker in the header of such tiles indicating their number of decompositions. The decoder can then read the correct number of decompositions for the tile from its COD marker and decode accordingly.
Interesting idea. But I think this would only mask the bug; better to understand why it is happening and fix the root cause. The bug might have other symptoms we are not aware of.
As stated in the previous comments, the root cause is that a tile in the last row or column can have a dimension smaller than 2^(number of wavelet decompositions - 1), which is fairly common. Such tiles cannot have the same number of resolutions as the other tiles (those with dimensions >= 2^(number of wavelet decompositions - 1)). The JPEG 2000 spec allows a COD marker in the tile header to signal a different number of resolutions; that is the way to encode such tiles. For the current release, the workaround is to choose the tile-size and number-of-resolutions parameters carefully, so that the tiles in the last row or column are no smaller than 2^(number of wavelet decompositions - 1) in either dimension. Note: number_of_resolutions = number_of_wavelet_decompositions + 1.
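The constraint behind this workaround can be stated as a small check. This is a sketch of my own, not part of the OpenJPEG API; the function names are made up:

```c
/* Smallest tile dimension (width or height) that still supports the
 * requested number of resolutions without running out of pixels:
 * each of the (numres - 1) wavelet decompositions halves the tile,
 * so we need dim >= 2^(numres - 1). */
static unsigned min_tile_dim(unsigned numres)
{
    return 1u << (numres - 1);
}

/* Returns 1 if a (tile_w x tile_h) tile satisfies the workaround's
 * constraint for the given number of resolutions, 0 otherwise. */
static int tile_supports_numres(unsigned tile_w, unsigned tile_h,
                                unsigned numres)
{
    unsigned need = min_tile_dim(numres);
    return tile_w >= need && tile_h >= need;
}
```

For example, with the default 5 decompositions (6 resolutions), the boundary tiles would need to be at least 32 pixels in each dimension.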
We don't know the root cause. It is perfectly legal, according to the standard, to have as many as 32 decomposition levels on a tile. So, better to fix the bug than mask it with a COD marker.
If the tile dimension is less than 2^32, it cannot have 32 decomposition levels, because each wavelet decomposition halves the size in both x and y.
It is legal to have 32 decompositions, precisely for the situation we have here, where tiles on the boundary can be as little as one pixel deep or wide. There are, in a sense, too many decompositions for these tiles, and yet according to the standard we do not need to write a special COD marker for them. Some of their sub-bands will be empty, but that is perfectly acceptable.
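The "empty but legal" sub-bands can be checked directly from the sub-band coordinate formula in JPEG 2000 Part 1. The following is my own self-contained illustration, not OpenJPEG code:

```c
/* ceil(a / d) for d > 0, correct for negative a as well. */
static int ceil_div(int a, int d)
{
    return a >= 0 ? (a + d - 1) / d : -(-a / d);
}

/* Width of the high-pass (H) sub-band at decomposition level b of a
 * tile-component spanning [x0, x1) on the reference grid, following
 * the Part 1 sub-band coordinate formula:
 *   tbx = ceil((tcx - 2^(b-1)) / 2^b). */
static int hband_width(int x0, int x1, int b)
{
    int h = 1 << (b - 1);   /* 2^(b-1) offset of the H band */
    int d = 1 << b;         /* 2^b subsampling factor */
    return ceil_div(x1 - h, d) - ceil_div(x0 - h, d);
}
```

A one-pixel-wide tile already has a zero-width high-pass band at the first decomposition level, and stays empty at every deeper level, which is exactly the boundary-tile situation described above.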
If we want to automatically adjust the number of decompositions in the encoder/decoder, we could apply the following fix in tcd.c, replacing line 2210 of the master_2.1.2 release version:
I don't have the environment to upload the fix through GitHub. It worked for me locally on the master_2.1.2 release version. Can anyone help try the fix and see whether it works on the main branch?
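The patch body itself did not survive in this thread. As a rough, hypothetical illustration of the kind of per-tile clamp being proposed (this is not the author's actual patch and not OpenJPEG code; the name is invented):

```c
/* Hypothetical sketch: clamp the number of resolutions so that the
 * smaller side of the tile still satisfies dim >= 2^(numres - 1),
 * i.e. no decomposition runs out of pixels. */
static unsigned clamp_numres(unsigned tile_w, unsigned tile_h,
                             unsigned requested_numres)
{
    unsigned dim = tile_w < tile_h ? tile_w : tile_h;
    unsigned numres = 1;
    /* allow one extra resolution per factor of two in the smaller side */
    while (numres < requested_numres && (dim >> numres) > 0)
        numres++;
    return numres;
}
```

As discussed above, the maintainers considered this approach a mask rather than a fix, since the standard permits the full requested decomposition count even on tiny boundary tiles.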
@fechen123 Thanks for the proposal. However, I tend to agree with @boxerab: the number of decompositions can be as high as 32, even for images/tiles smaller than 32x32. Your fix would mask the bug but not solve it.
That is all right. I guess I have a different understanding of the standard. I would be interested in seeing the real fix.
…ain#892) There are situations where, given a tile size, at a given resolution level there are sub-bands with x0 == x1 or y0 == y1 that consequently have no valid code-blocks, while the other sub-bands may be non-empty. Given that we recycle the memory from one tile to the next, those ghost code-blocks might be non-zero and thus candidates for packet inclusion.
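The fix described in that commit amounts to ensuring that such empty sub-bands contribute nothing when tile memory is reused. A simplified, hypothetical sketch of that idea (struct and function names are invented, not OpenJPEG's):

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical, simplified band descriptor; OpenJPEG's real structures
 * live in tcd.h and are more involved. cblk_state stands in for any
 * per-code-block inclusion state carried over between tiles. */
typedef struct {
    int x0, y0, x1, y1;        /* band bounds on the reference grid */
    size_t state_len;          /* size of cblk_state in bytes */
    unsigned char *cblk_state; /* recycled per-code-block state */
} band_t;

/* A sub-band with x0 == x1 or y0 == y1 has zero area and therefore no
 * valid code-blocks, so any state left over from the previous tile
 * (the buffers are recycled) must be cleared before packet inclusion
 * decisions are made. */
static void reset_empty_band(band_t *band)
{
    if (band->x0 == band->x1 || band->y0 == band->y1) {
        if (band->cblk_state)
            memset(band->cblk_state, 0, band->state_len);
    }
}
```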
Fixed per #949
For various combinations of image size and number of resolutions, with default settings (i.e. mathematically lossless), the image resulting from a round-trip encode/decode does not equal the original. See issue #891 for an example of a broken lossless encode.
Until this is fixed, the library should not be used for lossless encoding, as it is unpredictable whether the result is in fact lossless.