The quantitative results in the paper cannot be reproduced #1
Hello zcd15, some top and bottom regions of the Stanford2D3D RGB images are blank, so we apply a pre-processing step to make them look like the images in Matterport3D (see Fig. 4 in the paper). First, we convert the equirectangular images to cubemap images. Then we inpaint the blank regions of the top and bottom faces and convert the processed cubemaps back to equirectangular images. If you did not pre-process the Stanford2D3D equirectangular images this way, you would not obtain our results. It is convenient to implement the pre-processing with the provided E2C (datasets/util.py) and C2E (networks/layers.py) functions together with the inpainting function in OpenCV. You may need to pre-process Stanford2D3D yourself, as I cannot find our original pre-processing code at the moment. Hualie
Thanks!
Hello, I wonder whether you have implemented the pre-processing code described by the author for inpainting the Stanford2D3D dataset images. Could you share it? Thank you!
Hi, I applied the code from the link for image inpainting of the Stanford2D3D dataset, as suggested by the author. However, it is not clear to me whether this process matches the author's implementation.
|
Shared file: dataloader.zip
Thank you very much! I used this code to pre-process the Stanford2D3D dataset and then ran the test. I still cannot reproduce the author's numbers, but the results are indeed better than before.
Dear all, I have recently written the code for pre-processing Stanford2D3D. The results are
Hi, I have tested the Stanford2D3D dataset with the given model parameters, but the results are very different from the quantitative results in the paper. Have you uploaded the wrong model parameters? The results with the given parameters are
Besides, I have tried to re-train the network on Stanford2D3D with the same PyTorch version as your code, but the results are also worse than those in the paper. My reproduced results are
Is there any difference between the given code and the implementation used in the paper?