
data is preprocessed #33

Closed
hitymz opened this issue Jul 10, 2021 · 18 comments

@hitymz commented Jul 10, 2021

Is the data downloaded by download_dataset.sh preprocessed?

@ThibaultGROUEIX (Owner)

Hi @hitymz,

I think it is only centered (it's a bit old now). What kind of preprocessing are you thinking about?

Cheers,
Thibault

@Sentient07

Hello, I'm also having some issues with fine-tuning/re-training on the FAUST training set. In essence, the accuracy is poorer when I train/fine-tune on the FAUST dataset than when I use the pre-trained weights. Could this be related to preprocessing? To preprocess the meshes, I apply the following functions from my_utils, in this order: 1) scaling, 2) cleaning, 3) centering (a rough sketch is below). Am I missing something?
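For concreteness, here is a minimal sketch of that order. The function names and behaviors are my assumptions for illustration; the actual my_utils API may differ.

```python
# Illustrative preprocessing pipeline: scale -> clean -> center.
# NOTE: function names and behaviors are assumptions for illustration;
# the real my_utils functions in the 3D-CODED repo may differ.
import numpy as np

def scale(verts, target_radius=1.0):
    # Rescale so the mesh fits inside a sphere of target_radius.
    radius = np.max(np.linalg.norm(verts - verts.mean(axis=0), axis=1))
    return verts * (target_radius / radius)

def clean(verts, faces):
    # Drop vertices not referenced by any face, remapping face indices.
    # (This changes vertex indexing, which matters for correspondence.)
    used = np.unique(faces)
    remap = -np.ones(len(verts), dtype=np.int64)
    remap[used] = np.arange(len(used))
    return verts[used], remap[faces]

def center(verts):
    # Translate so the vertex centroid sits at the origin.
    return verts - verts.mean(axis=0)

# verts, faces = load_mesh(...)   # hypothetical loader
# verts = scale(verts)
# verts, faces = clean(verts, faces)
# verts = center(verts)
```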

Thank you

@ThibaultGROUEIX (Owner)

Hi @Sentient07,

Can you clarify what you are trying to do (train set, fine-tuning set, test set)? What is the accuracy you are referring to?

Best regards,
Thibault

@Sentient07

Hello @ThibaultGROUEIX ,

Apologies for being unclear; I'm referring to the dense shape correspondence problem. I'm trying to compare 3D-CODED with other methods on the FAUST-Remesh dataset: I train on the first 80 meshes and evaluate on the last 20. For this experiment, I consider ZoomOut, BCICP, and two versions of 3D-CODED. The first, denoted TDC PTw, establishes correspondences using the weights you and others have released. In the second, denoted TDC FTw, I fine-tune on the training meshes of the FAUST-Remesh dataset (using the unsupervised loss). What I observe is that accuracy deteriorates sharply in the second case. I was wondering what the reason could be. Am I preprocessing the dataset incorrectly?

[image attached]
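For reference, a minimal sketch of the kind of per-vertex accuracy metric used in such comparisons. The Euclidean distance here is a simplification of my own; many FAUST evaluations report geodesic error instead, so this is an assumption about the exact protocol:

```python
# Mean correspondence error: distance between the predicted match and the
# ground-truth match on the target mesh. Euclidean distance is used for
# simplicity (an assumption; geodesic error is also common on FAUST).
import numpy as np

def correspondence_error(target_verts, pred_matches, gt_matches):
    # target_verts: (V, 3) array of target-mesh vertices.
    # pred_matches, gt_matches: (N,) index arrays into target_verts giving
    # the predicted and ground-truth correspondent of each source vertex.
    errors = np.linalg.norm(
        target_verts[pred_matches] - target_verts[gt_matches], axis=1)
    return errors.mean()
```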

@ThibaultGROUEIX (Owner)

There could be a number of reasons, but I think the most probable is that your fine-tuning set is too small: 3D-CODED was trained on 230,000 meshes. You could evaluate on your fine-tuning set to check whether you observe overfitting.

@Sentient07

Thank you @ThibaultGROUEIX for your very prompt response. The reason I was expecting a much better result is Table 1 and Figure 6 in this paper. They claim to observe good performance with 3D-CODED when the training shapes match the poses of the test shapes, irrespective of their number. Is this the case with 3D-CODED, or with AtlasNet? In my case, I observe the following reconstructions between shapes that belong to the training set on which I'm fine-tuning. The ground-truth meshes are attached below. Correspondences are color-coded (target on the right, source on the left).

[image attached]

[image attached]

@Sentient07

Hello,

Just to confirm: master seems much better than the v2.0.0 tag. I had been using the latter; after switching to master, the results are much better and consistent with Table 1 of the paper. I'm a little curious what the reason could be. Thank you
[image attached]

@hitymz (Author) commented Jul 19, 2021

@ThibaultGROUEIX Thanks for your answer. I'd like to know: does the dataset downloaded by download_dataset.sh contain those 230,000 meshes?

@ThibaultGROUEIX (Owner)

@hitymz: Yes.
@Sentient07: I did a major refactor of the code; you should definitely use the latest version. I don't know why the v2.0.0 tag doesn't work, that should not be the case.

ThibaultGROUEIX self-assigned this on Jul 19, 2021
ThibaultGROUEIX added the question (Further information is requested) label on Jul 19, 2021
@Sentient07 commented Jul 19, 2021

Hello @ThibaultGROUEIX, thanks a lot for the clarification. What made me use the v2.0.0 branch was that the pretrained weights seem to fit that branch alone (i.e., they contain the STN of the PointNet encoder, which the master branch is missing). Could you also provide the pretrained weights for the refactored branch, if you still have them? Thank you.

@ThibaultGROUEIX (Owner) commented Jul 19, 2021

I am confused: do you mean that the pretrained weights provided by the latest commit on master are not compatible with the latest code on master?

@Sentient07

Hi @ThibaultGROUEIX, just to be sure we're referring to the same model: I tried to reload the weights from https://cloud.enpc.fr/s/n4L7jqD486V8IJn, provided in this comment. Is that not the right one? From the name of the directory (and also the size), I assumed the one provided in the master branch is for the Learning Elementary Structures paper. Please let me know if I'm confused here. 😅

@ThibaultGROUEIX (Owner)

Right, in that comment the user wanted to use v2.0.0 because it has the unsupervised training code, so I provided the old model.

To use the latest code (the one I maintain), you need the latest model. You can get it by running: https://github.com/ThibaultGROUEIX/3D-CODED/blob/master/inference/download_trained_models.sh

Just to clarify: Learning Elementary Structures is a generalization of 3D-CODED. The script will download several models from Learning Elementary Structures; 3D-CODED is one of these models, under the folder /3D-CODED.

@Sentient07 commented Jul 19, 2021

Hi @ThibaultGROUEIX, thanks again for the elaborate response. I'm running into trouble while downloading: there is no error as such, but the downloaded zip file is essentially empty (3 KB of HTML rather than the archive).

```
$ bash -x inference/download_trained_models.sh
+ gdrive_download 1ZAjOuTaeDrKJbFffzLnLn_K-C86fYCXs learning_elementary_structure_trained_models.zip
++ wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1ZAjOuTaeDrKJbFffzLnLn_K-C86fYCXs' -O-
++ sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p'
+ CONFIRM=
+ wget --load-cookies /tmp/cookies.txt 'https://docs.google.com/uc?export=download&confirm=&id=1ZAjOuTaeDrKJbFffzLnLn_K-C86fYCXs' -O learning_elementary_structure_trained_models.zip
--2021-07-19 17:25:40--  https://docs.google.com/uc?export=download&confirm=&id=1ZAjOuTaeDrKJbFffzLnLn_K-C86fYCXs
Resolving docs.google.com (docs.google.com)... 142.250.74.238, 2a00:1450:4007:80b::200e
Connecting to docs.google.com (docs.google.com)|142.250.74.238|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘learning_elementary_structure_trained_models.zip’

learning_elementary_structure_trained_models.zip         [ <=>                                                                                                                ]   3.05K  --.-KB/s    in 0s      

2021-07-19 17:25:40 (44.3 MB/s) - ‘learning_elementary_structure_trained_models.zip’ saved [3123]

+ rm -rf /tmp/cookies.txt
+ unzip learning_elementary_structure_trained_models.zip
Archive:  learning_elementary_structure_trained_models.zip
  End-of-central-directory signature not found.  Either this file is not
  a zipfile, or it constitutes one disk of a multi-part archive.  In the
  latter case the central directory and zipfile comment will be found on
  the last disk(s) of this archive.
note:  learning_elementary_structure_trained_models.zip may be a plain executable, not an archive
unzip:  cannot find zipfile directory in one of learning_elementary_structure_trained_models.zip or
        learning_elementary_structure_trained_models.zip.zip, and cannot find learning_elementary_structure_trained_models.zip.ZIP, period.
```

The download from the browser works fine, but since I work from home, it'd be great to have this on the server too. Is there any way to fix this script?

@ThibaultGROUEIX (Owner)

Right, this is the same as ThibaultGROUEIX/AtlasNet#61.
Can you try manually going to https://docs.google.com/uc?export=download&confirm=&id=1ZAjOuTaeDrKJbFffzLnLn_K-C86fYCXs and clicking download?
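If that fails, a possible workaround sketch (my suggestion here, not part of the repo's script) is the gdown package, which negotiates the Google Drive confirmation token that the wget pipeline above fails on:

```python
# Workaround sketch using gdown (pip install gdown); it handles the
# Google Drive confirmation token that plain wget misses.
# This is a suggestion, not part of the repo's download script.
import gdown

url = "https://drive.google.com/uc?id=1ZAjOuTaeDrKJbFffzLnLn_K-C86fYCXs"
gdown.download(url, "learning_elementary_structure_trained_models.zip",
               quiet=False)
```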

@Sentient07 commented Jul 19, 2021

Hi @ThibaultGROUEIX, yes, the web download worked. Thanks a lot for releasing all the data (including experiments) and not just your model. However, for anyone in a similar situation to mine, it'd be easier to download just the trained models for the master branch from here. Since it doesn't cost much, I'm hosting them on my own Google Drive: https://drive.google.com/drive/folders/1Fub5lpSrrJmV-kNF6ifQgkIzxqzd6gwr?usp=sharing

Just a quick follow-up: you seem not to have used the --patch_deformation option. Is there a specific reason why it is enabled by default in the current code when it doesn't seem to have been used for the pre-trained weights?

@ThibaultGROUEIX (Owner)

Good point. No, there is no good reason for keeping --patch_deformation as the default; I guess when I refactored the code I had Learning Elementary Structures in mind, but I agree this flag could be disabled by default since this is the 3D-CODED codebase. (A sketch of what that would look like is below.)
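A minimal sketch of making the flag opt-in, using generic argparse; this is an illustration, not the repo's actual argument parser:

```python
# Sketch: make --patch_deformation opt-in instead of on-by-default.
# Generic argparse illustration; the repo's real option parser may differ.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--patch_deformation", action="store_true",
                    help="Use the patch-deformation decoder from Learning "
                         "Elementary Structures (off by default, matching "
                         "the released 3D-CODED weights).")
opt = parser.parse_args()
print(f"patch_deformation = {opt.patch_deformation}")
```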

@Sentient07

Hello @ThibaultGROUEIX ,

I am now training and testing on a (random, smaller) subset of SURREAL. I found that the accuracy was quite low and training didn't converge. The dataset was generated with the script generate_data_humans.py. On examining further, I found that the template is not in one-to-one correspondence with the generated SURREAL meshes. What could I be doing wrong? Can you please help me find my mistake? Thank you! (Attaching my code snippet below as an image; a text sketch of the check follows it.)

[image attached]
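Since the snippet above is only a screenshot, here is a minimal text sketch of the kind of check involved. The file paths and the use of trimesh are my assumptions for illustration:

```python
# Sketch: check that a generated SURREAL mesh is in one-to-one vertex
# correspondence with the template. Paths and the trimesh loader are
# assumptions for illustration.
import numpy as np
import trimesh

# process=False preserves the on-disk vertex order, which matters here.
template = trimesh.load("data/template/template.ply", process=False)
generated = trimesh.load("data/dataset-surreal/0.ply", process=False)

assert len(template.vertices) == len(generated.vertices), \
    "vertex counts differ: no one-to-one correspondence"

# With matching counts, per-vertex displacements should look like a smooth
# body deformation, not random scatter.
disp = np.linalg.norm(np.asarray(template.vertices)
                      - np.asarray(generated.vertices), axis=1)
print("mean / max per-vertex displacement:", disp.mean(), disp.max())
```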
