[tests] Fix build by fixing tqdm version #62

Merged 1 commit into master from fix-travis-build on May 23, 2019

Conversation

apsdehal (Contributor)

No description provided.
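Though the PR carries no description, its title points at the fix: pinning the tqdm dependency so the CI build stops breaking on a newer release. A minimal sketch of such a pin, assuming it lands in the test requirements (the exact version constraint is not visible in this PR; the one below is purely illustrative):

```sh
# Sketch only: pin tqdm to a known-good release so CI builds stay reproducible.
# The actual constraint chosen by this PR is not shown here.
pip install "tqdm==4.31.1"   # illustrative version, not taken from the PR
```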

@facebook-github-bot added the CLA Signed label on May 23, 2019
@apsdehal merged commit 537c24a into master on May 23, 2019
@apsdehal deleted the fix-travis-build branch on May 24, 2019 at 04:25
apsdehal added a commit that referenced this pull request on May 8, 2020
This PR implements the MMBT model described in https://arxiv.org/pdf/1909.02950.pdf.

Salient features:
- Two training modes: pretraining and classification
- Works with both direct images and pre-extracted features
- Can use any underlying encoder
- Starter configurations provided for Hateful Memes and Masked COCO

We provide two modes of training MMBT, pretraining and classification, configurable through the `training_head_type` configuration parameter. We also provide starter configurations that can be used to compose training configs for your use case; sample configurations are provided for the Hateful Memes and Masked COCO datasets. The model can work with either direct images or features from Faster R-CNN, as used in other Pythia models.
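As an illustration, here is a minimal sketch of switching between the two head types via a command-line override, assuming `training_head_type` is accepted the same way as the other dotted options in the commands below (the exact key path in your composed config may differ):

```sh
# Sketch only: assumes training_head_type can be overridden from the CLI;
# adjust the key path to match your composed config.
python -u tools/run.py config=projects/mmbt/configs/hateful_memes/defaults.yaml \
    model=mmbt dataset=hateful_memes \
    training_head_type=classification   # or: training_head_type=pretraining
```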

A base implementation is provided to make it easy to build further baselines.

Follow the steps in #62 to set up the Hateful Memes dataset, then test the following commands:

- For training on Hateful Memes with direct images:
```sh
python -u tools/run.py training.batch_size=16 \
    config=projects/mmbt/configs/hateful_memes/defaults.yaml \
    dataset=hateful_memes model=mmbt \
    training.log_interval=10 training.find_unused_parameters=True \
    training.num_workers=2
```

- For training on Hateful Memes with pre-extracted features:
```sh
python -u tools/run.py training.batch_size=16 \
    config=projects/mmbt/configs/hateful_memes/with_features.yaml \
    dataset=hateful_memes model=mmbt \
    training.log_interval=10 training.find_unused_parameters=True \
    training.num_workers=2
```

Both of the above commands invoke the classification MMBT model and require #62 to be landed before they are run.

- For running pretraining on masked_coco with pre-extracted features, run:
```sh
python -u tools/run.py training.batch_size=16 \
    config=projects/mmbt/configs/masked_coco/defaults.yaml \
    dataset=masked_coco model=mmbt \
    training.log_interval=10 training.find_unused_parameters=True \
    training.num_workers=2
```