
Robust Cross-Modal Knowledge Distillation for Unconstrained Videos

PyTorch implementation of Robust Cross-Modal Knowledge Distillation for Unconstrained Videos

Introduction

Cross-modal distillation has been widely used to transfer knowledge across modalities, enriching the representation of the target modality. Recent studies closely tie the temporal synchronization between vision and sound to the semantic consistency required for cross-modal distillation. However, such semantic consistency derived from synchronization is hard to guarantee in unconstrained videos, owing to irrelevant modality noise and differentiated semantic correlation.

To mitigate these issues, we first propose a Modality Noise Filter (MNF) module that erases the irrelevant noise in the teacher modality using cross-modal context. After this purification, we design a Contrastive Semantic Calibration (CSC) module that adaptively distills useful knowledge for the target modality by referring to the differentiated sample-wise semantic correlation in a contrastive fashion.
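
For a concrete picture of these two ideas, below is a minimal PyTorch sketch: a gating-style noise filter conditioned on cross-modal context, and a contrastive distillation loss weighted by sample-wise semantic correlation. The module internals, layer sizes, and loss weighting are illustrative assumptions, not the exact implementation in this repository; see the source code for the authors' definitions.

```python
# Illustrative sketch only; architectures and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityNoiseFilter(nn.Module):
    """Suppress teacher-modality noise using cross-modal context (illustrative)."""

    def __init__(self, dim=512):
        super().__init__()
        # Gate the teacher (e.g. audio) feature conditioned on the student
        # (e.g. visual) feature, attenuating irrelevant teacher content.
        self.gate = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(inplace=True),
            nn.Linear(dim, dim),
            nn.Sigmoid(),
        )

    def forward(self, teacher_feat, student_feat):
        g = self.gate(torch.cat([teacher_feat, student_feat], dim=-1))
        return g * teacher_feat  # purified teacher feature


def contrastive_semantic_calibration(student_feat, teacher_feat, temperature=0.07):
    """Contrastive-style distillation loss, weighted per sample by the
    semantic correlation between the two modalities (illustrative)."""
    s = F.normalize(student_feat, dim=-1)
    t = F.normalize(teacher_feat, dim=-1)
    logits = s @ t.t() / temperature              # (B, B) cross-modal similarities
    targets = torch.arange(s.size(0), device=s.device)
    # Per-sample correlation between paired features acts as an adaptive weight,
    # so weakly correlated pairs contribute less to the distillation signal.
    weights = (s * t).sum(dim=-1).clamp(min=0).detach()
    loss = F.cross_entropy(logits, targets, reduction="none")
    return (weights * loss).mean()


if __name__ == "__main__":
    mnf = ModalityNoiseFilter(dim=512)
    audio = torch.randn(8, 512)   # teacher-modality features
    video = torch.randn(8, 512)   # target (student) modality features
    purified_audio = mnf(audio, video)
    print(contrastive_semantic_calibration(video, purified_audio).item())
```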

Extensive experiments show that our method brings a performance boost over other distillation methods on both the visual action recognition and video retrieval tasks. We also extend it to the audio tagging task to demonstrate the generalization of our method.

Figure: overall pipeline of the proposed method.

Training & Validation

Use the following commands to train and evaluate on the UCF51 dataset. The checkpoints of our model are in the results directory.

  • train on UCF51
    sh scripts/ucf_train_script.sh
  • validate on UCF51
    sh scripts/ucf_test_script.sh
  • get retrieval results on UCF51 (see the mAP sketch after this list)
    sh retrieval/ucf_retrieval.sh
    python retrieval/mAP_result_ucf.py
    python retrieval/get_retrieval_result_ucf.py
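
The retrieval evaluation ranks gallery clips by feature similarity and scores the ranking with mean average precision (mAP). Below is an illustrative sketch of that computation, not the repository's mAP_result_ucf.py; feature dimensions and the cosine-similarity choice are assumptions.

```python
# Illustrative mAP computation for feature-based video retrieval.
import torch
import torch.nn.functional as F


def mean_average_precision(query_feats, gallery_feats, query_labels, gallery_labels):
    q = F.normalize(query_feats, dim=-1)
    g = F.normalize(gallery_feats, dim=-1)
    sims = q @ g.t()                                  # (Q, G) similarity matrix
    aps = []
    for i in range(q.size(0)):
        order = sims[i].argsort(descending=True)      # rank gallery by similarity
        relevant = (gallery_labels[order] == query_labels[i]).float()
        if relevant.sum() == 0:
            continue
        cum_rel = relevant.cumsum(0)
        ranks = torch.arange(1, relevant.numel() + 1, dtype=torch.float)
        precision_at_hit = cum_rel / ranks            # precision at each rank
        aps.append((precision_at_hit * relevant).sum() / relevant.sum())
    return torch.stack(aps).mean()


if __name__ == "__main__":
    feats = torch.randn(100, 512)
    labels = torch.randint(0, 51, (100,))
    print(mean_average_precision(feats, feats, labels, labels).item())
```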

Checkpoints

The dataset and checkpoints can be downloaded from here

Bibtex

@article{xia2023robust,
  title={Robust Cross-Modal Knowledge Distillation for Unconstrained Videos},
  author={Xia, Wenke and Li, Xingjian and Deng, Andong and Xiong, Haoyi and Dou, Dejing and Hu, Di},
  journal={arXiv preprint arXiv:2304.07775},
  year={2023}
} 
