Conference paper, Year: 2022

Masking Modalities for Cross-modal Video Retrieval

Abstract

Pre-training on large-scale unlabelled datasets has shown impressive performance improvements in the fields of computer vision and natural language processing. Given the advent of large-scale instructional video datasets, a common strategy for pre-training video encoders is to use the accompanying speech as weak supervision. However, because speech is used to supervise the pre-training, it is never seen by the video encoder, which therefore does not learn to process that modality. We address this drawback of current pre-training methods, which fail to exploit the rich cues in spoken language. Our proposal is to pre-train a video encoder using all the available video modalities as supervision, namely appearance, sound, and transcribed speech. We mask an entire modality in the input and predict it using the other two modalities. This encourages each modality to collaborate with the others, and our video encoder learns to process appearance and audio as well as speech. We show the superior performance of our 'modality masking' pre-training approach for video retrieval on the How2R, YouCook2, and Condensed Movies datasets.
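To make the pre-training objective concrete, the sketch below illustrates the core idea of masking an entire modality and predicting it from the other two. The module names, feature dimensions, fusion architecture, and the reconstruction loss are illustrative assumptions for this sketch, not the exact architecture or training objective used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityMaskingModel(nn.Module):
    """Toy cross-modal encoder: mask one modality, predict it from the other two."""

    def __init__(self, dim=256):
        super().__init__()
        # One lightweight encoder per modality (appearance, audio, speech).
        self.encoders = nn.ModuleDict(
            {m: nn.Linear(dim, dim) for m in ("appearance", "audio", "speech")}
        )
        # Learned token that stands in for the masked modality in the input.
        self.mask_token = nn.Parameter(torch.zeros(1, dim))
        # Cross-modal fusion so the remaining modalities can attend to each other.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, feats, masked):
        # feats: dict mapping modality name -> (batch, dim) pooled features.
        tokens = []
        for name, encoder in self.encoders.items():
            if name == masked:
                # The masked modality is never shown to the model.
                tokens.append(self.mask_token.expand(feats[name].size(0), -1))
            else:
                tokens.append(encoder(feats[name]))
        fused = self.fusion(torch.stack(tokens, dim=1))  # (batch, 3, dim)
        idx = list(self.encoders).index(masked)
        return fused[:, idx]  # prediction for the masked modality

# Illustrative pre-training step: reconstruct masked speech from appearance and audio.
model = ModalityMaskingModel()
feats = {m: torch.randn(8, 256) for m in ("appearance", "audio", "speech")}
prediction = model(feats, masked="speech")
loss = F.mse_loss(prediction, feats["speech"])  # stand-in for the paper's actual loss
loss.backward()
```

In practice the masked modality would be rotated across appearance, audio, and speech during pre-training, so the encoder learns to process all three.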

Dates and versions

hal-03420133, version 1 (09-11-2021)


Cite

Valentin Gabeur, Arsha Nagrani, Chen Sun, Karteek Alahari, Cordelia Schmid. Masking Modalities for Cross-modal Video Retrieval. WACV 2022 - Winter Conference on Applications of Computer Vision, Jan 2022, Waikoloa, United States. pp.1-10. ⟨hal-03420133⟩

