VideoBERT: A Joint Model for Video and Language Representation Learning


parthp


VideoBERT: A Joint Model for Video and Language Representation Learning - ICCV 2019 - Self-supervised learning has become increasingly important for leveraging the abundance of unlabeled data available on platforms like YouTube. Whereas most existing approaches learn low-level representations, this paper proposes a joint visual-linguistic model that learns high-level features without any explicit supervision. In particular, the authors build upon the BERT model to learn bidirectional joint distributions over sequences of visual and linguistic tokens, derived from vector quantization of video data and off-the-shelf speech recognition outputs, respectively. VideoBERT is evaluated on numerous tasks, including action classification and video captioning. It can be applied directly to open-vocabulary classification, and the results confirm that large amounts of training data and cross-modal information are critical to performance. Furthermore, VideoBERT outperforms the state of the art on video captioning, and quantitative results verify that the model learns high-level semantic features.
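For anyone curious how the two core ideas fit together, here is a minimal Python sketch (not the authors' code): vector-quantizing clip-level video features into discrete visual tokens, then concatenating them with ASR text tokens into a single BERT-style sequence for masked-token prediction. The function names, special-token ids, and the plain k-means quantizer are illustrative assumptions; the paper applies hierarchical k-means to features from a pretrained video CNN.

```python
# Sketch of VideoBERT-style input construction. Assumptions: clip features
# are precomputed elsewhere; vocabulary sizes and special-token ids are
# illustrative, not the paper's exact values.

import numpy as np
from sklearn.cluster import KMeans

TEXT_VOCAB_SIZE = 30522          # BERT WordPiece vocabulary size
CLS, SEP, BREAK, MASK = 101, 102, 30000, 103  # illustrative special tokens

# --- 1. Vector-quantize clip features into visual tokens ----------------
def build_visual_vocab(clip_features: np.ndarray, k: int) -> KMeans:
    """Fit k-means over clip features; each centroid id becomes a visual
    token. (The paper uses hierarchical k-means; plain k-means for brevity.)"""
    return KMeans(n_clusters=k, n_init=4, random_state=0).fit(clip_features)

def tokenize_video(clip_features: np.ndarray, vq: KMeans) -> list:
    """Map each clip's feature vector to its nearest centroid id, offset
    past the text vocabulary so the two token spaces don't collide."""
    return [TEXT_VOCAB_SIZE + int(c) for c in vq.predict(clip_features)]

# --- 2. Build a joint visual-linguistic masked-modeling sequence --------
def make_joint_sequence(text_ids: list, visual_ids: list) -> list:
    """[CLS] text tokens [>] visual tokens [SEP], mirroring the paper's
    concatenation of a linguistic sentence and a visual sentence."""
    return [CLS] + text_ids + [BREAK] + visual_ids + [SEP]

def mask_tokens(seq: list, rate: float = 0.15, seed: int = 0):
    """BERT-style cloze masking applied uniformly over both modalities."""
    rng = np.random.default_rng(seed)
    masked, labels = list(seq), [-100] * len(seq)  # -100 = ignored in loss
    for i, tok in enumerate(seq):
        if tok not in (CLS, SEP, BREAK) and rng.random() < rate:
            labels[i] = tok      # predict the original token here
            masked[i] = MASK
    return masked, labels

# Toy usage: 8 clips with 1024-dim features and a short ASR caption.
features = np.random.randn(8, 1024).astype(np.float32)
vq = build_visual_vocab(features, k=4)      # tiny k just for the demo
seq = make_joint_sequence([7592, 2088], tokenize_video(features, vq))
masked_seq, labels = mask_tokens(seq)
```

In the actual model, such joint sequences feed a standard BERT transformer trained with the cloze objective above plus a linguistic-visual alignment task that predicts from the [CLS] output whether the text and video segments correspond.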
