Efficient Temporal Extrapolation of Multimodal Large Language Models with Temporal Grounding Bridge

Abstract
Despite progress in multimodal large language models (MLLMs), the challenge of interpreting long-form videos in response to linguistic queries persists, largely due to inefficiency in temporal grounding and limited pre-trained context window sizes. In this work, we introduce Temporal Grounding Bridge (TGB), a novel framework that bootstraps MLLMs with advanced temporal grounding capabilities and broadens their contextual scope. Our framework significantly enhances the temporal capabilities of current MLLMs through three key innovations: an efficient multi-span temporal grounding algorithm applied to low-dimensional temporal features projected from flow; a multimodal length extrapolation training paradigm that uses low-dimensional temporal features to extend the training context window; and a bootstrapping framework that bridges our model with pluggable MLLMs without requiring annotation. We validate TGB across seven video benchmarks and demonstrate substantial performance improvements over prior MLLMs. Notably, our model, initially trained on sequences of four frames, effectively handles sequences up to 16× longer without sacrificing performance, highlighting its scalability and effectiveness in real-world applications. Our code is publicly available.
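
The multi-span grounding step described in the abstract can be pictured, at a high level, as selecting several non-overlapping query-relevant windows from per-frame relevance scores. The sketch below is only an illustration of that general idea, not TGB's actual algorithm: the function name select_spans, the greedy selection strategy, the fixed window length, and the random scores are assumptions introduced here for exposition; the paper's method operates on low-dimensional flow-based features and is designed for efficiency.

```python
# Illustrative sketch only: a generic greedy multi-span selector over
# per-frame relevance scores. Names and strategy are hypothetical and
# do not reproduce TGB's algorithm.
import numpy as np

def select_spans(scores: np.ndarray, num_spans: int, min_len: int = 4) -> list:
    """Greedily pick `num_spans` non-overlapping [start, end) windows
    with the highest mean relevance score."""
    T = len(scores)
    taken = np.zeros(T, dtype=bool)
    spans = []
    for _ in range(num_spans):
        best, best_span = -np.inf, None
        for s in range(T - min_len + 1):
            e = s + min_len
            if taken[s:e].any():
                continue  # skip windows overlapping an already chosen span
            m = scores[s:e].mean()
            if m > best:
                best, best_span = m, (s, e)
        if best_span is None:
            break  # no non-overlapping window left
        taken[best_span[0]:best_span[1]] = True
        spans.append(best_span)
    return sorted(spans)

# Example: relevance of 32 frames to a query (e.g., similarity between a
# query embedding and low-dimensional per-frame temporal features).
rng = np.random.default_rng(0)
frame_scores = rng.random(32)
print(select_spans(frame_scores, num_spans=2))
```

In this toy setting the selected spans would then index the frames handed to the downstream MLLM, which is the sense in which a grounding module can shorten the effective context a pre-trained model must process.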
Authors
Yuxuan Wang, Yueqian Wang, Pengfei Wu, Jianxin Liang, Dongyan Zhao, Yang Liu, Zilong Zheng*
Publication Year
2024
PDF
http://eng.bigai.ai/wp-content/uploads/sites/7/2024/10/EMNLP24_Efficient-Temporal-Extrapolation-of-Multimodal-Large-Language-Models.pdf
Publication Venue
EMNLP