VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding

Abstract
We explore how combining several foundation models (large language models and vision-language models) with a novel unified memory mechanism can tackle the challenging video understanding problem, especially capturing long-term temporal relations in lengthy videos. In particular, the proposed multimodal agent VideoAgent: 1) constructs a structured memory that stores both generic temporal event descriptions and object-centric tracking states of the video; 2) given an input task query, employs tools including video segment localization and object memory querying, along with other visual foundation models, to interactively solve the task, utilizing the zero-shot tool-use ability of LLMs. VideoAgent demonstrates impressive performance on several long-horizon video understanding benchmarks, with an average increase of 6.6% on NExT-QA and 26.0% on EgoSchema over baselines, closing the gap between open-source models and proprietary counterparts including Gemini 1.5 Pro. The code and demo can be found at https://videoagent.github.io.
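As a rough illustration of the two-stage design described in the abstract, the sketch below mocks up a structured video memory (per-segment event captions plus object-centric tracking states) and a simple tool-dispatch loop. All names here (VideoMemory, segment_localization, object_memory_querying, answer_query) are hypothetical and not the paper's actual API, and the LLM's zero-shot tool choice is replaced by a trivial stub so the example runs standalone.

```python
# Minimal, self-contained sketch (hypothetical names; not the paper's actual API).
from dataclasses import dataclass, field

@dataclass
class VideoMemory:
    """Structured memory: per-segment captions and object-centric tracking states."""
    captions: dict = field(default_factory=dict)  # segment_id -> event description
    objects: dict = field(default_factory=dict)   # object_id -> list of (segment_id, state)

    def add_caption(self, segment_id, text):
        self.captions[segment_id] = text

    def add_object_state(self, object_id, segment_id, state):
        self.objects.setdefault(object_id, []).append((segment_id, state))

def segment_localization(memory, keyword):
    """Tool: return segments whose event description mentions the keyword."""
    return [sid for sid, text in memory.captions.items() if keyword.lower() in text.lower()]

def object_memory_querying(memory, object_id):
    """Tool: return the tracked states of one object across segments."""
    return memory.objects.get(object_id, [])

TOOLS = {
    "segment_localization": segment_localization,
    "object_memory_querying": object_memory_querying,
}

def answer_query(memory, query):
    """Stand-in for the LLM tool-use loop: pick a tool, call it, compose an answer.
    A real agent would let the LLM choose and chain tools zero-shot from the query."""
    if query.startswith("where"):
        result = TOOLS["segment_localization"](memory, keyword=query.split()[-1])
    else:
        result = TOOLS["object_memory_querying"](memory, object_id=query.split()[-1])
    return f"query={query!r} -> tool result: {result}"

if __name__ == "__main__":
    mem = VideoMemory()
    mem.add_caption(0, "A person opens the fridge")
    mem.add_caption(1, "The person pours milk into a cup")
    mem.add_object_state("cup", 1, {"visible": True, "location": "counter"})
    print(answer_query(mem, "where does the person pour milk"))
    print(answer_query(mem, "track object cup"))
```

The key design point this sketch tries to convey is the separation of concerns: the memory is built once from the video, while task-time reasoning is reduced to cheap queries over that memory rather than re-processing raw frames.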
Authors
Yue Fan*, Xiaojian Ma*†, Rujie Wu, Yuntao Du, Jiaqi Li, Zhi Gao, Qing Li
Publication Year
2024
PDF
http://eng.bigai.ai/wp-content/uploads/sites/7/2024/09/ECCV_VideoAgent.pdf
Publication Venue
ECCV