DoraemonGPT: Toward Understanding Dynamic Scenes
with Large Language Models
(Exemplified as A Video Agent)

ReLER, CCAI, Zhejiang University
Corresponding Author
ICML 2024

Overview. Given a video with a question/task, DoraemonGPT first extracts a Task-related Symbolic Memory, which offers two types of memory to select from: space-dominant memory built around instances and time-dominant memory built around frames/clips. The memory can be queried by sub-task tools, which are driven by LLMs with different prompts and generate symbolic language (i.e., SQL queries) to perform different kinds of reasoning. Other tools for querying external knowledge, as well as utility tools, are also supported. For planning, DoraemonGPT employs the MCTS Planner to decompose the question into an action sequence by exploring multiple feasible solutions, which can then be summarized into an informative answer.
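To make the memory design concrete, below is a minimal sketch of how the two memory types could be stored and queried with SQLite. The table names, columns, and the example query are illustrative assumptions, not the paper's exact schema or tool interfaces.

```python
# A minimal sketch of the Task-related Symbolic Memory, assuming a SQLite
# backend. Table names and columns are illustrative only; the paper's
# actual schema and attribute extractors may differ.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Space-dominant memory: one row per detected instance (object track).
cur.execute("""
CREATE TABLE instances (
    instance_id INTEGER PRIMARY KEY,
    category    TEXT,   -- e.g., 'person', 'beaker'
    appearance  TEXT,   -- attribute description
    trajectory  TEXT,   -- serialized bounding boxes over time
    action      TEXT    -- recognized action label
)""")

# Time-dominant memory: one row per frame/clip.
cur.execute("""
CREATE TABLE clips (
    clip_id    INTEGER PRIMARY KEY,
    start_sec  REAL,
    end_sec    REAL,
    caption    TEXT,    -- dense caption of the clip
    transcript TEXT     -- ASR result, if any
)""")

# A sub-task tool prompts the LLM to emit a SQL query like this one,
# instead of reasoning over raw video features directly.
example_query = """
SELECT caption FROM clips
WHERE start_sec >= 10.0 AND end_sec <= 30.0
ORDER BY start_sec
"""
for (caption,) in cur.execute(example_query):
    print(caption)
```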

Abstract

Recent LLM-driven visual agents mainly focus on solving image-based tasks, which limits their ability to understand dynamic scenes and keeps them far from real-life applications such as guiding students in laboratory experiments and identifying their mistakes. Hence, this paper explores DoraemonGPT, a comprehensive and conceptually elegant system driven by LLMs to understand dynamic scenes. Considering that the video modality better reflects the ever-changing nature of real-world scenarios, we exemplify DoraemonGPT as a video agent. Given a video with a question/task, DoraemonGPT begins by converting the input video into a symbolic memory that stores task-related attributes. This structured representation allows for spatial-temporal querying and reasoning by well-designed sub-task tools, resulting in concise intermediate results. Recognizing that LLMs have limited internal knowledge when it comes to specialized domains (e.g., analyzing the scientific principles underlying experiments), we incorporate plug-and-play tools to access external knowledge and address tasks across different domains. Moreover, a novel LLM-driven planner based on Monte Carlo Tree Search is introduced to explore the large planning space for scheduling various tools. The planner iteratively finds feasible solutions by backpropagating the result's reward, and multiple solutions can be summarized into an improved final answer. We extensively evaluate DoraemonGPT's effectiveness on three benchmarks and several in-the-wild scenarios.
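The following compact sketch illustrates the kind of MCTS-style search over tool-call sequences described above: selection by UCB, expansion with LLM-proposed actions, a rollout that scores the resulting answer, and reward backpropagation. The `propose_actions` and `rollout` functions are hypothetical stand-ins, not the paper's actual planner interfaces.

```python
# A sketch of an MCTS-style planner over tool-call sequences, assuming a
# `propose_actions` LLM call and a `rollout` scorer; both are hypothetical
# stand-ins for the paper's actual components.
import math
import random


class Node:
    def __init__(self, action=None, parent=None):
        self.action = action          # tool call chosen at this step
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0              # accumulated reward

    def ucb(self, c=1.4):
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)


def propose_actions(history):
    """Hypothetical LLM call: suggest next tool invocations given the
    partial action sequence. Dummy candidates are returned here."""
    return ["query_memory", "external_knowledge", "answer"]


def rollout(history):
    """Hypothetical executor: run the action sequence and score the
    resulting answer in [0, 1]."""
    return random.random()


def mcts_plan(iterations=50):
    root = Node()
    for _ in range(iterations):
        # Selection: descend by UCB until a leaf is reached.
        node, history = root, []
        while node.children:
            node = max(node.children, key=Node.ucb)
            history.append(node.action)
        # Expansion: let the LLM propose candidate next actions.
        for action in propose_actions(history):
            node.children.append(Node(action, parent=node))
        # Simulation: execute one child and obtain a reward.
        child = random.choice(node.children)
        reward = rollout(history + [child.action])
        # Backpropagation: push the reward up to the root.
        while child is not None:
            child.visits += 1
            child.value += reward
            child = child.parent
    # Each high-reward root-to-leaf path is one candidate solution;
    # multiple solutions can then be summarized into a final answer.
    return max(root.children, key=lambda n: n.visits).action


if __name__ == "__main__":
    print(mcts_plan())
```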

In-the-wild Example — Inspection of an Experiment

An in-the-wild example of DoraemonGPT. Given a video input and a question, our system automatically explores the solution space, powered by the MCTS planner and various tools. This figure shows both the tools used and the intermediate results produced during exploration. Taking advantage of the tree-like exploration paths, DoraemonGPT can not only summarize the collected answers into a better one but also generate multiple potential solutions.
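As a hedged sketch of the final step, the tree search above yields several candidate answers, and an LLM condenses them into one informative response. The `llm` argument below is a hypothetical chat-completion callable, not an interface from the paper.

```python
# Hypothetical answer-summarization step over multiple solution paths.
def summarize_answers(question, candidate_answers, llm):
    prompt = (
        f"Question: {question}\n"
        "Candidate answers from different solution paths:\n"
        + "\n".join(f"- {a}" for a in candidate_answers)
        + "\nSummarize these into a single informative answer."
    )
    return llm(prompt)
```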

Qualitative Result — Referring Object Segmentation

In-the-wild Example — Video Editing

In-the-wild Example — Video Understanding

Qualitative Result — Video Question Answering

Quantitative Result — Video Question Answering

Quantitative Result — Referring Object Segmentation

BibTeX

@inproceedings{yang2024doraemongpt,
    title={{DoraemonGPT}: Toward understanding dynamic scenes with large language models (exemplified as a video agent)},
    author={Yang, Zongxin and Chen, Guikun and Li, Xiaodi and Wang, Wenguan and Yang, Yi},
    booktitle={ICML},
    year={2024},
}