Em-Garde: A Propose-Match Framework for Proactive Streaming Video Understanding

Yikai Zheng1,2, Xin Ding3, Yifan Yang5, Shiqi Jiang5, Hao Wu4, Qianxi Zhang5, Weijun Wang1, Ting Cao1, Yunxin Liu1

1Institute for AI Industry Research (AIR), Tsinghua University    2Institute for Interdisciplinary Information Sciences (IIIS), Tsinghua University
3University of Science and Technology of China    4Nanjing University    5Microsoft Research Asia

Abstract

Recent advances in streaming video understanding have enabled a new interaction paradigm in which models respond to user queries proactively. Current proactive VideoLLMs make a triggering decision at every frame, which creates an efficiency-accuracy dilemma. We propose Em-Garde, a novel framework that decouples semantic understanding from streaming perception. At query time, the Instruction-Guided Proposal Parser (IGPP) transforms user queries into structured, perceptually grounded visual proposals; during streaming, a Lightweight Proposal Matching Module (LPMM) performs efficient embedding-based matching to trigger responses. Experiments on StreamingBench and OVO-Bench demonstrate consistent improvements over prior models in proactive response accuracy and efficiency, validating an effective solution for proactive video understanding under strict computational constraints.

Method

Teaser figure showing Em-Garde's approach

Unlike prior proactive streaming VideoLLMs, which make triggering decisions from scratch at every timestep, Em-Garde decouples semantic understanding from streaming perception and moves heavy instruction-related reasoning out of the streaming loop, enabling efficient and generalizable triggering decisions for proactive responses.

Em-Garde framework overview

Em-Garde consists of two stages: (1) the Instruction-Guided Proposal Parser (IGPP) transforms user queries into structured, perceptually grounded visual proposals; and (2) the Lightweight Proposal Matching Module (LPMM) performs efficient embedding-based matching to trigger responses during streaming.
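The two-stage propose-match loop can be sketched as follows. This is an illustrative sketch only: the proposal format, the `embed` encoder (a stand-in for a real visual/text encoder), and the similarity threshold are our assumptions, not the paper's exact implementation.

```python
import numpy as np

def embed(item, dim=512):
    # Stand-in for a real encoder (e.g. a CLIP-style model): returns a
    # pseudo-random unit vector derived from the input, for illustration.
    rng = np.random.default_rng(abs(hash(item)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

class LPMM:
    """Illustrative sketch of the Lightweight Proposal Matching Module.

    Holds embeddings of the visual proposals produced once by the IGPP
    and, for each incoming frame, computes cosine similarities to decide
    whether to trigger a proactive response.
    """
    def __init__(self, proposals, threshold=0.3):
        self.proposals = proposals
        self.P = np.stack([embed(p) for p in proposals])  # (num_proposals, dim)
        self.threshold = threshold

    def step(self, frame):
        f = embed(frame)                 # (dim,) frame embedding
        sims = self.P @ f                # cosine similarities (unit vectors)
        best = int(np.argmax(sims))
        if sims[best] >= self.threshold:
            return self.proposals[best]  # trigger: best-matching proposal
        return None                      # stay silent this frame

# Query time (once): IGPP parses the user query into visual proposals.
proposals = ["a person picks up the red cup", "the door opens"]
matcher = LPMM(proposals, threshold=0.3)

# Streaming (per frame): only a cheap embedding match, no LLM call.
for frame in ["frame_0", "frame_1", "frame_2"]:
    hit = matcher.step(frame)
```

Note the design point the sketch highlights: the expensive query understanding happens exactly once, while the per-frame work is a single small matrix-vector product.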

Experiments

Proactive Response Results

Em-Garde demonstrates superior proactive response abilities across different benchmarks, outperforming prior streaming VideoLLMs by a significant margin.

Proactive Response Results

Online VideoQA Results

While optimized for proactive responses, Em-Garde maintains strong online video question-answering abilities, performing on par with or better than state-of-the-art models.

Online VideoQA Results

Efficiency

By decoupling heavy semantic reasoning from the streaming loop, Em-Garde achieves efficient real-time streaming inference. The lightweight LPMM maintains constant per-frame latency even on long videos, while delivering the highest throughput-accuracy trade-off among all compared methods.
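The constant-latency claim can be made concrete with a toy cost model (illustrative only; the dimensions and the quadratic attention term are our assumptions, not measured numbers from the paper): a full VideoLLM re-attends over a context that grows with the number of frames seen, while the LPMM's per-frame work is a fixed proposals-by-dimension product.

```python
DIM, NUM_PROPOSALS = 512, 8  # assumed sizes, for illustration

def videollm_flops(t):
    # Per-frame cost of a VideoLLM attending over t previously seen
    # frames: grows linearly with elapsed video length.
    return t * DIM * DIM

def lpmm_flops(t):
    # Per-frame cost of embedding-based proposal matching: one
    # (NUM_PROPOSALS x DIM) product, independent of t.
    return NUM_PROPOSALS * DIM

print(lpmm_flops(10) == lpmm_flops(10_000))        # constant per-frame cost
print(videollm_flops(10_000) // videollm_flops(10))  # grows with video length
```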

Efficiency: throughput vs. accuracy Efficiency: streaming inference time over video length

Citation

@misc{zheng2026emgarde,
      title={Em-Garde: A Propose-Match Framework for Proactive Streaming Video Understanding},
      author={Yikai Zheng and Xin Ding and Yifan Yang and Shiqi Jiang and Hao Wu and Qianxi Zhang and Weijun Wang and Ting Cao and Yunxin Liu},
      year={2026},
      eprint={2603.19054},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2603.19054},
}