1Institute for AI Industry Research (AIR), Tsinghua University
2Institute for Interdisciplinary Information Sciences (IIIS), Tsinghua University
3University of Science and Technology of China
4Nanjing University
5Microsoft Research Asia
Recent advances in streaming video understanding have enabled a new interaction paradigm in which models respond proactively to user queries. Current proactive VideoLLMs make a triggering decision at every frame, which creates an efficiency-accuracy dilemma. We propose Em-Garde, a novel framework that decouples semantic understanding from streaming perception. At query time, the Instruction-Guided Proposal Parser (IGPP) transforms user queries into structured, perceptually grounded visual proposals; during streaming, a Lightweight Proposal Matching Module (LPMM) performs efficient embedding-based matching to trigger responses. Experiments on StreamingBench and OVO-Bench demonstrate consistent improvements over prior models in proactive response accuracy and efficiency, validating an effective solution for proactive video understanding under strict computational constraints.
Unlike prior proactive streaming VideoLLMs, which make triggering decisions from scratch at every timestep, Em-Garde decouples semantic understanding from streaming perception, moving heavy instruction-related reasoning out of the streaming loop and enabling efficient, generalizable triggering decisions for proactive responses.
Em-Garde consists of two stages: (1) the Instruction-Guided Proposal Parser (IGPP) transforms user queries into structured, perceptually grounded visual proposals; and (2) the Lightweight Proposal Matching Module (LPMM) performs efficient embedding-based matching to trigger responses during streaming.
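The two-stage design above can be illustrated with a minimal sketch. The paper does not specify the LPMM's implementation; the class name, similarity measure (cosine), and fixed trigger threshold below are all illustrative assumptions. The key point it demonstrates is the decoupling: proposal embeddings are produced once at query time, so each streamed frame needs only a cheap similarity check of constant cost.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class ProposalMatcher:
    """Hypothetical sketch of an LPMM-style trigger loop.

    Proposal embeddings are assumed to come from the IGPP at query time;
    streaming then reduces to per-frame similarity checks, with no
    instruction-related reasoning inside the loop.
    """

    def __init__(self, proposal_embeddings, threshold=0.8):
        self.proposals = proposal_embeddings  # one embedding per visual proposal
        self.threshold = threshold            # assumed fixed trigger threshold

    def step(self, frame_embedding):
        """Process one streamed frame; return the index of the matched
        proposal (triggering a response), or None if no proposal matches."""
        sims = [cosine_sim(frame_embedding, p) for p in self.proposals]
        best = max(range(len(sims)), key=sims.__getitem__)
        return best if sims[best] >= self.threshold else None
```

Because `step` performs only a fixed number of vector comparisons, its cost does not grow with video length, matching the constant per-frame latency the framework targets.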
Em-Garde demonstrates superior proactive response abilities across different benchmarks, outperforming prior streaming VideoLLMs by a significant margin.
While optimized for proactive responses, Em-Garde maintains strong online video question-answering abilities, performing on par with or better than state-of-the-art models.
By decoupling heavy semantic reasoning from the streaming loop, Em-Garde achieves efficient real-time streaming inference. The lightweight LPMM maintains constant per-frame latency even on long videos, while delivering the best throughput-accuracy trade-off among all compared methods.