ReVSeg:
Incentivizing the Reasoning Chain for Video Segmentation with Reinforcement Learning

Yifan Li1,2, Yingda Yin3, Lingting Zhu3, Weikai Chen3, Shengju Qian3, Xin Wang3, Yanwei Fu1,2
1Fudan University, 2Shanghai Innovation Institute, 3LIGHTSPEED
(Left) Through an explicit reasoning chain, our ReVSeg tackles reasoning-focused video object segmentation and accurately grounds objects referenced by complex, abstract real-world queries.
(Right) While the base model and its RL variant struggle on the task, our method achieves strong performance, and RL post-training yields a further substantial boost. The chart reports the $\mathcal{J}\&\mathcal{F}$ metric on the Ref-DAVIS17 (in-domain) and ReasonVOS (out-of-domain) benchmarks.

Video Demo

Video Demo Coming Soon

Stay tuned for exciting visual results!

Abstract

Reasoning-centric video object segmentation is an inherently complex task: the query often refers to dynamics, causality, and temporal interactions rather than static appearances. Yet existing solutions generally collapse these factors into simplified reasoning over latent embeddings, rendering the reasoning chain opaque and essentially intractable. We therefore adopt an explicit decomposition perspective and introduce ReVSeg, which executes reasoning as sequential decisions in the native interface of pretrained vision-language models (VLMs). Rather than folding all reasoning into a single-step prediction, ReVSeg executes three explicit operations -- semantics interpretation, temporal evidence selection, and spatial grounding -- aligned with the models' pretrained capabilities. We further employ reinforcement learning to optimize the multi-step reasoning chain, enabling the model to self-refine its decision quality from outcome-driven signals. Experimental results demonstrate that ReVSeg attains state-of-the-art performance on standard video object segmentation benchmarks and yields interpretable reasoning trajectories.

Method

Overview of ReVSeg. The model runs a two-turn reasoning chain over the input video and query. Turn one analyzes the scene, selects an informative keyframe, and produces a concise description of the target object. Turn two grounds the target on that keyframe by predicting a bounding box. The keyframe-bounding-box pair conditions a video tracker to produce the full segmentation sequence. A reward manager provides concise outcome signals to post-train the VLM via reinforcement learning, improving keyframe selection, grounding accuracy, and overall robustness.
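To make the pipeline concrete, below is a minimal Python sketch of the two-turn inference loop, assuming a chat-style VLM callable and a promptable video tracker. The names (`revseg_infer`, `vlm`, `tracker`) and the JSON answer format are illustrative placeholders, not the released API.

import json
from typing import Callable, List, Sequence, Tuple

Box = Tuple[float, float, float, float]

def revseg_infer(
    frames: Sequence,                       # decoded video frames
    query: str,                             # natural-language query about the video
    vlm: Callable[[dict], str],             # placeholder chat-style VLM: prompt -> text
    tracker: Callable[[Sequence, int, Box], List],  # placeholder promptable video tracker
) -> List:
    """Two-turn reasoning chain: keyframe selection, then spatial grounding,
    then mask propagation by a video tracker (all names are hypothetical)."""
    # Turn 1: interpret the query and select temporal evidence (a keyframe),
    # together with a concise description of the target object.
    reply_1 = vlm({
        "video": frames,
        "text": (
            f"Query: {query}\n"
            'Pick the most informative frame and describe the target. '
            'Answer as JSON: {"keyframe": <int>, "description": "<str>"}'
        ),
    })
    turn_1 = json.loads(reply_1)
    keyframe, description = int(turn_1["keyframe"]), turn_1["description"]

    # Turn 2: ground the described object on the selected keyframe as a box.
    reply_2 = vlm({
        "image": frames[keyframe],
        "text": (
            f"Locate: {description}\n"
            'Answer as JSON: {"bbox": [x1, y1, x2, y2]}'
        ),
    })
    bbox: Box = tuple(json.loads(reply_2)["bbox"])

    # The keyframe-bbox pair conditions the tracker, which propagates the mask
    # across the clip to yield the full segmentation sequence.
    return tracker(frames, keyframe, bbox)

During RL post-training, the same two-turn rollout is sampled and scored by the reward manager, so better keyframe choices and tighter boxes receive higher outcome rewards.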

Experiments

Qualitative Cases

Qualitative cases of ReVSeg. The frame highlighted in red indicates the selected keyframe. The green bounding box within the enlarged keyframe on the right side represents the grounding result.

Training Logs

Training curves of ReVSeg. (a) Format reward $r_f$ rapidly converges to a full score and remains saturated. (b) Temporal reward $r_t$ increases steadily with training. (c) Spatial reward $r_s$ increases steadily with training. (d) Response length remains stable overall, without collapse. (e) Total reward $r$ rises consistently over training. (f) The average number of rollout turns quickly converges to 2.
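As a rough illustration of how such outcome-driven signals could be composed, the sketch below pairs each logged curve with a simple scoring rule. The tagged response format, the visibility-based temporal term, the IoU-based spatial term, and the unweighted sum are assumptions for illustration, not the paper's exact reward definitions.

import re
from typing import Set, Tuple

Box = Tuple[float, float, float, float]

def format_reward(response: str) -> float:
    """r_f: 1 if the response follows the expected structured format, else 0.
    The <think>/<answer> tags are an assumed convention, not the paper's spec."""
    return 1.0 if re.search(r"<think>.*?</think>\s*<answer>.*?</answer>",
                            response, flags=re.DOTALL) else 0.0

def temporal_reward(pred_frame: int, visible_frames: Set[int]) -> float:
    """r_t: 1 if the selected keyframe is one where the target is actually visible."""
    return 1.0 if pred_frame in visible_frames else 0.0

def spatial_reward(pred: Box, gt: Box) -> float:
    """r_s: IoU between the predicted and ground-truth boxes on the keyframe."""
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])
    union = area(pred) + area(gt) - inter
    return inter / union if union > 0 else 0.0

def total_reward(response: str, pred_frame: int, visible_frames: Set[int],
                 pred_box: Box, gt_box: Box) -> float:
    """r: unweighted sum of the three terms (the weighting is an assumption)."""
    return (format_reward(response)
            + temporal_reward(pred_frame, visible_frames)
            + spatial_reward(pred_box, gt_box))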

BibTeX

@article{li2025revseg,
  title={ReVSeg: Incentivizing the Reasoning Chain for Video Segmentation with Reinforcement Learning},
  author={Li, Yifan and Yin, Yingda and Zhu, Lingting and Chen, Weikai and Qian, Shengju and Wang, Xin and Fu, Yanwei},
  journal={arXiv preprint arXiv:2512.02835},
  year={2025}
}