FlowDreamer: A RGB-D World Model with Flow-based Motion Representations for Robot Manipulation

1State Key Laboratory of General Artificial Intelligence (BIGAI),
2Department of Computer Science and Technology, Tsinghua University
3School of Artificial Intelligence, Beijing Normal University
4School of Artificial Intelligence and Automation, Huazhong University of Science and Technology
* Equal Contribution, # Corresponding Author

Abstract

This paper investigates training better visual world models for robot manipulation, i.e., models that predict future visual observations conditioned on past frames and robot actions. Specifically, we consider world models that operate on RGB-D frames (RGB-D world models). In contrast to canonical approaches that handle dynamics prediction mostly implicitly and reconcile it with visual rendering in a single model, we introduce FlowDreamer, which adopts 3D scene flow as an explicit motion representation. FlowDreamer first predicts 3D scene flow from past frames and action conditions with a U-Net; a diffusion model then predicts the future frame conditioned on the scene flow. Despite its modular design, FlowDreamer is trained end-to-end. We conduct experiments on 4 different benchmarks, covering both video prediction and visual planning tasks. The results demonstrate that FlowDreamer outperforms baseline RGB-D world models by 7% in semantic similarity, 11% in pixel quality, and 6% in success rate across various robot manipulation domains.

FlowDreamer

FlowDreamer adopts 3D scene flow as a general motion representation for RGB-D world models and explicitly predicts the flow conditioned on past frames and robot actions.
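To give intuition for how a per-pixel 3D scene flow constrains the next RGB-D frame, here is a minimal numpy sketch that forward-warps an RGB-D observation by a given flow field under a pinhole camera model. This is illustrative only: the function names, shapes, and intrinsics are assumptions, and FlowDreamer conditions a diffusion model on the predicted flow rather than warping pixels directly.

```python
import numpy as np

def unproject(depth, K):
    # Back-project each pixel (u, v, depth) to a 3D point in the camera frame.
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.stack([x, y, depth], axis=-1)  # (H, W, 3)

def warp_rgbd(rgb, depth, flow3d, K):
    """Forward-warp an RGB-D frame by a per-pixel 3D scene flow.

    rgb: (H, W, 3), depth: (H, W), flow3d: (H, W, 3) in camera coordinates,
    K: 3x3 intrinsics. Returns the warped RGB and depth images.
    """
    H, W = depth.shape
    pts = unproject(depth, K) + flow3d            # move points by the flow
    z = np.clip(pts[..., 2], 1e-6, None)          # avoid division by zero
    u = np.round(pts[..., 0] * K[0, 0] / z + K[0, 2]).astype(int)
    v = np.round(pts[..., 1] * K[1, 1] / z + K[1, 2]).astype(int)
    out_rgb = np.zeros_like(rgb)
    out_depth = np.zeros_like(depth)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    out_rgb[v[valid], u[valid]] = rgb[valid]      # scatter moved pixels
    out_depth[v[valid], u[valid]] = z[valid]
    return out_rgb, out_depth
```

A zero flow reproduces the input frame, while a constant translational flow shifts the scene by the corresponding number of pixels; disocclusions appear as holes, which is one reason a learned renderer (here, a diffusion model) is used on top of the flow rather than pure warping.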

Video Prediction

We show video prediction results of FlowDreamer on RT-1 SimplerEnv and Language Table. In each video, the left panel shows the ground truth, the middle panel the predicted video, and the right panel the predicted scene flow.

Visual Planning

We show prediction results on the VP2 benchmark. In each video, the left panel shows the ground truth, the middle panel the predicted video, and the right panel the predicted scene flow.

Citation

@misc{guo2025flowdreamer,
  title={FlowDreamer: A RGB-D World Model with Flow-based Motion Representations for Robot Manipulation},
  author={Guo, Jun and Ma, Xiaojian and Wang, Yikai and Yang, Min and Liu, Huaping and Li, Qing},
  year={2025},
  primaryClass={cs.RO}
}