Interactive 4D Visualization

Explore the 4D reconstruction results of UniCon3R on various dynamic scenes. Use the controls below to orbit the scene, zoom, and fly through the reconstruction.

Left click: drag to rotate the view
Scroll wheel: zoom in and out
Right click: drag to pan the view
Move forward and backward
Move left and right
Move up and down

Abstract

We introduce UniCon3R (Unified Contact-aware 3D Reconstruction), a unified feed-forward framework for online human-scene 4D reconstruction from monocular videos. Recent feed-forward methods enable real-time reconstruction of world-coordinate human motion and scene geometry, but they often produce physically implausible artifacts such as bodies floating above the ground or penetrating the scene. The key reason is that existing approaches fail to model physical interaction between the human and the environment. A natural next step is to predict human-scene contact as an auxiliary output, yet we find this alone is insufficient: contact must actively correct the reconstruction. To address this, we explicitly model interaction by inferring 3D contact from the human pose and scene geometry, and we use this contact as a corrective cue when generating the final pose. This enables UniCon3R to jointly recover high-fidelity scene geometry and spatially aligned 3D humans within the scene. Experiments on standard human-centric video benchmarks, including RICH, EMDB, 3DPW, and SLOPER4D, show that UniCon3R outperforms state-of-the-art baselines in physical plausibility and global human motion estimation while achieving real-time online inference. We experimentally demonstrate that contact serves as a powerful internal prior rather than merely an external metric, establishing a new paradigm for physically grounded joint human-scene reconstruction.

Method Overview

UniCon3R pipeline

UniCon3R extends Human3R with two tightly coupled mechanisms. First, a scene-aware contact prompt fuses current-frame scene features, recurrent scene memory, local metric geometry, and temporal contact history to build a physically meaningful interaction token. Second, contact-guided latent refinement feeds the refined contact token back into the human branch before SMPL-X regression, turning contact from an auxiliary readout into an internal corrective prior.

The result is a unified recurrent reconstruction pipeline that preserves feed-forward efficiency while producing more physically grounded human motion and better body-scene alignment.
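The two mechanisms above can be sketched as a single module. This is an illustrative PyTorch sketch, not the released implementation: the module name, feature dimensions, and the simple concatenate-and-project fusion are all assumptions; only the overall data flow (four cue streams fused into a contact token, a dense per-vertex contact readout, and a residual correction of the human latent before SMPL-X regression) follows the description above.

```python
import torch
import torch.nn as nn

class ContactGuidedRefinement(nn.Module):
    """Illustrative sketch of the two UniCon3R mechanisms described above.
    Dimensions and fusion design are assumptions, not the paper's exact layers."""

    def __init__(self, dim=256, num_vertices=10475):  # 10475 = SMPL-X vertex count
        super().__init__()
        # (1) Scene-aware contact prompt: fuse current-frame scene features,
        # recurrent scene memory, local metric geometry, and contact history.
        self.fuse = nn.Sequential(
            nn.Linear(4 * dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )
        # Dense per-vertex contact probabilities (auxiliary readout).
        self.contact_head = nn.Linear(dim, num_vertices)
        # (2) Contact-guided latent refinement: residual update of the human token.
        self.refine = nn.Linear(2 * dim, dim)

    def forward(self, scene_feat, scene_mem, geom_feat, contact_hist, human_tok):
        # Build the interaction token from the four cue streams.
        contact_tok = self.fuse(
            torch.cat([scene_feat, scene_mem, geom_feat, contact_hist], dim=-1)
        )
        # Per-vertex contact probabilities over the body mesh.
        contact_prob = torch.sigmoid(self.contact_head(contact_tok))
        # Feed the contact token back into the human latent before
        # SMPL-X parameter regression, so contact acts as a corrective prior.
        human_tok = human_tok + self.refine(
            torch.cat([human_tok, contact_tok], dim=-1)
        )
        return human_tok, contact_prob
```

The residual form keeps the human branch's original prediction as a baseline and lets the contact cue nudge it toward physically consistent poses, rather than replacing it outright.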

Contact Prediction

UniCon3R predicts dense per-vertex contact probabilities and uses them internally to refine the human reconstruction rather than treating contact as a detached side output.

UniCon3R contact comparison

Qualitative contact comparison against DECO on in-the-wild web videos. Contact vertices are shown in green on the mesh surface.

Global Trajectory Grounding

On EMDB-2, UniCon3R improves world-aligned trajectory quality and reduces drift relative to Human3R while preserving a unified streaming pipeline.
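Drift on a world-aligned trajectory is commonly summarized as endpoint error relative to the distance traveled. A minimal sketch of such a measure follows; this is a generic formulation for illustration, not necessarily the exact EMDB-2 evaluation protocol.

```python
import numpy as np

def trajectory_drift(pred, gt):
    """Endpoint drift normalized by ground-truth path length.

    pred, gt: (T, 3) world-frame root trajectories.
    Illustrative metric; benchmark protocols may differ.
    """
    # Total distance covered by the ground-truth trajectory.
    path_len = np.linalg.norm(np.diff(gt, axis=0), axis=1).sum()
    # Position error at the final frame.
    end_err = np.linalg.norm(pred[-1] - gt[-1])
    return end_err / max(path_len, 1e-8)
```

For example, a 0.5 m endpoint error over a 10 m walk gives a drift of 5%.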

Global motion comparison

Local Mesh Reconstruction

UniCon3R preserves competitive local mesh accuracy on SLOPER4D and 3DPW while substantially reducing maximum scene penetration on SLOPER4D.
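Scene penetration can be measured by how far body vertices sink below the scene surface. The sketch below assumes access to a signed distance function for the scene with the convention that negative values lie inside geometry; the function name and interface are illustrative, not UniCon3R's actual evaluation code.

```python
import numpy as np

def max_scene_penetration(vertices, scene_sdf):
    """Maximum penetration depth of body vertices into the scene.

    vertices: (N, 3) body mesh vertices in world coordinates.
    scene_sdf: maps (N, 3) points to signed distances, with negative
    values inside scene geometry (an assumed convention).
    """
    d = scene_sdf(vertices)
    # Clamp non-penetrating vertices to zero, then take the deepest one.
    return float(np.maximum(0.0, -d).max())

# Toy scene: a flat ground plane at z = 0, whose SDF is simply z.
ground_sdf = lambda pts: pts[:, 2]
```

With the toy ground plane, a vertex at z = -0.05 yields a maximum penetration of 5 cm while vertices above the plane contribute nothing.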

Local mesh reconstruction comparison