UniSplat is a feed-forward reconstruction framework for autonomous driving scenarios. Unlike traditional methods that require dense, overlapping views and per-scene optimization, UniSplat achieves state-of-the-art performance through a novel unified 3D latent scaffold representation that integrates spatial and temporal information.
- [Dec 7, 2025] Demo code and pretrained weights for the Waymo Dataset have been released. Demo for novel view synthesis (rotation and shift) and scene completion will be released soon.
First, clone this repository and install the dependencies.
```bash
git clone git@github.com:chenshi3/UniSplat.git
cd UniSplat
pip install -r requirements.txt

# install the 3DGS rasterizer
pip install -e submodules/diff-gaussian-rasterization-feature
pip install -e submodules/simple-knn-v2
```
Then download the pretrained model weights and example data from Hugging Face.
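As a convenience, the snippet below is a minimal sketch of fetching the released assets with `huggingface_hub`. The repository id `chenshi3/UniSplat` and the local directory layout are assumptions; substitute the ids listed on the project's Hugging Face page.

```python
# Minimal sketch: download checkpoint and demo data via huggingface_hub.
# NOTE: repo_id below is an assumption, not confirmed by this README.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="chenshi3/UniSplat", local_dir="./pretrained")
print(f"Downloaded assets to {local_dir}")
```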
Run UniSplat on the provided example data to test the model:
```bash
python demo.py --load_from /path/to/checkpoint.pth --data_path /path/to/demo_data
```

Arguments:
- `--load_from`: Path to the pretrained model checkpoint
- `--data_path`: Path to the directory containing example data
The script will process the input data and save the rendered images along with dynamic masks to the output directory.
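To sanity-check the output, the sketch below overlays a dynamic mask on the corresponding rendered image. The output file names (`rgb_000.png`, `mask_000.png`) and the `output/` directory are assumptions about what `demo.py` writes; adjust them to the actual filenames.

```python
# Minimal sketch: overlay a predicted dynamic mask on a rendered frame.
# File names and output directory are assumptions; adapt to demo.py's output.
from pathlib import Path

import numpy as np
from PIL import Image

out_dir = Path("output")  # assumed output directory

rgb = np.asarray(Image.open(out_dir / "rgb_000.png"), dtype=np.float32) / 255.0
mask = np.asarray(Image.open(out_dir / "mask_000.png"), dtype=np.float32) / 255.0
if mask.ndim == 2:
    mask = mask[..., None]  # broadcast single-channel mask over RGB

# Tint dynamic regions red to visually verify the mask alignment.
overlay = rgb * (1.0 - 0.5 * mask) + np.array([1.0, 0.0, 0.0]) * (0.5 * mask)
Image.fromarray((overlay * 255).astype(np.uint8)).save(out_dir / "overlay_000.png")
```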
If you find our work helpful, please consider citing it as follows.
```bibtex
@article{shi2025unisplat,
  title={UniSplat: Unified Spatio-Temporal Fusion via 3D Latent Scaffolds for Dynamic Driving Scene Reconstruction},
  author={Shi, Chen and Shi, Shaoshuai and Lyu, Xiaoyang and Liu, Chunyang and Sheng, Kehua and Zhang, Bo and Jiang, Li},
  journal={arXiv preprint arXiv:2511.04595},
  year={2025}
}
```
UniSplat uses code from a few open source repositories. Without the efforts of these folks (and their willingness to release their implementations), UniSplat would not be possible. Thanks to these great repositories: VGGT, MoGe, Dino, Pi3, Feature 3DGS, Omni-Scene.