Given noisy raw input images, our method recovers camera poses and depth maps, which we then use for dense neural reconstruction. Please use the tabs below to toggle between different exposure times, which correspond to different signal-to-noise ratios.
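For intuition on why exposure time sets the signal-to-noise ratio, the minimal sketch below simulates a linear raw capture with Poisson shot noise and Gaussian read noise at a short and a long exposure. The flux level, read-noise value, and noise model are illustrative assumptions, not the sensor model used in our pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_raw(flux, exposure_s, read_noise_e=2.0):
    """Toy linear raw measurement: Poisson shot noise plus Gaussian read noise.

    `flux` is photoelectrons per second per pixel; values are illustrative only.
    """
    electrons = rng.poisson(flux * exposure_s).astype(np.float64)
    electrons += rng.normal(0.0, read_noise_e, size=electrons.shape)
    return electrons

flux = np.full((256, 256), 5.0)      # dim, flat scene: 5 e-/s per pixel
for t in (1 / 30, 1 / 2):            # short vs. long exposure
    raw = simulate_raw(flux, t)
    snr = (flux.mean() * t) / raw.std()
    print(f"exposure {t:.3f}s  ->  SNR ~ {snr:.2f}")
```

Longer exposures collect more photoelectrons, so the signal grows faster than the shot and read noise and the SNR of the raw frame improves.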
Our method recovers accurate camera poses and high-quality point clouds even in extremely low-light conditions. Below, we visualize the estimated point clouds for various scenes compared against baseline methods.
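To relate the estimated depth maps and poses to the visualized point clouds, the snippet below shows the standard pinhole back-projection that lifts a depth map into a world-frame point cloud. The function name and toy inputs are placeholders for illustration, not part of our released code.

```python
import numpy as np

def depth_to_pointcloud(depth, K, cam_to_world):
    """Back-project a metric depth map into a world-frame point cloud.

    `depth` is (H, W), `K` is the 3x3 intrinsics, `cam_to_world` is a 4x4 pose.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T                   # camera-space directions
    pts_cam = rays * depth.reshape(-1, 1)             # scale by depth
    pts_h = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    return (pts_h @ cam_to_world.T)[:, :3]            # world-space XYZ

# Toy usage: identity pose, constant 2 m depth on a 2x2 image.
K = np.array([[500.0, 0.0, 1.0], [0.0, 500.0, 1.0], [0.0, 0.0, 1.0]])
print(depth_to_pointcloud(np.full((2, 2), 2.0), K, np.eye(4)).shape)  # (4, 3)
```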
Below, we showcase neural reconstruction results on low-light images.
The videos visualize the input view, the reference novel view synthesis (NVS), and depth, followed by a comparison of Dark3R against Dark3RNeRF + MASt3RSfM, RawNeRF, and LE3D.
Please use the tabs below to toggle between the different scenes. Results are best viewed on a desktop.
Our pipeline enables the reconstruction of high dynamic range (HDR) radiance fields from noisy raw input images.
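As a simplified sketch of how linear raw measurements taken at different exposures relate to HDR radiance, the example below divides each frame by its exposure time and averages the results with a hat-shaped confidence weight. This is generic exposure fusion for illustration only, not the radiance-field optimization our pipeline performs; the saturation level and weighting are assumed values.

```python
import numpy as np

def merge_linear_raw_to_hdr(raws, exposures, sat_level=1.0):
    """Merge linear raw frames at different exposures into an HDR radiance map.

    Each frame estimates radiance as raw / exposure; frames are averaged with
    weights that down-weight saturated and very dark pixels.
    """
    num = np.zeros_like(raws[0], dtype=np.float64)
    den = np.zeros_like(raws[0], dtype=np.float64)
    for raw, t in zip(raws, exposures):
        w = np.clip(raw / sat_level, 0.0, 1.0)
        w = w * (1.0 - w) + 1e-6          # hat weight: trust mid-range pixels
        num += w * (raw / t)
        den += w
    return num / den

# Toy usage: two exposures of the same (noiseless) linear scene.
scene = np.array([[0.01, 0.2], [0.5, 0.9]])
raws = [np.clip(scene * t, 0.0, 1.0) for t in (0.1, 1.0)]
print(merge_linear_raw_to_hdr(raws, [0.1, 1.0]))
```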
@article{guo2026dark3r,
  title={Dark3R: Learning Structure from Motion in the Dark},
  author={Andrew Y. Guo and Anagh Malik and SaiKiran Tedla and Yutong Dai and Yiqian Qin and Zach Salehe and Benjamin Attal and Sotiris Nousias and Kiriakos N. Kutulakos and David B. Lindell},
  year={2026},
  journal={arXiv preprint arXiv:2603.05330},
  url={https://arxiv.org/abs/2603.05330},
}