1University of Toronto
2Vector Institute
3York University
4Sony Corporation of America
5Harvard University
6Purdue University
Given noisy raw input images, our method recovers camera poses and depth maps, which we then use for dense neural reconstruction. Please use the tabs below to toggle between different exposure times, which correspond to different signal-to-noise ratios.
Our method recovers accurate camera poses and high-quality point clouds even in extremely low-light conditions. Below, we visualize the estimated point clouds for various scenes compared against baseline methods.
Below, we showcase neural reconstruction results on low-light images.
The videos visualize the input view and the reference novel view synthesis (NVS) and depth, followed by a comparison of Dark3R against Dark3RNeRF + MASt3R-SfM, RawNeRF, and LE3D.
Please use the tabs below to toggle between the different scenes. Results are best viewed on a desktop.
Our pipeline reconstructs High Dynamic Range (HDR) radiance fields from noisy raw input images.