Given a casually captured video with auto exposure, camera motion blur, and significant exposure-time changes, we train a 3DGS model to reconstruct an HDR scene.
We design a unified model based on the physical image formation process, integrating camera motion blur and exposure-induced brightness variations.
This allows for the joint estimation of camera motion, exposure time, and camera response curve while reconstructing the HDR scene.
After training, our method can sharpen the training images and render both HDR and LDR images from specified poses.
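The physical image formation process described above can be sketched as follows. This is a minimal illustrative model, not the paper's implementation: it assumes a renderer `render_hdr` that produces an HDR radiance image for a given camera pose, approximates motion blur by averaging renders along the intra-exposure camera trajectory, scales by exposure time, and applies a hypothetical gamma-style camera response curve.

```python
import numpy as np

def crf(radiance, gamma=2.2):
    # Hypothetical camera response curve: a simple gamma mapping
    # (the actual method jointly estimates this curve during training).
    return np.clip(radiance, 0.0, 1.0) ** (1.0 / gamma)

def render_blurred_ldr(render_hdr, poses_in_exposure, exposure_time):
    # render_hdr(pose) -> HDR radiance image at that pose (assumed renderer).
    # Motion blur: average HDR renders over poses sampled within the exposure.
    hdr = np.mean([render_hdr(p) for p in poses_in_exposure], axis=0)
    # Exposure-induced brightness variation: scale radiance by exposure time,
    # then map through the response curve to obtain the observed LDR image.
    return crf(hdr * exposure_time)
```

In this forward model, the camera poses, exposure time, and response curve are the quantities the method optimizes jointly so that the synthesized blurred LDR images match the captured frames.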