Understanding the limits of neural rendering for 3D reconstruction

 

Overview

Modern 3D reconstruction techniques rely on pixel consistency across images to recover 3D geometry from 2D views. These include Neural Radiance Fields (NeRF) [1] and 3D Gaussian Splatting [2], currently the most accurate methods in the state of the art. However, they struggle with low-contrast inputs, transparent or reflective objects, and captures with too few views or limited angular coverage.
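
To make the role of pixel consistency concrete, below is a minimal PyTorch sketch (not taken from either paper) of the photometric loss that NeRF- and splatting-style pipelines minimize during training; the model and ray-sampling names in the usage comments are hypothetical placeholders.

    import torch

    def photometric_loss(rendered: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        """Mean squared error between rendered and captured pixel colors.

        Both tensors have shape (N, 3), RGB in [0, 1]. Geometry is only
        constrained where colors are consistent across views, which is why
        low-contrast, transparent, or reflective regions give a weak or
        misleading supervision signal.
        """
        return torch.mean((rendered - target) ** 2)

    # Hypothetical usage: `model` renders pixel colors for a batch of rays.
    # rays, target_rgb = sample_rays(dataset)          # not defined here
    # loss = photometric_loss(model(rays), target_rgb)
    # loss.backward()
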

 

Objectives

  • Collect inputs that are challenging for 3D reconstruction

  • Quantitatively evaluate these limitations and understand them with ablation studies

  • Compare various state-of-the-art methods

  • Explore ways to mitigate these limitations

 

Prerequisites

  • Python proficiency, familiarity with PyTorch

  • Experience running external codebases from the command line (bash)

  • Understanding of 3D camera models is a plus

 

Contact

Both Bachelor's and Master's students are welcome to apply. This project will be conducted in collaboration with CVLab. Contact [email protected] for more information.

 

References

[1] NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis, ECCV 2020

[2] 3D Gaussian Splatting for Real-Time Radiance Field Rendering, SIGGRAPH 2023