Neural Part Priors: Learning to Optimize Part-Based Object Completion in RGB-D Scans

Technical University of Munich
CVPR 2023 (Highlight)

Our Neural Part Priors learn latent spaces of object part geometries, which we can fit to partial, real-world RGB-D scans of a scene to recover the complete part decompositions of its objects

Abstract

3D object recognition has seen significant advances in recent years, showing impressive performance on real-world 3D scan benchmarks, but it lacks object part reasoning, which is fundamental to higher-level scene understanding such as inter-object similarities or object functionality. We thus propose to leverage large-scale synthetic datasets of 3D shapes annotated with part information to learn Neural Part Priors (NPPs), optimizable spaces characterizing geometric part priors. Crucially, we can optimize over the learned part priors at test time to fit real-world scanned 3D scenes, enabling robust part decomposition of the real objects in these scenes that also estimates the complete geometry of each object while fitting accurately to the observed real geometry. Moreover, this enables global optimization over geometrically similar detected objects in a scene, which often share strong geometric commonalities, yielding scene-consistent part decompositions. Experiments on the ScanNet dataset demonstrate that NPPs significantly outperform the state of the art in part decomposition and object completion in real-world scenes.
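The core idea of test-time optimization over a learned latent prior can be illustrated with a toy sketch. Here a frozen random linear map stands in for the learned part-geometry decoder (the actual NPP decoder is a neural network, and the loss terms below are simplified assumptions, not the paper's exact formulation): we gradient-descend on a latent code so its decoded SDF matches a *partial* observation, and the prior lets the fitted code predict the unobserved geometry as well.

```python
import numpy as np

# Toy stand-in for a learned part-geometry decoder: a frozen linear map
# from a latent code to SDF values at fixed query points.
# (The real NPP decoder is a neural network; this is only a sketch.)
rng = np.random.default_rng(0)
latent_dim, n_points = 8, 64
W = rng.normal(size=(n_points, latent_dim))  # frozen "decoder" weights

def decode(z):
    return W @ z  # predicted SDF values at all query points

# Ground-truth latent and a partial observation (only half the points
# are seen), mimicking fitting to an incomplete RGB-D scan.
z_true = rng.normal(size=latent_dim)
observed = np.arange(n_points // 2)          # indices of observed points
sdf_obs = decode(z_true)[observed]

# Test-time optimization: gradient descent on z to match the observed
# SDF values, with a small L2 prior keeping z near the learned space.
z = np.zeros(latent_dim)
lr, prior_weight = 0.01, 1e-3
for _ in range(500):
    residual = decode(z)[observed] - sdf_obs
    grad = W[observed].T @ residual + prior_weight * z
    z -= lr * grad

# The fitted code reconstructs the complete geometry, including the
# unobserved points, because the prior constrains the solution.
full_error = np.abs(decode(z) - decode(z_true)).mean()
```

The same mechanism extends to the scene-level setting: latents of geometrically similar detected objects can be optimized jointly, e.g. by adding a consistency term between their codes to the loss above.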

Results

We evaluate our Neural Part Priors for semantic part completion on real-world RGB-D scans from the ScanNet dataset, using Scan2CAD+PartNet ground truth. Our joint optimization across part priors enables more consistent and accurate part decompositions.

BibTeX


      @inproceedings{bokhovkin2022neuralparts,
          title     = {Neural Part Priors: Learning to Optimize Part-Based Object Completion in RGB-D Scans},
          author    = {Bokhovkin, Alexey and Dai, Angela},
          booktitle = {2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
          year      = {2023}
      }