Mesh2Tex: Generating Mesh Textures from Image Queries

Alexey Bokhovkin1    Shubham Tulsiani2    Angela Dai1
1Technical University of Munich
2Carnegie Mellon University
ICCV 2023

Mesh2Tex learns realistic object texturing for a given shape geometry through a hybrid mesh-field texture representation that supports high-resolution texture generation across varied shape meshes. The learned texture manifold enables texture transfer optimization from image queries, producing perceptually consistent texturing in this challenging content creation scenario.

Abstract

Remarkable advances have been achieved recently in learning neural representations that characterize object geometry, whereas generating textured objects suitable for downstream applications and 3D rendering remains at an early stage. In particular, reconstructing textured geometry from images of real objects is a significant challenge, as the reconstructed geometry is often inexact, which makes realistic texturing difficult. We present Mesh2Tex, which learns a realistic object texture manifold from uncorrelated collections of 3D object geometry and photorealistic RGB images by leveraging a hybrid mesh-neural-field texture representation. Our texture representation enables compact encoding of high-resolution textures as a neural field in the barycentric coordinate system of the mesh faces. The learned texture manifold enables effective navigation to generate an object texture for a given 3D object geometry that matches an input RGB image, and remains robust even in challenging real-world scenarios where the mesh geometry only approximately matches the underlying geometry in the RGB image. Mesh2Tex can effectively generate realistic object textures for an object mesh to match real image observations, a step towards the digitization of real environments, significantly improving over the previous state of the art.
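
To make the representation concrete, below is a minimal sketch of a barycentric texture field: one learnable latent code per mesh face, plus a small MLP that maps the face code together with barycentric coordinates to RGB. All names and sizes (FaceTextureField, code_dim, the MLP layout) are illustrative assumptions, not the paper's actual architecture.

      # Minimal sketch of a barycentric texture field; sizes and names are
      # illustrative assumptions, not the paper's architecture.
      import torch
      import torch.nn as nn

      class FaceTextureField(nn.Module):
          def __init__(self, num_faces, code_dim=64, hidden=128):
              super().__init__()
              # One learnable latent code per mesh face.
              self.face_codes = nn.Embedding(num_faces, code_dim)
              # Small MLP decodes (face code, barycentric coords) -> RGB.
              self.mlp = nn.Sequential(
                  nn.Linear(code_dim + 3, hidden), nn.ReLU(),
                  nn.Linear(hidden, hidden), nn.ReLU(),
                  nn.Linear(hidden, 3), nn.Sigmoid(),
              )

          def forward(self, face_ids, barycentric):
              # face_ids: (N,) long; barycentric: (N, 3), rows sum to 1.
              codes = self.face_codes(face_ids)
              return self.mlp(torch.cat([codes, barycentric], dim=-1))

      field = FaceTextureField(num_faces=5000)
      bary = torch.rand(1024, 3).softmax(dim=-1)          # valid barycentric coords
      rgb = field(torch.randint(0, 5000, (1024,)), bary)  # (1024, 3) colors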

Video

Unconditional Texture Generation

We evaluate Mesh2Tex on unconditional texturing of ShapeNet meshes in comparison with the state of the art. Our hybrid mesh-field texture representation enables richer texture generation with finer-scale detail than state-of-the-art baselines.
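
As a rough illustration of how unconditional samples could be drawn from such a manifold, the sketch below maps a global latent code to per-face texture codes that a field like the one above would decode. The mapping network and all dimensions are assumptions, not the released model.

      # Sketch: unconditional texture sampling via a latent -> per-face-code
      # mapping network; all sizes are hypothetical.
      import torch
      import torch.nn as nn

      class TextureSampler(nn.Module):
          def __init__(self, num_faces, z_dim=512, code_dim=64):
              super().__init__()
              self.num_faces, self.code_dim = num_faces, code_dim
              # Maps a global latent code to one texture code per face.
              self.mapping = nn.Sequential(
                  nn.Linear(z_dim, 256), nn.ReLU(),
                  nn.Linear(256, num_faces * code_dim),
              )

          def forward(self, z):
              # z: (B, z_dim) -> per-face codes: (B, num_faces, code_dim)
              return self.mapping(z).view(-1, self.num_faces, self.code_dim)

      sampler = TextureSampler(num_faces=5000)
      face_codes = sampler(torch.randn(2, 512))  # two texture samples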

Texture Generation from Real Image Queries

Mesh2Tex can predict textures from real images (ScanNet) for closely matching shape geometry (ShapeNet). Despite inexact geometry and pose, it produces perceptually consistent texturing.
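
Conceptually, texture prediction from an image query can be posed as test-time optimization over the learned manifold: find a latent texture code whose differentiable rendering matches the query. The sketch below shows such a loop with a toy stand-in renderer so that it runs end to end; the actual renderer and losses in the paper (including patch-based terms) differ.

      # Sketch of test-time optimization against an image query, assuming some
      # differentiable renderer render(z) -> (3, H, W); a toy linear renderer
      # stands in here so the loop runs end to end.
      import torch
      import torch.nn.functional as F

      torch.manual_seed(0)
      target = torch.rand(3, 64, 64)                   # RGB image query
      toy = torch.nn.Linear(512, 3 * 64 * 64)          # stand-in renderer
      render = lambda z: toy(z).view(3, 64, 64).sigmoid()

      z = torch.zeros(512, requires_grad=True)         # latent texture code
      opt = torch.optim.Adam([z], lr=1e-2)
      for step in range(200):
          opt.zero_grad()
          loss = F.l1_loss(render(z), target)          # paper uses richer losses
          loss.backward()
          opt.step()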

Texture Transfer

We also test Mesh2Tex on texture transfer from real images (ScanNet, CompCars) to arbitrary shape geometry of the same class category (ShapeNet). In this challenging scenario, our normalized-object-coordinate (NOC)-guided patch optimization produces plausible texturing from image queries.
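
The intuition behind NOC guidance is that normalized object coordinates provide dense correspondences between the query image and the rendered mesh, so image patches can be compared at corresponding surface points rather than at identical pixel locations, even under inexact pose. The function below is a hedged sketch of such a patch loss; the nearest-neighbor matching, shapes, and patch size are assumptions rather than the paper's exact formulation.

      # Sketch of a NOC-guided patch loss: match pixels by nearest neighbor in
      # NOC space, then compare k x k patches at corresponding locations.
      import torch
      import torch.nn.functional as F

      def noc_patch_loss(render_rgb, query_rgb, render_noc, query_noc, k=7):
          # *_rgb: (3, H, W) colors; *_noc: (3, H, W) normalized object coords.
          q = query_noc.reshape(3, -1)              # (3, HW)
          r = render_noc.reshape(3, -1)             # (3, HW)
          # For each query pixel, nearest rendered pixel in NOC space.
          nn_idx = torch.cdist(q.t(), r.t()).argmin(dim=1)   # (HW,)
          # Extract k x k patches around every pixel and compare matched pairs.
          unfold = torch.nn.Unfold(kernel_size=k, padding=k // 2)
          q_patches = unfold(query_rgb[None]).squeeze(0)     # (3*k*k, HW)
          r_patches = unfold(render_rgb[None]).squeeze(0)    # (3*k*k, HW)
          return F.l1_loss(q_patches, r_patches[:, nn_idx])

      H = W = 32
      loss = noc_patch_loss(torch.rand(3, H, W), torch.rand(3, H, W),
                            torch.rand(3, H, W), torch.rand(3, H, W))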

BibTeX


      @inproceedings{bokhovkin2023mesh2tex,
          title     = {Mesh2Tex: Generating Mesh Textures from Image Queries},
          author    = {Bokhovkin, Alexey and Tulsiani, Shubham and Dai, Angela},
          booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
          year      = {2023}
      }