Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images

Nanyang Wang¹*, Yinda Zhang²*, Zhuwen Li³*, Yanwei Fu⁴, Wei Liu⁵, Yu-Gang Jiang¹†

Abstract

We propose an end-to-end deep learning architecture that produces a 3D shape as a triangular mesh from a single color image. Limited by the nature of deep neural networks, previous methods usually represent a 3D shape as a volume or a point cloud, and it is non-trivial to convert these representations into the more ready-to-use mesh model. Unlike existing methods, our network represents the 3D mesh in a graph-based convolutional neural network and produces correct geometry by progressively deforming an ellipsoid, leveraging perceptual features extracted from the input image. We adopt a coarse-to-fine strategy to keep the whole deformation procedure stable, and define various mesh-related losses that capture properties at different levels to guarantee visually appealing and physically accurate 3D geometry. Extensive experiments show that our method not only qualitatively produces mesh models with better details, but also achieves higher 3D shape estimation accuracy than the state-of-the-art.
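
As an illustration of the pipeline described above, the following is a minimal sketch, assuming a PyTorch-style implementation, of a graph-convolution block that deforms mesh vertices: each vertex mixes its own features with those of its neighbors over the mesh graph, fuses in perceptual features pooled from the image, and predicts a 3D coordinate offset. All names (GraphConv, DeformBlock, adj, img_feats) and layer sizes are illustrative assumptions, not the authors' released code.

    import torch
    import torch.nn as nn


    class GraphConv(nn.Module):
        """One graph convolution: each vertex mixes its own features with the
        average of its neighbors' features over the mesh graph."""

        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.w_self = nn.Linear(in_dim, out_dim)    # transform of the vertex itself
            self.w_neigh = nn.Linear(in_dim, out_dim)   # transform of aggregated neighbors

        def forward(self, x, adj):
            # x:   (V, in_dim) per-vertex features
            # adj: (V, V) sparse, row-normalized mesh adjacency
            neigh = torch.sparse.mm(adj, x)             # average of neighbor features
            return torch.relu(self.w_self(x) + self.w_neigh(neigh))


    class DeformBlock(nn.Module):
        """One deformation stage: fuse image features with the current vertex
        state, run graph convolutions, and predict a per-vertex 3D offset."""

        def __init__(self, feat_dim, img_dim, hidden=128):
            super().__init__()
            self.gc1 = GraphConv(3 + feat_dim + img_dim, hidden)
            self.gc2 = GraphConv(hidden, hidden)
            self.offset = nn.Linear(hidden, 3)          # per-vertex displacement

        def forward(self, coords, feats, img_feats, adj):
            # coords:    (V, 3) current vertex positions (initially an ellipsoid)
            # feats:     (V, feat_dim) vertex features carried over from the previous stage
            # img_feats: (V, img_dim) perceptual features pooled from the image per vertex
            h = torch.cat([coords, feats, img_feats], dim=-1)
            h = self.gc2(self.gc1(h, adj), adj)
            return coords + self.offset(h), h           # deformed coordinates, updated features

Stacking several such blocks, each operating on a progressively subdivided mesh that starts from the initial ellipsoid, corresponds to the coarse-to-fine deformation strategy the abstract refers to.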

Benchmarks

Benchmark                                Methodology   Metrics
3d-object-reconstruction-on-data3dr2n2   Pixel2Mesh    Avg F1: 59.72
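
The Avg F1 number above is a point-based F-score. A common way to compute it is sketched below, assuming point sets sampled from the predicted and ground-truth meshes and a distance threshold tau; the sampling density and threshold are benchmark-specific, and this is not the official evaluation script.

    import numpy as np


    def f_score(pred_pts, gt_pts, tau):
        """F1 between two point sets: precision counts predicted points within
        tau of the ground truth, recall counts ground-truth points within tau
        of the prediction."""
        # pred_pts: (N, 3), gt_pts: (M, 3)
        d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)  # (N, M) pairwise distances
        precision = (d.min(axis=1) < tau).mean()   # predicted points close to the ground truth
        recall = (d.min(axis=0) < tau).mean()      # ground-truth points covered by the prediction
        return 2.0 * precision * recall / max(precision + recall, 1e-8)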
