3D Model Reconstruction Using GAN and 2.5D Sketches from 2D Image

  • Quach Thi Bich Nhuong, Dong Nai Technology University
  • Pham Dinh Sac, Dong Nai Technology University
  • Nguyen Minh Nhut, Dong Nai Technology University
  • Hien Thanh Le, Dong Nai Technology University
Keywords: Reconstruction, Convolutional neural network, Deep learning, 2.5D sketch, 3D shape

Abstract

In the current Industry 4.0 era, many fields such as medicine, cinema, and architecture often use 3D models to visualize objects. However, there is not always enough information or equipment to build a 3D model directly. Another approach is to take multiple 2D images and convert them into a 3D shape, but this method requires images of the object captured from several different angles. To get around this, we use a 2.5D sketch as an intermediate representation when going from 2D to 3D: it is easier to produce a 2.5D sketch from a single 2D photo than to convert the photo directly into a 3D shape. In this paper, we propose a model consisting of three modules: the first converts a 2D image into a 2.5D sketch; the second maps the 2.5D sketch to a 3D shape; and the third refines the newly created 3D shape. Experiments on the ShapeNet Core55 dataset show that our model gives better results than traditional models.
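For illustration, the three-module pipeline described in the abstract can be summarized as stacked neural networks. The following is a minimal PyTorch sketch assuming a 128x128 RGB input, a five-channel 2.5D sketch (depth, surface normals, silhouette), and a 32x32x32 voxel output; all layer widths, module names, and resolutions are illustrative assumptions, not the authors' published implementation.

# Minimal sketch of the three-module pipeline; sizes and names are assumptions.
import torch
import torch.nn as nn

class SketchEstimator(nn.Module):
    """Module 1: predict a 2.5D sketch (depth, normals, silhouette) from an RGB image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        # 5 output channels: 1 depth + 3 surface normals + 1 silhouette
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 5, 4, stride=2, padding=1),
        )

    def forward(self, image):
        return self.decoder(self.encoder(image))

class VoxelGenerator(nn.Module):
    """Module 2: map the 2.5D sketch to a coarse 32x32x32 voxel occupancy grid (generator role in the GAN)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(5, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 4 * 4 * 4 * 128),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, sketch):
        z = self.encoder(sketch).view(-1, 128, 4, 4, 4)
        return self.decoder(z)

class VoxelRefiner(nn.Module):
    """Module 3: refine the coarse voxel grid with 3D convolutions and a residual connection."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 1, 3, padding=1),
        )

    def forward(self, voxels):
        return torch.sigmoid(self.net(voxels) + voxels)

# Example forward pass: one 128x128 RGB image -> refined 32^3 voxel grid.
image = torch.randn(1, 3, 128, 128)
sketch = SketchEstimator()(image)    # (1, 5, 128, 128)
coarse = VoxelGenerator()(sketch)    # (1, 1, 32, 32, 32)
refined = VoxelRefiner()(coarse)     # (1, 1, 32, 32, 32)

In this sketch the GAN discriminator and the training losses are omitted; only the forward data flow from 2D image through 2.5D sketch to refined 3D voxels is shown.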


Published
2022-09-29
How to Cite
[1] Q. Bich Nhuong, P. Sac, N. Nhut, and H. Le, “3D Model Reconstruction Using GAN and 2.5D Sketches from 2D Image”, JTIP, vol. 15, no. 2, pp. 1-11, Sep. 2022.