A scheme for reconstructing face from shading using smooth projected polygon representation NN

Mohamad Ivan Fanany, Masayoshi Ohno, Itsuo Kumazawa

Research output: Contribution to conference › Paper › peer-review

7 Citations (Scopus)

Abstract

In this paper, we present a neural-network learning scheme for face reconstruction. This scheme, which we call the Smooth Projected Polygon Representation Neural Network (SPPRNN), successively refines the polygon vertex parameters of an initial 3D shape based on depth maps of several calibrated images taken from multiple views. The depth maps, obtained by applying the Tsai-Shah shape-from-shading (SFS) algorithm, can be regarded as partial 3D shapes of the face to be reconstructed. The reconstruction is finalized by mapping the texture of the face images onto the initial 3D shape. Three issues concerning the effectiveness of this scheme are investigated in this paper. First, how effectively SFS provides partial 3D shapes compared with using the 2D images directly. Second, how essential a smooth projected polygon model is for approximating the face structure and improving the convergence rate of the scheme. Third, how an appropriate initial 3D shape should be selected and used to improve model resolution and learning stability. By carefully addressing these three issues, our experiments show that a compact and realistic 3D model of a human (mannequin) face can be obtained.
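
The first stage of the pipeline, recovering a depth map from shading, lends itself to a short illustration. The sketch below is not the authors' code; it is a minimal NumPy rendering of the Tsai-Shah linear-approximation SFS iteration, assuming a Lambertian surface, a known distant light direction, and a flat initial depth. The function name, the iteration count, and the wrap-around boundary handling are our own simplifications.

```python
import numpy as np

def tsai_shah_sfs(image, light, n_iter=200):
    """Sketch of Tsai-Shah linear-approximation shape from shading.

    image : 2D array of intensities normalized to [0, 1].
    light : (sx, sy, sz) direction of the distant light source.
    Returns a relative depth map Z of the same shape as `image`.
    """
    sx, sy, sz = np.asarray(light, dtype=float) / np.linalg.norm(light)
    Z = np.zeros_like(image, dtype=float)  # flat initial depth

    for _ in range(n_iter):
        # Discrete surface gradients via backward differences
        # (np.roll wraps at the borders; fine for a sketch).
        p = Z - np.roll(Z, 1, axis=1)  # dZ/dx
        q = Z - np.roll(Z, 1, axis=0)  # dZ/dy
        norm = np.sqrt(1.0 + p**2 + q**2)
        shade = sz + p * sx + q * sy

        # Lambertian reflectance; self-shadowed pixels clamped to zero.
        R = np.maximum(shade / norm, 0.0)
        f = image - R  # per-pixel brightness error to drive to zero

        # df/dZ, using dp/dZ = dq/dZ = 1 for these differences.
        dR_dp = sx / norm - p * shade / norm**3
        dR_dq = sy / norm - q * shade / norm**3
        df_dZ = -(dR_dp + dR_dq)

        # One Newton step per pixel, guarding tiny derivatives.
        Z = Z - f / np.where(np.abs(df_dZ) > 1e-6, df_dZ, 1e-6)

    return Z
```

Running such a routine on each calibrated view would produce the partial depth maps that the SPPRNN stage then fits with its smooth projected polygon model; the paper's actual implementation details (initialization, boundary treatment, stopping criterion) are not specified in this record.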

Original language: English
Pages: II/305-II/308
Publication status: Published - 2002
Event: International Conference on Image Processing (ICIP'02) - Rochester, NY, United States
Duration: 22 Sept 2002 - 25 Sept 2002

Conference

Conference: International Conference on Image Processing (ICIP'02)
Country/Territory: United States
City: Rochester, NY
Period: 22/09/02 - 25/09/02
