In this paper, we present a neural-network learning scheme for face reconstruction. This scheme, which we call the Smooth Projected Polygon Representation Neural Network (SPPRNN), successively refines the vertex parameters of an initial 3D polygonal shape based on depth maps of several calibrated images taken from multiple views. The depth maps, obtained with the Tsai-Shah shape-from-shading (SFS) algorithm, can be regarded as partial 3D shapes of the face to be reconstructed. The reconstruction is finalized by mapping the texture of the face images onto the initial 3D shape. Three issues concerning the effectiveness of this scheme are investigated in this paper. First, how effectively SFS provides partial 3D shapes compared to using the 2D images directly. Second, how essential a smooth projected polygonal model is for approximating the face structure and improving the convergence rate of the scheme. Third, how an appropriate initial 3D shape should be selected and used to improve model resolution and learning stability. By carefully addressing these three issues, our experiments show that a compact and realistic 3D model of a human (mannequin) face can be obtained.
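The Tsai-Shah SFS algorithm referenced above recovers a relative depth map from a single shaded image by linearizing the Lambertian reflectance function and applying a per-pixel Newton update. The sketch below is a minimal illustration of that standard formulation, not the paper's implementation; the function name, the light-direction parameters `(ps, qs)`, and the iteration count are assumptions for illustration.

```python
import numpy as np

def tsai_shah_sfs(E, light=(0.01, 0.01), iters=50):
    """Estimate a relative depth map Z from a shaded image E using the
    Tsai-Shah linear-approximation scheme (Lambertian surface, known
    light direction assumed; parameter names are illustrative)."""
    ps, qs = light
    Z = np.zeros_like(E, dtype=float)
    for _ in range(iters):
        # Surface gradients p, q from backward finite differences.
        p = Z - np.roll(Z, 1, axis=1)
        q = Z - np.roll(Z, 1, axis=0)
        pq = 1.0 + p**2 + q**2
        pqs = 1.0 + ps**2 + qs**2
        # Lambertian reflectance map R(p, q), clamped at zero.
        R = np.maximum(0.0, (1 + p * ps + q * qs)
                       / (np.sqrt(pq) * np.sqrt(pqs)))
        # Brightness error f(Z) = E - R and its derivative w.r.t. Z
        # (dp/dZ = dq/dZ = 1 under the backward-difference scheme).
        f = E - R
        dR = ((ps + qs) / (np.sqrt(pq) * np.sqrt(pqs))
              - (p + q) * (1 + p * ps + q * qs)
              / (np.sqrt(pq**3) * np.sqrt(pqs)))
        df = -dR
        # Per-pixel Newton step; small epsilon guards division by zero.
        Z = Z - f / (df + 1e-8)
    return Z
```

Each pixel's update is independent, which is what lets the resulting depth maps serve as partial 3D shapes per view before the SPPRNN refinement stage.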
|Publication status||Published - 1 Jan 2002|
|Event||International Conference on Image Processing (ICIP'02) - Rochester, NY, United States|
Duration: 22 Sep 2002 → 25 Sep 2002