Technical Papers Fast-Forward
Dynamic Hair Modeling from Monocular Videos using Deep Neural Networks
Event Type: Technical Papers Fast-Forward
Time: Sunday, 17 November 2019, 19:48 - 19:50
Location: Great Hall 1&2
Description: We introduce a deep-learning-based framework for modeling dynamic hair from monocular videos, which can be captured by a commodity video camera or downloaded from the Internet. The framework consists of two network structures: HairSpatNet, for inferring 3D spatial features of hair geometry from 2D image features, and HairTempNet, for extracting temporal features of hair motion from video frames. The spatial features are represented as 3D occupancy fields depicting the hair shapes and 3D orientation fields indicating the hair strand directions. The temporal features are represented as bidirectional 3D warping fields describing the forward and backward motions of hair strands across adjacent frames. Both HairSpatNet and HairTempNet are trained with synthetic hair data. The spatial and temporal features predicted by the networks are subsequently used for growing hair strands with both spatial and temporal consistency. Experiments demonstrate that our method constructs high-quality dynamic hair models that resemble the input video as closely as those reconstructed by state-of-the-art multi-view methods, and compares favorably to previous single-view techniques.
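The abstract describes three volumetric representations: an occupancy field (where hair is), an orientation field (strand directions), and bidirectional warping fields (per-voxel motion between adjacent frames). As a minimal sketch of these ideas, assuming simple dense NumPy grids (the resolution, field shapes, and nearest-neighbor warping below are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

# Hypothetical sketch of the volumetric fields described in the abstract.
# Grid resolution and field layouts are assumptions for illustration only.
RES = 8  # voxel grid resolution (illustrative)

# 3D occupancy field: 1 where hair is present, 0 elsewhere.
occupancy = np.zeros((RES, RES, RES), dtype=np.float32)
occupancy[2:6, 2:6, 2:6] = 1.0

# 3D orientation field: one unit direction vector per voxel (here, all "up").
orientation = np.zeros((RES, RES, RES, 3), dtype=np.float32)
orientation[..., 2] = 1.0

# Forward warping field: per-voxel displacement (in voxels) toward the next
# frame; a backward field would point toward the previous frame. Here a
# uniform shift of one voxel along x stands in for hair motion.
forward_warp = np.zeros((RES, RES, RES, 3), dtype=np.float32)
forward_warp[..., 0] = 1.0

def warp_field(field, warp):
    """Advect a scalar voxel field by a per-voxel displacement
    (nearest-neighbor splatting; out-of-bounds voxels are dropped)."""
    out = np.zeros_like(field)
    idx = np.indices(field.shape).transpose(1, 2, 3, 0)  # (x, y, z) per voxel
    tgt = np.round(idx + warp).astype(int)
    valid = np.all((tgt >= 0) & (tgt < np.array(field.shape)), axis=-1)
    src, dst = idx[valid], tgt[valid]
    out[dst[:, 0], dst[:, 1], dst[:, 2]] = field[src[:, 0], src[:, 1], src[:, 2]]
    return out

# Predicting the next frame's occupancy by warping the current one:
next_occupancy = warp_field(occupancy, forward_warp)
```

After warping, the occupied block has moved one voxel along x, which is the kind of frame-to-frame consistency constraint the warping fields make possible when growing strands over time.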