[PAST EVENT] Colloquium: Deep Appearance Modeling
Access & Features
- Open to the public
Colloquium Speaker: Prof. Pieter Peers from Computer Science at William & Mary
In computer graphics, appearance modeling aims to digitize the appearance of real-world objects with complex material properties such that they can be revisualized from novel viewpoints and under novel lighting conditions with the correct reflectance behavior. In this talk, I will present recent advances in appearance modeling using deep learning.
Deep learning methods often require vast amounts of training data. In the case of appearance modeling, such training data consists of detailed reflectance maps that model local surface normal variations, diffuse color variations, and specular reflectance properties (e.g., roughness). In contrast to photographs, which can easily be mined from internet resources, no such online resources exist for reflectance maps, and obtaining them for a wide variety of materials is expensive and labor intensive. I will present two solutions that address this issue in the context of spatially-varying bidirectional reflectance distribution functions (SVBRDFs). The first solution is 'self-augmentation', where a coarse convolutional neural network is first trained on a small labeled training set and subsequently augmented with a vast collection of unlabeled images. In the second, I will show how a Generative Adversarial Network can model the distribution of higher-dimensional data from multiple uncorrelated lower-dimensional projections. I will demonstrate the application of such multi-projection GANs both for SVBRDF distribution modeling from photographs and for learning the shape distribution of a class of objects (e.g., chairs) from unannotated silhouette images.
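The self-augmentation loop described above can be sketched in miniature. This is a hedged illustration only: it replaces the convolutional network with a one-parameter linear predictor and the SVBRDF with a single scalar albedo under a toy renderer, so the names (`fit`, `predict`, `LIGHT`) and the noiseless linear model are all assumptions, not the paper's method. What it does show is the structure of the loop: train on a small labeled set, predict reflectance for unlabeled photographs, re-render those predictions, and fold the (rendering, prediction) pairs back in as extra training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumption): a material is one scalar parameter (albedo a),
# and an "image" is a single pixel rendered as pixel = a * LIGHT.
LIGHT = 2.0

def render(albedo):
    return albedo * LIGHT

# Stand-in for the coarse CNN: a single weight w with albedo ~ w * pixel,
# fit by least squares.
def fit(pixels, albedos):
    return float(np.dot(pixels, albedos) / np.dot(pixels, pixels))

def predict(w, pixels):
    return w * pixels

# 1) Train the coarse model on a small labeled set.
labeled_albedos = np.array([0.2, 0.5])
labeled_pixels = render(labeled_albedos)
w = fit(labeled_pixels, labeled_albedos)

# 2) Self-augmentation: predict reflectance for unlabeled images,
#    re-render the predictions, and treat (rendering, prediction)
#    as new labeled training pairs.
unlabeled_pixels = render(rng.uniform(0.1, 0.9, size=100))
pseudo_albedos = predict(w, unlabeled_pixels)
aug_pixels = render(pseudo_albedos)

# 3) Retrain on the combined labeled + augmented data.
all_pixels = np.concatenate([labeled_pixels, aug_pixels])
all_albedos = np.concatenate([labeled_albedos, pseudo_albedos])
w = fit(all_pixels, all_albedos)

print(predict(w, render(0.4)))  # prediction for a held-out material
```

In this noiseless toy the augmented pairs are already consistent with the model, so the sketch only demonstrates the data flow; the benefit in the real setting comes from the unlabeled images covering material variation absent from the small labeled set.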
Finally, another challenge common to most deep learning methods is that the number of inputs must be known at training time. This is problematic for appearance modeling, where the number of measurements of a real-world material from which we wish to infer the reflectance maps is often not known beforehand. I will present a novel appearance modeling solution that marries a well-established method from appearance modeling, namely inverse rendering, with deep learning, and directly optimizes the reflectance parameters in a learned space.
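The key property of inverse rendering noted above, that it handles any number of measurements because the loss simply sums over whatever observations exist, can be sketched as follows. This is a minimal illustration under stated assumptions: the "learned space" is replaced by a fixed sigmoid decoder (the real work uses a learned network), the material is a single Lambertian albedo, and `decode`, `ndotl`, and the hand-derived gradient are all hypothetical stand-ins.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stand-in "learned space" (assumption): a decoder mapping a latent code z
# to a physically valid albedo in (0, 1). The actual work learns this mapping.
def decode(z):
    return sigmoid(z)

# Lambertian renderer: pixel = albedo * max(0, n.l) per light direction.
def render(albedo, ndotl):
    return albedo * np.maximum(ndotl, 0.0)

# Observations: the count of measurements is arbitrary; the loss sums over
# however many there are, so it need not be fixed at training time.
ndotl = np.array([0.9, 0.7, 0.4, 0.2, 0.05])
true_albedo = 0.65
observed = render(true_albedo, ndotl)

# Inverse rendering: gradient descent on the latent code z so that the
# rendering of decode(z) matches the observations.
z = 0.0
lr = 0.5
for _ in range(2000):
    a = decode(z)
    residual = render(a, ndotl) - observed
    # chain rule: dL/dz = 2 * sum(residual * max(0, n.l)) * a * (1 - a)
    grad = 2.0 * np.dot(residual, np.maximum(ndotl, 0.0)) * a * (1.0 - a)
    z -= lr * grad

print(round(decode(z), 3))  # recovered albedo, close to 0.65
```

Optimizing z rather than the albedo directly is what the learned space buys: the decoder keeps the estimate inside the range of plausible materials regardless of how few or how many measurements constrain it.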