Deep Appearance Maps

conference contribution
posted on 2023-11-29, 18:11 authored by Maxim Maximov, Laura Leal-Taixe, Mario Fritz, Tobias Ritschel
We propose a deep representation of appearance, i.e., the relation of color, surface orientation, viewer position, material, and illumination. Previous approaches have used deep learning to extract classic appearance representations relating to reflectance model parameters (e.g., Phong) or illumination (e.g., HDR environment maps). We instead represent appearance itself directly as a network, which we call a Deep Appearance Map (DAM). This is a 4D generalization of 2D reflectance maps, which hold the view direction fixed. First, we show how a DAM can be learned from images or video frames and later used to synthesize appearance for new surface orientations and viewer positions. Second, we demonstrate how another network can map from an image or video frames directly to a DAM network that reproduces this appearance, without a lengthy optimization such as stochastic gradient descent (learning-to-learn). Finally, we show an example of a joint appearance estimation-and-segmentation task, mapping from an image showing multiple materials to multiple deep appearance maps.
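The core idea above, representing appearance itself as a network queried with surface orientation and view direction, can be sketched minimally as follows. This is a hypothetical illustration with assumed layer sizes and random weights, not the authors' architecture; here the 4D appearance query is encoded as two 3D unit vectors (normal and view direction) fed to a small MLP that outputs RGB.

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly initialized two-layer MLP: 6 inputs -> 32 hidden units -> 3 RGB
# channels. In practice a DAM's weights would be fit to images of a material.
W1, b1 = rng.normal(0.0, 0.5, (32, 6)), np.zeros(32)
W2, b2 = rng.normal(0.0, 0.5, (3, 32)), np.zeros(3)

def dam(normal, view):
    """Query the appearance map: (surface normal, view direction) -> RGB."""
    x = np.concatenate([normal, view])           # 6D appearance coordinate
    h = np.maximum(0.0, W1 @ x + b1)             # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))  # sigmoid keeps RGB in [0, 1]

# One query: a surface facing up, viewed from straight above.
rgb = dam(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0]))
```

Synthesizing appearance for a new viewpoint then amounts to evaluating this function over every visible surface point's normal and view direction; the learning-to-learn variant described above would have a second network emit the weights `W1, b1, W2, b2` directly from input frames.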


Preferred Citation

Maxim Maximov, Laura Leal-Taixe, Mario Fritz and Tobias Ritschel. Deep Appearance Maps. In: IEEE International Conference on Computer Vision (ICCV). 2019.

Primary Research Area

  • Trustworthy Information Processing

Name of Conference

IEEE International Conference on Computer Vision (ICCV)

Open Access Type

  • Gold


@inproceedings{cispa_all_2964,
  title     = "Deep Appearance Maps",
  author    = "Maximov, Maxim and Leal-Taixe, Laura and Fritz, Mario and Ritschel, Tobias",
  booktitle = "{IEEE International Conference on Computer Vision (ICCV)}",
  year      = "2019",
}
