2017
DOI: 10.1167/17.12.5
A parametric texture model based on deep convolutional features closely matches texture appearance for humans

Abstract: Our visual environment is full of texture ("stuff" like cloth, bark, or gravel, as distinct from "things" like dresses, trees, or paths), and humans are adept at perceiving subtle variations in material properties. To investigate image features important for texture perception, we psychophysically compare a recent parametric model of texture appearance (convolutional neural network [CNN] model) that uses the features encoded by a deep CNN (VGG-19) with two other models: the venerable Portilla and Simoncelli model …

Cited by 48 publications (27 citation statements)
References 84 publications (91 reference statements)
“…The LOO is a measure of a model's predictive accuracy. In the following comparisons, we report the model-specific LOO information criterion (LOOIC) and the LOOIC difference between the two models, as well as their standard errors of the LOOIC (see also Wallis et al, 2017, for a similar procedure). The LOOIC estimates the expected log pointwise predictive density.…”
Section: Methods
confidence: 99%
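The LOOIC comparison described in this excerpt can be sketched concretely. Below is a minimal numpy illustration of plain importance-sampling LOO; the cited analyses use the Pareto-smoothed variant (PSIS-LOO, as in Stan's `loo` package), so this is a simplified sketch, and all function names here are illustrative, not from the paper.

```python
import numpy as np

def logmeanexp(x, axis=0):
    """Numerically stable log of the mean of exp(x) along an axis."""
    m = np.max(x, axis=axis)
    return m + np.log(np.mean(np.exp(x - m), axis=axis))

def loo_ic(log_lik):
    """Plain importance-sampling LOO (no Pareto smoothing).

    log_lik: (S, N) array of pointwise log-likelihoods over
    S posterior draws and N observations.
    Returns (LOOIC, its standard error, pointwise elpd).
    """
    # IS-LOO: elpd_i = -log mean_s exp(-log p(y_i | theta_s))
    elpd_i = -logmeanexp(-log_lik, axis=0)          # shape (N,)
    n = elpd_i.size
    looic = -2.0 * elpd_i.sum()                     # deviance scale
    se = 2.0 * np.sqrt(n * elpd_i.var())            # SE from pointwise spread
    return looic, se, elpd_i

def looic_difference(log_lik_a, log_lik_b):
    """LOOIC difference (model A minus model B) and its standard error,
    computed from the pointwise elpd differences."""
    _, _, elpd_a = loo_ic(log_lik_a)
    _, _, elpd_b = loo_ic(log_lik_b)
    diff_i = elpd_a - elpd_b
    n = diff_i.size
    return -2.0 * diff_i.sum(), 2.0 * np.sqrt(n * diff_i.var())
```

Reporting both the per-model LOOIC and the standard error of the pairwise difference (rather than comparing the two standard errors separately) is what lets the difference be judged against its own uncertainty.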
“…During the ensuing DS phase, the fully trained DNN is made to synthesize one or more new images that capture the statistics of the training images. The specific method we used ( Gatys et al, 2015 , 2017 ; Wallis et al, 2017 ) further receives one additional “sample” image and synthesizes counterpart images, i.e., images that capture the statistics of the sample image specifically, rather than the general statistics of all images in the training set.…”
Section: Methods
confidence: 99%
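The texture statistics this synthesis method matches (Gatys et al., 2015) are Gram matrices of convolutional feature maps: channel-by-channel inner products that discard spatial arrangement while keeping feature co-occurrence. A minimal numpy sketch of that statistic and the associated matching loss, with a random array standing in for real VGG-19 activations (all names illustrative):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map: inner products between
    channel responses, normalized by the number of spatial positions."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def texture_loss(gram_target, gram_synth):
    """Mean squared difference between Gram matrices; synthesis
    iteratively adjusts the synthesized image's pixels (via gradient
    descent through the network) to drive this toward zero."""
    return float(np.mean((gram_target - gram_synth) ** 2))
```

In the full method this loss is summed over several network layers, so the synthesized counterpart image matches the sample image's statistics at multiple scales rather than the average statistics of the training set.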
“…The DS method we used is based on previous work by others ( Gatys et al, 2015 , 2017 ; Wallis et al, 2017 ). In this method, the fully trained DNN was presented with a series of additional, actual PVMs (“original mammograms”).…”
Section: Methods
confidence: 99%
“…As a result, it represents all levels of complexity of the hyperspectral data within a single model. For example, Gatys et al introduced the concept of a nonlinear filter bank using convolutional neural networks to describe texture in images (Gatys et al., 2015; Wallis et al., 2017).…”
Section: Accepted Manuscript
confidence: 99%