Proceedings of the British Machine Vision Conference 2009
DOI: 10.5244/C.23.115

Learning generative texture models with extended Fields-of-Experts

Abstract: We evaluate the ability of the popular Fields-of-Experts (FoE) model to capture structure in images. As a test case we focus on modeling synthetic and natural textures. We find that even for modeling single textures, the FoE provides insufficient flexibility to learn good generative models; it performs no better than the much simpler Gaussian FoE. We propose an extended version of the FoE (allowing for bimodal potentials) and demonstrate that this novel formulation, when trained with a better approximation of …
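For context, a standard FoE (in the form popularized by Roth and Black) models an image x as a product of expert potentials applied to linear filter responses over all cliques. The sketch below uses the usual Student-t experts as the baseline; this is an assumption for orientation, not necessarily the exact parameterization evaluated in the paper:

```latex
% FoE density over image x with learned filters J_k, cliques c, partition Z.
p(\mathbf{x}) = \frac{1}{Z(\Theta)} \prod_{c} \prod_{k=1}^{K}
    \phi_k\!\left(\mathbf{J}_k^{\top} \mathbf{x}_{(c)}\right),
\qquad
\phi_k(y) = \left(1 + \tfrac{1}{2} y^2\right)^{-\alpha_k}
\;\;\text{(Student-t)},
\qquad
\phi_k(y) = e^{-\alpha_k y^2}
\;\;\text{(Gaussian FoE)}.
```

Both potentials are unimodal, with minimum energy at zero filter response; the bimodal potentials proposed in the paper instead place energy minima at two nonzero responses, which is the extra flexibility claimed for texture modeling.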

Cited by 48 publications (30 citation statements)
References 17 publications (34 reference statements)

“…Concerning the learned penalty function (d), as it has local minima at two specific points, it prefers specific image structure, implying that it helps to form certain image structure. We also find that this penalty function is exactly the type of bimodal expert functions for texture synthesis employed in [21].…”
Section: Image Denoising Experiments (mentioning)
confidence: 65%
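The "local minima at two specific points" noted in this statement is the defining feature of a bimodal expert. As a minimal, hypothetical illustration (not the learned function from the cited work), the negative log of a two-component Gaussian mixture yields a penalty with minima near ±mu:

```python
import numpy as np

def bimodal_penalty(y, mu=1.0, sigma=0.5):
    """Toy bimodal penalty: negative log of a two-component Gaussian
    mixture, with local minima near y = -mu and y = +mu. Illustrative
    only; not the learned penalty function from the cited work."""
    return -np.log(np.exp(-(y - mu) ** 2 / (2 * sigma ** 2))
                   + np.exp(-(y + mu) ** 2 / (2 * sigma ** 2)))

y = np.linspace(-3.0, 3.0, 13)
print(np.round(bimodal_penalty(y), 3))  # dips near y = -1 and y = +1
```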
“…In practice, (21) is computed in a backward manner, starting from the last stage. Now the only thing we need to calculate is ∂u_{t+1}/∂u_t.…”
Section: Joint Training (mentioning)
confidence: 99%
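The backward pass this statement describes is ordinary reverse-mode chain-rule accumulation through the stages u_{t+1} = f(u_t, theta_t). A minimal sketch, assuming the per-stage Jacobians are available; function and variable names here are illustrative, not from the cited paper:

```python
import numpy as np

def backprop_through_stages(grad_loss_uT, jac_u, jac_theta):
    """Accumulate dL/dtheta_t for t = 0..T-1, starting from the last stage.

    grad_loss_uT : dL/du_T, shape (n,)
    jac_u[t]     : du_{t+1}/du_t, shape (n, n)
    jac_theta[t] : du_{t+1}/dtheta_t, shape (n, p)
    """
    grads = [None] * len(jac_u)
    g = grad_loss_uT                 # current dL/du_{t+1}
    for t in reversed(range(len(jac_u))):
        grads[t] = g @ jac_theta[t]  # dL/dtheta_t via the chain rule
        g = g @ jac_u[t]             # dL/du_t, passed one stage back
    return grads

# Tiny usage example with random Jacobians (n = 3 pixels, p = 2 parameters).
rng = np.random.default_rng(0)
jac_u = [rng.standard_normal((3, 3)) for _ in range(2)]
jac_theta = [rng.standard_normal((3, 2)) for _ in range(2)]
print(backprop_through_stages(np.ones(3), jac_u, jac_theta))
```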
“…Concerning the learned penalty function (d), as it has local minima at two specific points, it prefers specific image structure, implying that it helps to form certain image structure. We also find that this penalty function is exactly the type of bimodal expert functions for texture synthesis employed in [30].…”
Section: Learned Influence Functions (mentioning)
confidence: 65%
“…Therefore the interaction structure is learned as filter coefficients. FoE was extended to the bimodal FoE (BiFoE), which uses more informative bimodal potentials, and successfully applied to texture modelling by Heess et al. [9]; several state-of-the-art generative texture models have been built on BiFoE, some using various configurations of hidden variables. Kivinen and Williams [19] improved on BiFoE by using gated MRFs [20], and Luo et al. [21] investigated convolutional deep belief networks (DBNs) and spike-and-slab potential functions.…”
Section: High-order MGRF Models (mentioning)
confidence: 99%
“…
Efros & Leung [42]   0.85 ± 0.03   0.86 ± 0.03   0.86 ± 0.06   0.60 ± 0.08
TmPoT [19]           0.86 ± 0.02   0.87 ± 0.01   0.86 ± 0.02   0.77 ± 0.03
TssRBM [21]          0.89 ± 0.02   0.91 ± 0.01   0.92 ± 0.02   0.76 ± 0.03
2-layer DBN [21]     0.89 ± 0.03   0.91 ± 0.02   0.92 ± 0.03   0.77 ± 0.02
cGRBMs [66]          0.91 ± 0.02   0.93 ± 0.01   0.93 ± 0.01   0.78 ± 0.03
GLD 2                0.66 ± 0.15   0.90 ± 0.04   0.86 ± 0.08   0.78 ± 0.03
Linear filters       0.77 ± 0.02   0.49 ± 0.13   0.89 ± 0.04   0.58 ± 0.06
Combined BP 5        0.80 ± 0.08   0.87 ± 0.06   0.92 ± 0.03   0.81 ± 0.02
Conjoined BP 9       0.77 ± 0.10   0.86 ± 0.06   0.90 ± 0.03   0.80 ± 0.03
Jag-star BP 9        0.84 ± 0.08   0.86 ± 0.07   0.91 ± 0.03   0.80 ± 0.03
Jag-star BP 13       0.73 ± 0.11   0.90 ± 0.05   0.91 ± 0.03   0.79 ± 0.03

…used filters as features. A stopping rule for the nesting procedure which is robust to the highly variable quality of CSA samples would also be important if more feature sets are used, to keep the number of nesting iterations low.…”
Section: D21, D53, D77 (mentioning)
confidence: 99%