2015 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2015.45
Naive Bayes Super-Resolution Forest

Abstract: This paper presents a fast, high-performance method for super resolution with external learning. The first contribution leading to the excellent performance is a bimodal tree for clustering, which successfully exploits the antipodal invariance of the coarse-to-high-res mapping of natural image patches and provides scalability to finer partitions of the underlying coarse patch space. During training an ensemble of such bimodal trees is computed, providing different linearizations of the mapping. The second and …
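The antipodal invariance the abstract exploits (a patch x and its negation -x share the same coarse-to-high-res mapping up to sign) can be sketched as a sign normalization applied before clustering. This is a minimal illustrative sketch, not the paper's implementation; `antipodal_normalize` and the reference direction `ref` are assumed names standing in for a learned split direction:

```python
import numpy as np

def antipodal_normalize(patch, ref):
    """Flip a patch's sign so it correlates non-negatively with `ref`.

    Under antipodal invariance, x and -x describe the same structure
    (their low-res/high-res mappings differ only by sign), so both can
    be routed to the same node of a bimodal tree. `ref` is an
    illustrative stand-in for a learned split direction.
    """
    s = 1.0 if float(patch @ ref) >= 0.0 else -1.0
    return s * patch, s

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
ref = rng.standard_normal(16)
nx, _ = antipodal_normalize(x, ref)
nmx, _ = antipodal_normalize(-x, ref)
# x and -x collapse to one representative, so the tree effectively
# covers twice as many patches with the same number of partitions.
assert np.allclose(nx, nmx)
```

Collapsing antipodal pairs is what lets the bimodal tree scale to finer partitions of the coarse patch space without doubling the number of clusters.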

Cited by 117 publications (78 citation statements) | References 28 publications
“…For SISR, we study dictionary and deep learning methods. The dictionary methods are example-based ridge regression (EBSR) [9], sparse coding (ScSR) [77], Naive Bayes SR forests (NBSRF) [10], and adjusted anchored neighborhood regression (A+) [12]. The deep learning methods comprise CNNs (SRCNN) [13], very deep networks (VDSR) [14], and deeply-recursive networks (DRCN) [15].…”
Section: Evaluated Algorithms
confidence: 99%
“…In SISR, the use of external training data outperforms self-exemplar approaches [8]. Surprisingly, depending on the quality measure, popular deep nets [13], [14], [15] do not clearly outperform classical methods [9], [10], [12], [77]. This contrasts with benchmarks on simulated data, where deep nets perform best.…”
Section: Remarks On the Quantitative Study
confidence: 99%
“…We differentiate between the two very different paradigms of prior-based single-image SR and redundancy-based multi-image SR. Prior-based single-image SR. The goal of single-image SR is to fill in HR image patterns by leveraging a prior derived either from similar patches in other parts of the input image (self-exemplars) [14,21], or from similar image patches from an existing image database [13], or, most commonly, from previously seen training data [25,48,53,54]. Recently the technologies of choice for learning the prior have been (deep) convolutional networks [10,64,46,22,66,49] and generative adversarial networks [31,60,42,61,59].…”
Section: Related Work
confidence: 99%
“…Other advances within this family of algorithms include the Naive Bayes SR Forest [18], which uses an ensemble of bimodal trees in order to benefit from antipodal invariance. It is also highly competitive in speed thanks to the naive Bayes selection procedure, which applies a single regressor per patch, differently from other forest approaches, e.g.…”
Section: B. State Of the Art
confidence: 99%
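The single-regressor-per-patch selection highlighted in that last statement can be sketched as follows. Everything here, the one-level stump structure, the margin-based scoring, and all names, is a simplifying assumption for illustration, not the paper's actual procedure:

```python
import numpy as np

class BimodalStump:
    """One-level tree with a sign-invariant split on |x . w|."""
    def __init__(self, w, thresh, regressors):
        self.w = w                    # split direction
        self.thresh = thresh          # threshold on |x . w|
        self.regressors = regressors  # one linear map per leaf (2 leaves)

    def leaf_index(self, x):
        # The |.| makes routing antipodally invariant: x and -x
        # always reach the same leaf.
        return int(abs(float(x @ self.w)) > self.thresh)

    def margin(self, x):
        # Confidence of the routing decision for this patch.
        return abs(abs(float(x @ self.w)) - self.thresh)

def predict(forest, x):
    """Apply a single regressor per patch: pick the tree whose split
    is most confident for x and use only that tree's leaf regressor,
    instead of averaging predictions over the whole forest."""
    best = max(forest, key=lambda t: t.margin(x))
    return best.regressors[best.leaf_index(x)] @ x

rng = np.random.default_rng(1)
d = 8
forest = [
    BimodalStump(rng.standard_normal(d), 0.5,
                 [rng.standard_normal((d, d)) for _ in range(2)])
    for _ in range(4)
]
x = rng.standard_normal(d)
y = predict(forest, x)
assert y.shape == (d,)
# Antipodal pair: same leaf, same regressor, output flips sign with x.
assert np.allclose(predict(forest, -x), -y)
```

Selecting one regressor rather than averaging across trees is what keeps the per-patch cost close to a single tree traversal plus one matrix-vector product, which is the source of the speed advantage the citing paper notes.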