2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.01257
Scalable Convolutional Neural Network for Image Compressed Sensing

Cited by 131 publications (118 citation statements)
References 29 publications
“…In order to estimate the model parameters of the proposed average codeword length model and the relative PSNR model, 100 images in the BSDS500 dataset [35] were randomly selected for training, and the BSD68 dataset [36] was used for testing, each image being cropped to a 256 × 256 size. During training, the quantization bit-depth took eight values in {3, 4, .…”
Section: Model Parameter Estimation for the Bit-Rate Model and the Relative PSNR Model (mentioning)
confidence: 99%
“…Specifically, our method outperforms NLR-CS and BM3D-CS by 2.29 dB and 1.95 dB on the Waterloo140 dataset at a sampling ratio of 0.10. We further compare our method with some advanced deep-based CS image reconstruction methods (including ISTA-Net+ [11], CSNet+ [46], SCSNet [15], OPINE-Net+ [47], AMP-Net [13], and LDAMP [18]) on the Set8 and Waterloo140 datasets. Table III provides the PSNR values of the competing methods for every image in Set8, and Table IV lists the average PSNR results of seven classes of Waterloo140.…”
Section: B. Comparison With State-of-the-Art Methods (mentioning)
confidence: 99%
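
The comparisons quoted above are all reported in PSNR. As a brief aside, here is a minimal sketch of how PSNR between a reference image and its reconstruction is typically computed; it assumes 8-bit images with a peak value of 255, and the function name and toy data are illustrative rather than taken from any of the cited papers.

```python
import numpy as np

def psnr(reference, reconstruction, peak=255.0):
    """Peak signal-to-noise ratio in dB (assumes an 8-bit peak of 255 by default)."""
    mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# toy usage: a random 256x256 "image" and a noisy copy of it
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (256, 256)).astype(np.float64)
rec = np.clip(ref + rng.normal(0.0, 5.0, ref.shape), 0, 255)
print(f"{psnr(ref, rec):.2f} dB")
```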
“…1, a model-fixed end-to-end approach cannot reconstruct the image accurately once the measurement loss rate increases to some level. SCSNet [15] achieves a scalable CS end-to-end net by using a greedy strategy to search the most important measurement bases. However, SCSNet still needs to update the network parameters for different sampling ratios, and the complexity of greedy searching is no less than that of retraining the model at a high sampling ratio.…”
Section: Introduction (mentioning)
confidence: 99%
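
The statement above attributes SCSNet's scalability to a greedy search for the most important measurement bases. The sketch below only illustrates the general idea of greedily ranking the rows of a sampling matrix by how much each additional row reduces reconstruction error; it uses a plain least-squares decoder as a stand-in, so it is an assumed illustration of the concept, not SCSNet's actual procedure, which operates on its learned network.

```python
import numpy as np

def greedy_rank_bases(Phi, X, k):
    """Greedily order the rows (measurement bases) of a sampling matrix Phi by how
    much each additional row reduces least-squares reconstruction error on a set of
    validation signals X (columns are signals). Returns the indices of the first k
    selected rows, most important first."""
    selected, remaining = [], list(range(Phi.shape[0]))
    for _ in range(k):
        best_err, best_j = np.inf, None
        for j in remaining:
            A = Phi[selected + [j]]           # candidate sub-sampling matrix
            Y = A @ X                         # measurements of the validation signals
            X_hat = np.linalg.pinv(A) @ Y     # least-squares reconstruction
            err = np.mean((X - X_hat) ** 2)
            if err < best_err:
                best_err, best_j = err, j
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# toy usage: 32-dimensional signals, 16 candidate bases, keep the 4 most useful ones
rng = np.random.default_rng(0)
Phi = rng.standard_normal((16, 32))
X = rng.standard_normal((32, 100))
print(greedy_rank_bases(Phi, X, 4))
```

The quoted criticism also applies to this toy version: each greedy step re-evaluates every remaining basis, so the search cost grows quickly with the number of candidate bases.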
“…(2) will become inefficient and even impractical if the dimension of the dictionary is high or the size of the training dataset is very large [13], [16]. Lately, some deep networks have been developed to jointly optimize the sampling matrix and the non-linear recovery operator [17], [18], [19], [45], [20], [46]. In particular, Adler et al. propose to utilize a fully-connected network to perform both the block-based linear sensing and non-linear reconstruction stages.…”
Section: Related Work (mentioning)
confidence: 99%
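
The excerpt above describes networks that jointly learn the sampling matrix and the non-linear recovery operator, with Adler et al. using a fully-connected network for both block-based sensing and reconstruction. A minimal PyTorch sketch of that general idea follows; the class name, layer sizes, and sampling ratio are illustrative assumptions, not the published configuration of any of the cited methods.

```python
import torch
import torch.nn as nn

class BlockCSNet(nn.Module):
    """Sketch of joint sensing/reconstruction: a bias-free linear layer plays the
    role of the learned block-based sampling matrix, and a small fully-connected
    decoder maps the measurements back to the image block."""
    def __init__(self, block_dim=33 * 33, sampling_ratio=0.10):
        super().__init__()
        m = max(1, int(block_dim * sampling_ratio))
        self.sample = nn.Linear(block_dim, m, bias=False)   # learned sampling matrix
        self.recover = nn.Sequential(                        # non-linear recovery operator
            nn.Linear(m, block_dim), nn.ReLU(),
            nn.Linear(block_dim, block_dim),
        )

    def forward(self, x_blocks):            # x_blocks: (batch, block_dim)
        y = self.sample(x_blocks)           # block-based linear sensing
        return self.recover(y)              # non-linear reconstruction

# both stages are trained end-to-end with a simple reconstruction loss
model = BlockCSNet()
x = torch.rand(8, 33 * 33)
loss = nn.functional.mse_loss(model(x), x)
loss.backward()
```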