2022
DOI: 10.1155/2022/7088137
Medical Image Compression Based on Variational Autoencoder

Abstract: With the rapid growth of medical image data, how to upload and store large volumes of medical images in real time under limited network bandwidth and storage space has become a research hotspot. However, current medical image compression techniques cannot jointly optimize rate (the degree of compression) and distortion (reconstruction quality). This study therefore proposes a medical image compression algorithm based on a variational autoencoder. This algorithm …
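The abstract describes joint optimization of rate and distortion. A minimal sketch of what such a training objective typically looks like for a VAE-based compressor, assuming a Gaussian latent posterior and a hypothetical trade-off weight `lam` (neither of which is taken from the paper itself):

```python
import numpy as np

def rate_distortion_loss(x, x_hat, mu, log_var, lam=0.01):
    """Joint rate-distortion objective for a VAE-based image compressor.

    Distortion D: mean squared reconstruction error between the input
    image x and its reconstruction x_hat.
    Rate R: KL divergence of the latent posterior N(mu, sigma^2) from
    the standard normal prior, a common proxy for the bits needed to
    encode the latent code.
    `lam` is an illustrative trade-off hyperparameter, not a value
    reported in the paper.
    """
    distortion = np.mean((x - x_hat) ** 2)
    rate = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))
    return rate + lam * distortion
```

Minimizing this single scalar trades the two terms off against each other: a larger `lam` favors reconstruction quality, a smaller one favors compression.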

Cited by 12 publications (3 citation statements)
References 36 publications
“…Liu et al [11] devised a medical image compression technique that effectively balances compression and reconstruction. By leveraging a residual network module and a variational autoencoder, they concurrently optimize the rate and distortion.…”
Section: Related Work
confidence: 99%
“…We also include the latent representation from a β-Variational Autoencoder (β-VAE) [15] as a more sophisticated baseline. Deep autoencoder models have been explored as tools to learn better image and video compression algorithms for technological applications [16,17], as well as to model human visual memory [18][19][20]. In addition to baseline models, we consider networks trained on the ILSVRC ImageNet classification challenge (both the 1,000-way and 22,000-way versions) and networks trained on the Contrastive Language-Image Pre-training (CLIP) objective [21].…”
Section: Continuous Report With Natural Images
confidence: 99%
“…With the image dimension compression or reduction process, the resulting DNA sequence will have a much smaller number compared to images that are not compressed. Image compression using a generative model has been carried out by Liu et al [55] with predetermined input images.…”
Section: RQ5 How Can Generative Models Be Implemented In Compression ...
confidence: 99%