2023
DOI: 10.1016/j.optlaseng.2023.107503
Resolution and contrast enhancement in weighted subtraction microscopy by deep learning

Cited by 1 publication (1 citation statement)
References 25 publications
“…Spatial unwrapping techniques such as minimum-norm and quality-guided methods have limitations in handling phase discontinuities and disjoint regions, whereas temporal techniques require multiple frames or multi-frequency fringes [5]. Deep learning is gaining traction in various fields such as microscopy, holography, super-resolution imaging, optical image encryption, interferometry, natural language processing (NLP), facial recognition, autonomous vehicles, medical image analysis, drug discovery, disease diagnosis, and treatment recommendation [7][8][9][10][11]. The advent of newer and more complex architectures, enabled by the availability of hardware such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs), has improved the performance of deep learning in various fields [12].…”
Section: Introduction (mentioning)
Confidence: 99%