2020
DOI: 10.1038/s42256-020-0188-z

Augmenting vascular disease diagnosis by vasculature-aware unsupervised learning

Abstract: Vascular diseases are among the leading causes of death and threaten human health worldwide. Imaging examination of vascular pathology with reduced invasiveness is challenging due to the intrinsic vasculature complexity and the non-uniform scattering from bio-tissues. Here, we report VasNet, a vasculature-aware unsupervised learning algorithm that augments pathovascular recognition from small sets of unlabeled fluorescence and digital subtraction angiography (DSA) images. The VasNet adopts the multi-scale fusion…
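The abstract is cut off above, so the exact fusion design cannot be recovered from this page. Purely as an illustration of what a multi-scale fusion block can look like, the sketch below downsamples the input at several scales, extracts features at each scale, and fuses the upsampled results; the module name MultiScaleFusion, the channel counts, and the pooling scales are assumptions, not details of the published VasNet.

```python
# Illustrative sketch only: a generic multi-scale feature-fusion block.
# This is NOT the published VasNet architecture (the abstract above is
# truncated); names and hyperparameters here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    """Extract features at several resolutions and fuse them."""
    def __init__(self, in_ch=1, feat_ch=16, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, feat_ch, kernel_size=3, padding=1) for _ in scales]
        )
        self.fuse = nn.Conv2d(feat_ch * len(scales), feat_ch, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = []
        for s, conv in zip(self.scales, self.branches):
            # Downsample, convolve, then upsample back to the input size.
            xs = F.avg_pool2d(x, kernel_size=s) if s > 1 else x
            f = F.relu(conv(xs))
            feats.append(F.interpolate(f, size=(h, w), mode="bilinear",
                                       align_corners=False))
        return self.fuse(torch.cat(feats, dim=1))

if __name__ == "__main__":
    angiogram = torch.randn(1, 1, 128, 128)   # dummy single-channel image
    fused = MultiScaleFusion()(angiogram)
    print(fused.shape)                        # torch.Size([1, 16, 128, 128])
```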

Cited by 15 publications (8 citation statements)
References 54 publications

“…Frameworks that can optimize DNNs without strict reliance on large and high-quality annotated training datasets can significantly narrow the gap between research and clinical practice. Recent deep-learning works have presented encouraging performance in handling certain types of imperfect datasets in isolation 41-45, but have not yet shown general applicability by addressing all three types. Methods such as data augmentation 41,46, transfer learning 42,43, semi-supervised learning 44,47, and self-supervised learning 48,49 have been extensively investigated to handle cases with limited training annotations or no target-domain annotations. By contrast, much less attention has been given to noisy-label learning in medical imaging 16,50.…”
Section: Discussion (mentioning)
confidence: 99%
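The statement above lists data augmentation among the strategies for coping with limited annotations. As a minimal sketch that is not taken from any of the cited works, the snippet below expands one annotated image-mask pair into eight label-preserving copies using flips, right-angle rotations, and mild intensity jitter; the function name augment and the jitter range are illustrative choices.

```python
# Minimal, hypothetical example of label-preserving augmentation for a small
# annotated set: geometric transforms are applied identically to image and
# mask, while intensity jitter touches only the image.
import numpy as np

def augment(image: np.ndarray, mask: np.ndarray, rng: np.random.Generator):
    """Yield flipped / rotated / intensity-jittered copies of an image-mask pair."""
    for k in range(4):                          # 0, 90, 180, 270 degree rotations
        img_r, msk_r = np.rot90(image, k), np.rot90(mask, k)
        for flip in (False, True):
            img_f = np.fliplr(img_r) if flip else img_r
            msk_f = np.fliplr(msk_r) if flip else msk_r
            gain = rng.uniform(0.9, 1.1)        # mild multiplicative jitter
            yield np.clip(img_f * gain, 0.0, 1.0), msk_f

rng = np.random.default_rng(0)
image = rng.random((64, 64))                    # dummy grayscale angiography frame
mask = (image > 0.7).astype(np.uint8)           # dummy vessel annotation
pairs = list(augment(image, mask, rng))
print(len(pairs))                               # 8 augmented copies per original pair
```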
“…We have shown two possible ways to obtain such auxiliary masks: using a parametric shape model to generate a set of auxiliary masks for simple structures such as the optic disc and the fetal head, and taking advantage of masks of the object from another domain (e.g., public datasets) for complex structures such as the lung and the liver. For more complex structures such as the brain and vessels [71], it might be more challenging to leverage existing unpaired labels from a different dataset for shape constraint. The effectiveness of our method in such cases will be investigated in the future.…”
Section: Discussion (mentioning)
confidence: 99%
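To make the parametric shape model mentioned above concrete, here is a minimal sketch assuming the simplest such model: auxiliary masks for roughly elliptical structures (optic disc, fetal head) are produced by sampling ellipse parameters rather than by manual annotation. The function names and parameter ranges are hypothetical and are not taken from the cited paper.

```python
# Hypothetical parametric shape model: random rotated ellipses serve as
# auxiliary binary masks for roughly elliptical anatomical structures.
import numpy as np

def elliptical_mask(h, w, cx, cy, rx, ry, angle):
    """Binary mask of a rotated ellipse centred at (cx, cy)."""
    ys, xs = np.mgrid[0:h, 0:w]
    x, y = xs - cx, ys - cy
    c, s = np.cos(angle), np.sin(angle)
    xr, yr = c * x + s * y, -s * x + c * y       # rotate into the ellipse frame
    return ((xr / rx) ** 2 + (yr / ry) ** 2 <= 1.0).astype(np.uint8)

def sample_auxiliary_masks(n, h=128, w=128, seed=0):
    """Draw n random elliptical masks from the parametric shape model."""
    rng = np.random.default_rng(seed)
    masks = []
    for _ in range(n):
        cx, cy = rng.uniform(0.3, 0.7, size=2) * (w, h)
        rx, ry = rng.uniform(0.1, 0.3, size=2) * min(h, w)
        masks.append(elliptical_mask(h, w, cx, cy, rx, ry,
                                     rng.uniform(0, np.pi)))
    return np.stack(masks)

aux = sample_auxiliary_masks(16)
print(aux.shape, aux.dtype)                      # (16, 128, 128) uint8
```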
“…Deep learning has revolutionized fields such as computer vision and natural language processing. 19 In the chemical and biological sciences, artificial intelligence is reshaping research methodologies for applications such as chemical exploration, 20,21 prediction of chemical reactivity 22 and reaction performance, 23 prediction and design of synthetic 24,25 and retrosynthetic routes, 26 automated reaction optimization, 27 accelerated materials discovery, 28-30 materials characterization, 31 label-free cell classification, 32,33 detection of DNA modifications, 34 study of oncogenic differentiation, 35 prediction of protein structure, 36 and diagnosis of medical images, 37,38 providing scientists with powerful tools for feature extraction, classification, and prediction.…”
Section: Progress and Potential (mentioning)
confidence: 99%