2020
DOI: 10.3390/app10186427

Disentangled Autoencoder for Cross-Stain Feature Extraction in Pathology Image Analysis

Abstract: A novel deep autoencoder architecture is proposed for the analysis of histopathology images. Its purpose is to produce a disentangled latent representation in which the structure and colour information are confined to different subspaces so that stain-independent models may be learned. For this, we introduce two constraints on the representation which are implemented as a classifier and an adversarial discriminator. We show how they can be used for learning a latent representation across haematoxylin-eosin and…
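The abstract above describes two constraints on the latent space: a classifier that ties the colour subspace to the stain label, and an adversarial discriminator that pushes stain information out of the structure subspace. The following is a minimal, illustrative PyTorch sketch of that idea, not the authors' implementation; all layer sizes, loss weights, and the toy stain labels are assumptions.

```python
# Illustrative sketch (not the paper's code): a disentangled autoencoder whose latent
# vector is split into a structure subspace (z_struct) and a colour/stain subspace
# (z_colour). A stain classifier operates on z_colour, while an adversary tries to
# predict the stain from z_struct and the encoder is trained to fool it.
import torch
import torch.nn as nn

class DisentangledAE(nn.Module):
    def __init__(self, z_struct=64, z_colour=16, n_stains=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, z_struct + z_colour),
        )
        self.decoder = nn.Sequential(
            nn.Linear(z_struct + z_colour, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        self.z_struct = z_struct
        # classifier: the stain label should be predictable from the colour subspace
        self.stain_clf = nn.Linear(z_colour, n_stains)
        # adversary: the stain label should NOT be predictable from the structure subspace
        self.stain_adv = nn.Linear(z_struct, n_stains)

    def forward(self, x):
        z = self.encoder(x)
        zs, zc = z[:, :self.z_struct], z[:, self.z_struct:]
        return self.decoder(z), zs, zc

model = DisentangledAE()
x = torch.rand(8, 3, 64, 64)           # batch of 64x64 RGB patches (toy data)
stain = torch.randint(0, 2, (8,))      # toy stain labels, e.g. 0 = H&E, 1 = other
recon, zs, zc = model(x)
ce = nn.CrossEntropyLoss()
# Single combined loss for illustration only; in practice the adversary would be
# updated in a separate alternating step or via a gradient-reversal layer.
loss = (nn.functional.mse_loss(recon, x)
        + ce(model.stain_clf(zc), stain)         # keep stain info in the colour subspace
        - 0.1 * ce(model.stain_adv(zs), stain))  # encourage the encoder to fool the adversary
loss.backward()
```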

Cited by 9 publications (7 citation statements)
References 28 publications
“…Subsequently, numerous researchers (Adhikari et al, 2019; Badrinarayanan et al, 2017; Kolhar and Jagtap, 2021; Milioto et al, 2018; Peng et al, 2019) used convolutional encoder-decoder networks for semantic segmentation. The Visual Geometry Group (VGG) (Simonyan and Zisserman, 2015), ResNet (He et al, 2015), and InceptionV3 (Szegedy et al, 2015) networks achieve top-5 accuracies of 92.7%, 93.3%, and 93.9%, respectively, on the ImageNet dataset (Deng et al, 2009), which demonstrates their strong feature-extraction ability; they are therefore often used (Gao et al, 2020; Hecht et al, 2020; Majeed et al, 2018; Ou et al, 2019; Panda et al, 2022; Shah et al, 2022; Zou et al, 2021) as backbones in CNN architectures developed for semantic segmentation. Accordingly, the performance of the proposed models in this study was compared with the above-mentioned models using these networks as the encoder backbone.…”
Section: Comparison With the State-of-the-Art Networks (citation type: mentioning)
confidence: 93%
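The citation above describes reusing ImageNet-pretrained classifiers as encoder backbones for segmentation networks. A minimal sketch of that pattern, assuming torchvision's ResNet-50 weights and an illustrative upsampling decoder head (not any of the cited architectures), could look like this:

```python
# Sketch: a pretrained ImageNet classifier (ResNet-50 via torchvision, an assumed
# choice) reused as the encoder backbone of a semantic-segmentation network.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

class SegNetWithBackbone(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
        # keep the convolutional stages, drop the average pool and classification head
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # -> 2048 x H/32 x W/32
        self.decoder = nn.Sequential(                                  # simple upsampling head
            nn.Conv2d(2048, 256, 3, padding=1), nn.ReLU(),
            nn.Conv2d(256, n_classes, 1),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = SegNetWithBackbone(n_classes=2)
logits = model(torch.rand(1, 3, 224, 224))  # -> (1, 2, 224, 224) per-pixel class scores
```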
“…When data are scarce, autoencoders can also generate synthetic samples for data augmentation. Autoencoders also show promise in drug development, since they may be used to analyse molecular information to identify possible therapeutic targets (Astaraki et al, 2022; Hecht et al, 2020; Lomacenkova & Arandjelović, 2021; Uzunova et al, 2019).…”
Section: Applications of Deep and Machine Learning in Medical Fields (citation type: mentioning)
confidence: 99%
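A hedged sketch of the autoencoder-based augmentation idea mentioned above: perturb the latent code of real images and decode the result to obtain additional training samples. The trained_ae object, its encoder/decoder attributes, and the noise scale are hypothetical placeholders.

```python
# Sketch: latent-space jittering with a (hypothetical) trained autoencoder that
# exposes .encoder and .decoder modules, producing synthetic variants of real images.
import torch

def augment(trained_ae, images, noise_scale=0.05, n_variants=4):
    """Return `n_variants` decoded variants per input image."""
    with torch.no_grad():
        z = trained_ae.encoder(images)                       # (B, latent_dim)
        variants = []
        for _ in range(n_variants):
            z_noisy = z + noise_scale * torch.randn_like(z)  # jitter the latent code
            variants.append(trained_ae.decoder(z_noisy))
        return torch.cat(variants, dim=0)
```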
“…Then, for each slide, one or more FRs are selected, based on traits derived from the slide review process (e.g., the presence of tumor), and used as masks to identify image areas from which small image subregions (patches) with a fixed pixel resolution at a fixed magnification level are extracted. After the extraction, the patches are filtered to exclude […]. To extract feature vectors from the patches, the study protocol envisages the use of Variational Autoencoders (VAEs), a class of deep neural networks consisting of two main blocks: an encoder and a decoder (30). These are designed to: (i) encode the input data into a lower-dimensional embedding, and (ii) reconstruct the input from the lower-dimensional space.…”
Section: Computational Histopathology (citation type: mentioning)
confidence: 99%
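As a rough illustration of the encoder/decoder structure described above (an assumed architecture, not the cited protocol's actual model), a variational autoencoder for 64x64 patches might look as follows, with the encoder's mean vector serving as the per-patch feature vector:

```python
# Sketch: a VAE that encodes image patches into a Gaussian latent distribution; the
# mean vector is used as the patch feature, the decoder reconstructs the patch.
import torch
import torch.nn as nn

class PatchVAE(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
        )
        self.mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.logvar = nn.Linear(64 * 16 * 16, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterisation trick
        return self.dec(z), mu, logvar

vae = PatchVAE()
patches = torch.rand(16, 3, 64, 64)      # toy 64x64 patches at a fixed magnification
recon, mu, logvar = vae(patches)
features = mu                            # per-patch feature vectors, shape (16, 128)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = nn.functional.mse_loss(recon, patches) + kl
```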