2021
DOI: 10.1007/978-3-030-87237-3_25
Structure-Preserving Multi-domain Stain Color Augmentation Using Style-Transfer with Disentangled Representations

Cited by 32 publications (13 citation statements) · References 16 publications
“…Several classes of color normalization attempt to address this issue: (1) template color matching maps summary statistics or histograms of RGB values in a target image to those in a reference image (Reinhard et al 2001; Kothari et al 2011; Y.-Y. Wang et al 2007; Magee et al 2009; Janowczyk, Basavanhally, and Madabhushi 2017); (2) color deconvolution (Ruifrok and Johnston 2001) represents each of the H&E dyes as a “stain vector” and substitutes a reference stain vector for the corresponding target vector (Macenko et al 2009; Rabinovich et al n.d.; Trahearn et al 2015; Magee et al 2009; Khan et al 2014; Shafiei et al 2020; Salvi, Michielli, and Molinari 2020; Zheng et al 2019; Vahadane et al 2016); and (3) generative adversarial network (GAN) approaches transfer stain distributions from a reference dataset (rather than a single image) to a target dataset (Bentaieb and Hamarneh 2018; Tarek Shaban et al 2018; Wagner et al 2021). Our repository has considerable variability across the dimensions impacting image color, with six sites contributing data generated at two resolutions (20X and 40X) from different scanner models across multiple years.…”
Section: Discussion
confidence: 99%
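The statistics-matching idea behind class (1) above can be illustrated with a minimal sketch that matches per-channel means and standard deviations directly in RGB (Reinhard et al. actually operate in the lαβ color space; the function name `match_stats` is a hypothetical label, not from any cited implementation):

```python
import numpy as np

def match_stats(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Template color matching (sketch): remap each channel of `source`
    so its mean and standard deviation equal those of `reference`.

    Both inputs are float images of shape (H, W, 3). Done here in raw RGB
    for brevity; Reinhard et al. apply the same idea in the lab color space.
    """
    out = np.empty_like(source, dtype=np.float64)
    for c in range(3):
        s = source[..., c].astype(np.float64)
        r = reference[..., c].astype(np.float64)
        s_std = s.std() if s.std() > 0 else 1.0  # guard against flat channels
        out[..., c] = (s - s.mean()) / s_std * r.std() + r.mean()
    return out
```

By construction, every channel of the output carries the reference image's summary statistics while retaining the source image's spatial content.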
“…Previous works on style transfer mainly focus on designing advanced generative adversarial networks (GANs) for improved image diversity and fidelity, under the assumption that a large-scale dataset with diverse image styles is available [10,11]. Recent findings show that domain shift is closely related to image style changes across different domains [9,12] and can be alleviated by increasing the diversity of training image styles [9,13-17]. One such successful example is MixStyle [9], which generates 'novel' styles by linearly mixing the style statistics of two arbitrary training instances from the same domain at the feature level.…”
Section: Related Work
confidence: 99%
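The MixStyle mixing described above can be sketched as follows, assuming feature maps of shape (C, H, W); this is a simplified NumPy rendering of the idea (keep one instance's content, blend the two instances' channel-wise statistics), not the reference implementation:

```python
import numpy as np

def mixstyle(feat_a: np.ndarray, feat_b: np.ndarray, lam: float) -> np.ndarray:
    """MixStyle-like statistics mixing (sketch).

    feat_a, feat_b: feature maps of shape (C, H, W). The spatial content of
    feat_a is kept, but its per-channel mean/std are replaced by a convex
    combination (weight `lam`) of the statistics of feat_a and feat_b,
    producing a 'novel' style at the feature level.
    """
    eps = 1e-6  # numerical stability for near-constant channels
    mu_a = feat_a.mean(axis=(1, 2), keepdims=True)
    sig_a = feat_a.std(axis=(1, 2), keepdims=True) + eps
    mu_b = feat_b.mean(axis=(1, 2), keepdims=True)
    sig_b = feat_b.std(axis=(1, 2), keepdims=True) + eps
    mu_mix = lam * mu_a + (1 - lam) * mu_b
    sig_mix = lam * sig_a + (1 - lam) * sig_b
    # Normalize feat_a, then re-style it with the mixed statistics.
    return (feat_a - mu_a) / sig_a * sig_mix + mu_mix
```

With `lam = 1` the input is returned unchanged; intermediate values interpolate between the two instances' styles, which is how the method enlarges the diversity of training styles.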
“…Style transfer has been widely used for image synthesis from one modality to another, e.g., from computed tomography (CT) to magnetic resonance imaging (MRI) [6], from MRI to CT [7], and from MRI to positron emission tomography (PET) [8]. It has also been shown to be effective for histological images, e.g., for stain color augmentation [9]. In terms of style transfer architectures, a conditional GAN (cGAN) [10] can be used for aligned image pairs, and its generator can generate images of a certain class.…”
Section: Introduction
confidence: 99%
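One common way a cGAN generator is conditioned on a class, sketched below, is to one-hot encode the label, broadcast it spatially, and concatenate it to the image channels. The exact conditioning scheme varies across cGAN variants, and the helper name here is a hypothetical illustration:

```python
import numpy as np

def cgan_generator_input(image: np.ndarray, label: int, n_classes: int) -> np.ndarray:
    """Build a class-conditioned generator input (sketch).

    image: float array of shape (C, H, W). The class label is one-hot
    encoded, tiled over the spatial dimensions, and stacked onto the image
    channels, so a single generator can produce images of a chosen class.
    """
    c, h, w = image.shape
    onehot = np.zeros((n_classes, h, w), dtype=image.dtype)
    onehot[label] = 1.0  # the channel for `label` becomes an all-ones map
    return np.concatenate([image, onehot], axis=0)
```

The generator then consumes this `(C + n_classes, H, W)` tensor in place of the raw image, which is what lets a cGAN map aligned image pairs while steering the output toward a given class.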