2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), 2020
DOI: 10.1109/embc44109.2020.9176279

Supervision and Source Domain Impact on Representation Learning: A Histopathology Case Study

Abstract: As many algorithms depend on a suitable representation of data, learning unique features is considered a crucial task. Although supervised techniques using deep neural networks have boosted the performance of representation learning, the need for large sets of labeled data limits the application of such methods. For example, high-quality delineation of regions of interest in the field of pathology is a tedious and time-consuming task due to the large image dimensions. In this work, we explored the perform…


Cited by 14 publications (10 citation statements). References 16 publications.
“…The large CRC dataset includes nine classes of tissue, namely adipose, background, debris, lymphocytes, mucus, smooth muscle, normal colon mucosa (normal), cancer-associated stroma, and colorectal adenocarcinoma epithelium (tumor). Note that the literature has shown the effectiveness of triplet-network variants for histopathology data, both with triplet loss [26] and with NCA loss [27]; this shows the importance of validating our approaches on this domain.…”
Section: Experiments, A. Datasets (mentioning)
confidence: 75%
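For readers unfamiliar with the triplet-loss setup referenced in the quote above, here is a minimal PyTorch sketch. The embedding network, patch size, and margin are illustrative assumptions, not details taken from the cited works [26], [27].

```python
# Minimal triplet-loss sketch (illustrative; the margin, patch size, and
# embedding network are assumptions, not taken from the cited papers).
import torch
import torch.nn as nn

embedder = nn.Sequential(           # stand-in for a CNN feature extractor
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 128),
)
triplet_loss = nn.TripletMarginLoss(margin=1.0)

# anchor/positive share a tissue class; negative comes from another class
anchor = torch.randn(8, 3, 64, 64)
positive = torch.randn(8, 3, 64, 64)
negative = torch.randn(8, 3, 64, 64)

loss = triplet_loss(embedder(anchor), embedder(positive), embedder(negative))
loss.backward()
```

An NCA-style loss would replace the margin-based term with a softmax over pairwise distances, while the grouping of same-class and different-class patches stays the same.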
“…Various loss functions exist for supervised metric learning by neural networks. Supervised loss functions can teach the network to separate classes in the embedding space (Sikaroudi et al., 2020b). For this, we use a network whose last layer is for classification of data points.…”
Section: Supervised Metric Learning by Supervised Loss Functions (mentioning)
confidence: 99%
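As a rough illustration of the idea in the quote above, the sketch below pairs a classification head (the last layer) with a backbone whose output serves as the embedding taken from the one-to-last layer. Layer sizes and the class count are assumptions for illustration only.

```python
# Sketch of a "classification head over an embedding" network
# (layer sizes and class count are illustrative assumptions).
import torch
import torch.nn as nn

class ClassifierWithEmbedding(nn.Module):
    def __init__(self, in_dim=512, embed_dim=128, num_classes=9):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, embed_dim), nn.ReLU())
        self.head = nn.Linear(embed_dim, num_classes)  # last layer: classification

    def forward(self, x):
        z = self.backbone(x)       # one-to-last layer: the embedding
        return self.head(z), z     # logits for the loss, embedding for metric use

model = ClassifierWithEmbedding()
logits, embedding = model(torch.randn(4, 512))
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 2, 3]))
```

Training on the classification loss shapes the embedding space as a side effect, which is what the quoted passage relies on.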
“…where (y_i)_l and f_o(x_i)_l denote the l-th elements of y_i and f_o(x_i), respectively. Minimizing this loss separates classes for classification; this separation of classes also gives us a discriminating embedding in the one-to-last layer (Sikaroudi et al., 2020b; Boudiaf et al., 2020).…”
Section: Cross-Entropy Loss (mentioning)
confidence: 99%
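The equation that the quote's "where" clause refers to is not reproduced on this page; a standard cross-entropy form consistent with that notation, over n samples and c classes, would be:

```latex
% Assumed standard cross-entropy form matching the notation in the quote;
% the cited equation itself is not shown on this page.
\mathcal{L}_{\mathrm{CE}} = -\sum_{i=1}^{n} \sum_{l=1}^{c} (y_i)_l \, \log\!\big( f_o(x_i)_l \big)
```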
“…In the context of histology images, Medela et al. (2019) use a triplet loss (Schroff et al., 2015) to pre-train an encoder followed by a fine-tuned SVM classifier for few-shot domain adaptation. Sikaroudi et al. (2020) and Teh & Taylor (2020) study learning with less data in histology images. Concurrently with our work, Shakeri et al. (2021) propose a benchmark for few-shot classification of histological images.…”
Section: Related Work (mentioning)
confidence: 99%
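The pre-trained-encoder-plus-SVM recipe attributed to Medela et al. (2019) in the quote above can be sketched as follows; the encoder, data shapes, and kernel choice are placeholders, not details from that paper.

```python
# Sketch of the "pre-trained encoder + SVM on embeddings" few-shot recipe
# (encoder, data shapes, and kernel are placeholders, not the cited setup).
import numpy as np
from sklearn.svm import SVC

def encode(patches):
    # placeholder for a triplet-loss pre-trained encoder: here it just
    # flattens each patch into a feature vector
    return patches.reshape(len(patches), -1)

support_x = np.random.rand(20, 32, 32, 3)       # few labeled target-domain patches
support_y = np.array([0] * 10 + [1] * 10)       # their class labels
query_x = np.random.rand(5, 32, 32, 3)          # unlabeled patches to classify

clf = SVC(kernel="rbf").fit(encode(support_x), support_y)
predictions = clf.predict(encode(query_x))
```

Only the lightweight SVM is fit on the small labeled set, while the encoder stays frozen, which is what makes the approach attractive in the few-shot regime discussed above.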