2018
DOI: 10.1089/adt.2018.859
Know When You Don't Know: A Robust Deep Learning Approach in the Presence of Unknown Phenotypes

Abstract: Deep convolutional neural networks show outstanding performance in image-based phenotype classification given that all existing phenotypes are presented during the training of the network. However, in real-world high-content screening (HCS) experiments, it is often impossible to know all phenotypes in advance. Moreover, novel phenotype discovery itself can be an HCS outcome of interest. This aspect of HCS is not yet covered by classical deep learning approaches. When presenting an image with a novel phenotype …

Cited by 13 publications (8 citation statements) · References 16 publications
“…Only the variation ratio yielded results slightly worse (0.79 [0.78, 0.81]). Accordingly, our results confirmed previous findings that erroneous predictions correlate with high uncertainties (Dürr et al., 2018; Leibig et al., 2017; Nair et al., 2020). Integrating uncertainty information into aggregation models (Figure 4 (b)-(d) and (f)-(h)) is therefore supposed to improve patient classification and appears reasonable.…”
Section: Test Performance of the Derived Uncertainty Measures (supporting, confidence: 90%)
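The variation ratio mentioned above is one of the simplest uncertainty measures derivable from repeated stochastic forward passes: it is one minus the fraction of passes that vote for the modal class. A minimal sketch (the function name and toy inputs are illustrative, not from the cited papers):

```python
import numpy as np

def variation_ratio(mc_predictions):
    """Variation ratio over T stochastic forward passes.

    mc_predictions: array of shape (T, n_classes) holding the softmax
    output of each of T Monte Carlo Dropout passes for a single input.
    Returns 1 - (frequency of the modal class among the T argmax votes).
    """
    votes = np.argmax(mc_predictions, axis=1)        # class vote per pass
    _, counts = np.unique(votes, return_counts=True)
    modal_freq = counts.max() / len(votes)
    return 1.0 - modal_freq

# A confident input: every pass votes for class 0 -> variation ratio 0.0.
confident = np.array([[0.90, 0.10], [0.80, 0.20], [0.95, 0.05], [0.85, 0.15]])
print(variation_ratio(confident))  # 0.0

# An uncertain input: votes split 2/2 -> variation ratio 0.5.
uncertain = np.array([[0.60, 0.40], [0.40, 0.60], [0.70, 0.30], [0.30, 0.70]])
print(variation_ratio(uncertain))  # 0.5
```

Because it only counts votes, the variation ratio discards the confidence of each individual pass, which is one plausible reason it performed slightly worse than the other measures in the quoted evaluation.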
“…Moreover, CNNs tend to predict overly high probabilities for ambiguous or unknown cases, which are commonly seen in medicine. Reliable uncertainty estimates enable filtering of such cases (Dürr et al., 2018; Leibig et al., 2017; Nair et al., 2020), which may subsequently be returned to physicians for additional inspection. In addition, image-level uncertainties can be used to improve the analysis procedure when combining image predictions into a patient diagnosis.…”
Section: Introduction (mentioning, confidence: 99%)
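The filter-and-refer workflow described above can be sketched as a simple threshold on a per-sample uncertainty score; predictions below the threshold are accepted, the rest are referred for manual review. The function name, inputs, and threshold are illustrative assumptions, not the cited papers' implementation:

```python
import numpy as np

def refer_uncertain(probs, uncertainties, threshold):
    """Split model predictions into accepted and referred sets.

    probs:         (N, C) predicted class probabilities.
    uncertainties: (N,) per-sample uncertainty scores (higher = less sure).
    Samples whose uncertainty exceeds the threshold are referred back
    for manual inspection instead of being classified automatically.
    """
    keep = uncertainties <= threshold
    accepted = np.argmax(probs[keep], axis=1)   # classes for confident samples
    referred = np.where(~keep)[0]               # indices handed back for review
    return accepted, referred

probs = np.array([[0.90, 0.10],   # confident class 0
                  [0.55, 0.45],   # ambiguous
                  [0.20, 0.80]])  # confident class 1
unc = np.array([0.05, 0.45, 0.10])
accepted, referred = refer_uncertain(probs, unc, threshold=0.3)
print(accepted)  # [0 1]
print(referred)  # [1]
```

Sweeping the threshold trades off automation rate against accuracy on the retained samples, which is how such rejection curves are typically evaluated.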
“…Enabled by the commercialization of various HCA instruments beginning in the late 1990s, these automated microscopic platforms have greatly expanded the ability to perform cell-based screening by reducing the labor time and variability of manual microscopic analysis. The ability of HCA instruments to multiplex data types and associate numerous features with individual cells can easily generate large datasets, which allows for feature analysis on a massive scale utilizing advanced machine learning techniques [2]. Machine learning supports mutable algorithmic analysis of large datasets, without pre-defined data definitions.…”
Section: Introduction (mentioning, confidence: 99%)
“…the behaviour of the algorithm if presented with non-functional RNA sequences, i.e., sequences randomly generated by shuffling the initial set while preserving the di-nucleotide composition of each original sequence, or with uncertain sequences. Recently, it has been shown that excluding uncertain samples from the test set can drastically improve model performance [23,24]. To this aim, we adopted Monte Carlo Dropout to estimate the classification uncertainty of a test sample and decide whether to reject the sample.…”
Section: Experiments Setup (mentioning, confidence: 99%)
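Monte Carlo Dropout, as referenced in the quote above, keeps dropout active at prediction time and runs T stochastic forward passes, yielding T softmax vectors that approximate a predictive distribution. A toy NumPy sketch with a one-hidden-layer classifier (the weights, sizes, and dropout rate are invented for illustration, not the cited model):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mc_dropout_predict(x, W1, W2, p_drop=0.5, T=100):
    """Monte Carlo Dropout for a toy one-hidden-layer classifier.

    Dropout stays active at prediction time: each of the T passes samples
    a fresh Bernoulli keep-mask on the hidden layer, so the T softmax
    vectors approximate the model's predictive distribution.
    """
    preds = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
        mask = rng.random(h.shape) >= p_drop   # Bernoulli keep-mask
        h = h * mask / (1.0 - p_drop)          # inverted-dropout scaling
        preds.append(softmax(h @ W2))
    return np.stack(preds)                     # shape (T, n_classes)

# Toy weights for a 3-feature, 4-hidden-unit, 2-class model.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 2))
samples = mc_dropout_predict(np.array([1.0, -0.5, 0.3]), W1, W2)
mean_prob = samples.mean(axis=0)  # averaged predictive probabilities
spread = samples.std(axis=0)      # disagreement across passes = uncertainty
```

A large spread across passes flags the sample as a candidate for rejection, which is the decision rule the quoted setup applies.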
“…, C. From such a distribution the uncertainty of classification can be estimated in different ways [23,24].…”
Section: Experiments Setup (mentioning, confidence: 99%)
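Two standard ways to estimate uncertainty from such a Monte Carlo predictive distribution are the entropy of the mean prediction (total uncertainty) and the mutual information between prediction and model (disagreement between passes). A hedged sketch, with function names and toy inputs chosen for illustration:

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Entropy of the mean predictive distribution (total uncertainty)."""
    mean_p = mc_probs.mean(axis=0)
    return -np.sum(mean_p * np.log(mean_p + 1e-12))

def mutual_information(mc_probs):
    """Total entropy minus the expected per-pass entropy.

    This isolates model disagreement (epistemic uncertainty): it is high
    only when individual passes are confident but contradict each other.
    """
    per_pass = -np.sum(mc_probs * np.log(mc_probs + 1e-12), axis=1)
    return predictive_entropy(mc_probs) - per_pass.mean()

agree = np.array([[0.50, 0.50],      # passes agree, but each is unsure
                  [0.50, 0.50]])
disagree = np.array([[0.99, 0.01],   # passes are confident yet contradictory
                     [0.01, 0.99]])
# Both inputs have near-maximal predictive entropy, but only the
# disagreeing passes yield high mutual information.
```

The choice between such measures matters in practice: the variation ratio, entropy, and mutual information can rank the same samples differently, as the performance gap noted in the first citation statement illustrates.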