2021
DOI: 10.1016/j.irbm.2021.06.006
Deep Learning Using Havrda-Charvat Entropy for Classification of Pulmonary Optical Endomicroscopy

Cited by 10 publications (5 citation statements) · References 30 publications
“…Lebesgue measure for the continuous case) by a Radon-Nikodym derivative between probability measures. Shannon's entropy can be generalized to other entropies such as Rényi [15] and Tsallis-Havrda-Charvat [16,17]. In this paper, we are interested in a particular generalization of Shannon cross-entropy: Tsallis-Havrda-Charvat cross-entropy [18].…”
Section: Introduction (mentioning)
confidence: 99%
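For reference, the Havrda-Charvat (Tsallis) entropy named in the statement above admits the following commonly used parametrisation, together with the cross-entropy generalization it induces; the exact normalization constant is an assumption here, since conventions differ slightly across the cited papers:

H_\alpha(p) = \frac{1}{\alpha - 1}\Big(1 - \sum_i p_i^{\alpha}\Big), \qquad \alpha > 0,\ \alpha \neq 1

H_\alpha(p, q) = \frac{1}{\alpha - 1}\sum_i p_i\,\big(1 - q_i^{\alpha - 1}\big)

As \alpha \to 1, these expressions recover the Shannon entropy -\sum_i p_i \log p_i and the Shannon cross-entropy -\sum_i p_i \log q_i, respectively.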
“…[24], the maximization of the entropy measure is studied for different classes of entropies such as Tsallis-Havrda-Charvat; the maximization of Tsallis-Havrda-Charvat entropy under constraints appears to be a way to generalize Gaussian distributions. In our previous work [17], Tsallis-Havrda-Charvat cross-entropy is used for the detection of noisy images in pulmonary microendoscopy. To capitalize and improve on this previous work, we propose to use deep learning in this paper.…”
Section: Introduction (mentioning)
confidence: 99%
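As context for how such a loss might be plugged into a deep-learning pipeline, here is a minimal NumPy sketch of a Havrda-Charvat cross-entropy loss under the parametrisation given earlier; the function name, the alpha value, and the clipping constant are illustrative assumptions, not the implementation from the cited work.

import numpy as np

def havrda_charvat_cross_entropy(p, q, alpha=1.5, eps=1e-12):
    """Havrda-Charvat (Tsallis) cross-entropy between a target distribution p
    and a predicted distribution q, both 1-D arrays over the same classes.
    As alpha -> 1 the value approaches the Shannon cross-entropy -sum(p * log q).
    Sketch only; the exact normalization may differ from the cited papers."""
    q = np.clip(q, eps, 1.0)  # guard against zero probabilities before exponentiation
    return np.sum(p * (1.0 - q ** (alpha - 1.0))) / (alpha - 1.0)

# Hypothetical one-hot target and softmax-style prediction
p = np.array([0.0, 1.0, 0.0])
q = np.array([0.1, 0.7, 0.2])
print(havrda_charvat_cross_entropy(p, q, alpha=1.5))  # generalized loss
print(-np.sum(p * np.log(q)))                         # Shannon cross-entropy for comparison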
“…Entropy is a measurable physical quantity most commonly linked with disorder, unpredictability, or uncertainty (E.-W. Huang et al., 2022). Entropy is defined as the smallest average encoding size per transmission with which a source can convey a message to a destination without losing any data (Janik, 2019). Cross-entropy measures the difference between two probability distributions for a given random variable or set of occurrences (Brochet, Lapuyade-Lahorgue, Bougleux, Salaün, & Ruan, 2021; Gordon-Rodriguez, Loaiza-Ganem, Pleiss, & Cunningham, 2020; Pacheco, Ali, & Trappenberg, 2019; Ruby & Yendapalli, 2020; Z. Z. Wang & Goh, 2022).…”
Section: Overfitting and Underfitting in ML Models (mentioning)
confidence: 99%
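A brief worked example of that definition, with hypothetical distributions and natural logarithms, may help; the numbers are made up for illustration only.

import numpy as np

# Two hypothetical distributions over three outcomes
p = np.array([0.5, 0.3, 0.2])   # "true" distribution
q = np.array([0.4, 0.4, 0.2])   # model / approximating distribution

cross_entropy = -np.sum(p * np.log(q))     # H(p, q)
entropy_p     = -np.sum(p * np.log(p))     # H(p)
kl_divergence = cross_entropy - entropy_p  # D_KL(p || q), always >= 0

print(cross_entropy, entropy_p, kl_divergence)  # approx. 1.055, 1.030, 0.025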
“…As a loss function, cross-entropy is extensively employed in ML [73]. In classification, each example has a known class label with a probability of 1.0, whereas all other labels have a probability of 0.0 [74]. In this case, the model estimates the probability that a given example corresponds to each class label [75].…”
Section: Cross-entropy and Cross-validation (mentioning)
confidence: 99%
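The one-hot case described in that statement can be made concrete with a short sketch (class probabilities are invented for illustration): when the target puts probability 1.0 on a single class, the cross-entropy collapses to the negative log of the probability the model assigns to that class.

import numpy as np

predicted = np.array([0.10, 0.75, 0.15])  # model's class probabilities
target    = np.array([0.0, 1.0, 0.0])     # one-hot label: true class is index 1

loss_full   = -np.sum(target * np.log(predicted))  # full cross-entropy over all classes
loss_simple = -np.log(predicted[1])                # same value: -log p(true class)

print(loss_full, loss_simple)  # both approx. 0.2877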