2023
DOI: 10.1016/j.kjs.2023.05.004

On Shannon entropy and its applications

Cited by 16 publications (5 citation statements); references 10 publications.
“…In the 1940s, Claude Shannon introduced the concept of information entropy by integrating statistical thermodynamics, and provided a mathematical expression for calculating specific information entropy, thereby addressing the quantification issue of information. [28] Information entropy, also known as Shannon entropy, is commonly described using probability theory to measure the uncertainty in a system.…”
Section: Information Entropy (mentioning; confidence: 99%)
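The uncertainty measure the excerpt describes is Shannon's entropy, H(X) = -Σᵢ pᵢ log₂ pᵢ. A minimal sketch of the computation (the function name is illustrative, not from the cited paper):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits.
    Zero-probability outcomes contribute nothing, by convention."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform distribution is maximally uncertain for a given number of outcomes.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits
```

A degenerate distribution (one outcome with probability 1) gives H = 0, matching the intuition that a certain outcome carries no uncertainty.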
“…The approach presented in this study seeks to automatically identify the optimal neighborhood radius using the entropy feature (E f ), which characterizes data distribution and provides insight into its structure. Entropy and data volume are closely linked: larger datasets exhibit greater disorder and entropy, whereas smaller datasets offer fewer choices, resulting in reduced entropy [83]. The aim here is to utilize this feature within a neighborhood to identify the optimal radius that enhances differentiation among the three primary geometric patterns (linearity, planarity, and scatter).…”
Section: Neighborhood Scale (mentioning; confidence: 99%)
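A sketch of how such an entropy-driven radius search might look. The eigenvalue-based linearity/planarity/scatter features and all function names here are illustrative assumptions about the approach the excerpt summarizes, not the paper's code:

```python
import numpy as np

def dimensionality_entropy(points):
    """Entropy feature over eigenvalue-based shape descriptors:
    linearity L, planarity P, scatter S (they sum to 1 by construction),
    scored as E_f = -(L ln L + P ln P + S ln S). Low entropy means one
    geometric pattern clearly dominates the neighborhood."""
    cov = np.cov(points.T)
    ev = np.sort(np.linalg.eigvalsh(cov))[::-1]      # l1 >= l2 >= l3
    l1, l2, l3 = np.maximum(ev, 1e-12)               # guard tiny/negative values
    L = (l1 - l2) / l1                               # linearity
    P = (l2 - l3) / l1                               # planarity
    S = l3 / l1                                      # scatter
    probs = np.maximum(np.array([L, P, S]), 1e-12)   # avoid log(0)
    return float(-(probs * np.log(probs)).sum())

def best_radius(cloud, center, radii):
    """Pick the candidate radius whose neighborhood minimizes the entropy
    feature, i.e. best separates the three geometric patterns."""
    best_r, best_e = None, np.inf
    for r in radii:
        nbrs = cloud[np.linalg.norm(cloud - center, axis=1) <= r]
        if len(nbrs) < 4:                            # too few points for a stable covariance
            continue
        e = dimensionality_entropy(nbrs)
        if e < best_e:
            best_r, best_e = r, e
    return best_r
```

A near-linear neighborhood scores close to 0 (linearity dominates), while a neighborhood mixing patterns scores higher, which is what makes minimizing the entropy a usable selection criterion.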
“…The hyper-parameters (a, b) are selected for Bayesian computations of the parameter λ and Shannon's entropy in such a manner that the prior mean is precisely identical to the true value of the parameter, i.e., λ = a/b. Specifically, we consider (a, b) = (3, 4) and (3, 2) for λ = 0.75 and 1.5, respectively. When computing Bayes estimators under the LINEX loss function, we consider the loss function parameter c = −0.5 and 0.5.…”
Section: Monte Carlo Simulation Study (mentioning; confidence: 99%)
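Under LINEX loss with parameter c, the Bayes estimator of λ is −(1/c) ln E[e^(−cλ) | data]. Assuming (as the gamma-prior mean a/b in the excerpt suggests) a Gamma(a, b) prior with exponential data, the posterior is Gamma(a + n, b + Σxᵢ) and its moment generating function gives a closed form. A sketch under those assumptions (the function name is illustrative):

```python
import math

def linex_bayes_rate(sum_x, n, a, b, c):
    """Bayes estimator of an exponential rate lambda under LINEX loss
    with parameter c, given a Gamma(a, b) prior (prior mean a/b).
    Posterior: Gamma(alpha, beta), alpha = a + n, beta = b + sum_x.
    LINEX rule: -(1/c) * ln E[exp(-c*lambda)] = (alpha/c) * ln(1 + c/beta),
    valid for c > -beta."""
    alpha, beta = a + n, b + sum_x
    return (alpha / c) * math.log1p(c / beta)
```

Positive c penalizes overestimation, pulling the estimate below the posterior mean α/β; negative c (e.g. the text's c = −0.5) penalizes underestimation and pulls it above; as c → 0 the estimator tends to the posterior mean.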
“…The HPD credible intervals have smaller simulated average lengths than those of the frequentist confidence intervals. [Flattened simulation table omitted: average estimates and simulated errors for parameter settings (3, 20, 8) through (5, 50, 40); the numeric columns were garbled and truncated in extraction.]…”
Section: Monte Carlo Simulation Study (mentioning; confidence: 99%)