2022
DOI: 10.3390/info13030140

Recognition of Biological Tissue Denaturation Based on Improved Multiscale Permutation Entropy and GK Fuzzy Clustering

Abstract: Recognition of biological tissue denaturation is a vital task in high-intensity focused ultrasound (HIFU) therapy. Multiscale permutation entropy (MPE) is a nonlinear signal-processing method for feature extraction, widely applied to the recognition of biological tissue denaturation. However, typical MPE cannot derive a stable entropy because intensity information is lost during the coarse-graining process. To address this problem, an improved multiscale permutation entropy (IMPE) is proposed in this work. IMPE is ob…
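The abstract is cut off before the IMPE details, but the coarse-graining step it criticizes is the standard one. Below is a minimal Python sketch of that step, assuming the usual non-overlapping-mean definition; the function name coarse_grain is illustrative and not taken from the paper. It shows where amplitude (intensity) detail is collapsed into a single mean per window, which is the information loss the abstract refers to.

```python
import numpy as np

def coarse_grain(x, scale):
    """Standard MPE coarse-graining: average consecutive, non-overlapping
    windows of length `scale`. All amplitude detail inside a window is
    replaced by one mean value, so intensity information is lost as the
    scale factor grows."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale                    # number of complete windows
    return x[:n * scale].reshape(n, scale).mean(axis=1)
```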

Cited by 1 publication (2 citation statements)
References 39 publications
“…Permutation entropy can only detect the complexity and randomness of a time series on a single scale. However, the output time series of complex systems contain characteristic information on multiple scales, which a single-scale permutation entropy analysis can no longer capture [23]. In order to study the multi-scale complexity variation of a time series, multi-scale permutation entropy was proposed.…”
Section: Multi-scale Permutation Entropy (mentioning)
confidence: 99%
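To make the single-scale limitation concrete, here is a minimal Python sketch of permutation entropy and of its multiscale extension via the coarse-graining shown earlier. The embedding dimension m and delay tau are the usual parameters; the function names are illustrative and not taken from the cited papers.

```python
import math
from itertools import permutations

import numpy as np

def permutation_entropy(x, m=3, tau=1):
    """Single-scale permutation entropy: count the ordinal pattern of every
    embedded vector and take the Shannon entropy of the pattern
    distribution, normalised to [0, 1] by log(m!)."""
    x = np.asarray(x, dtype=float)
    patterns = list(permutations(range(m)))
    counts = np.zeros(len(patterns))
    for i in range(len(x) - (m - 1) * tau):
        window = x[i:i + m * tau:tau]              # m samples, spaced by tau
        counts[patterns.index(tuple(np.argsort(window)))] += 1
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p)) / math.log(math.factorial(m))

def multiscale_pe(x, scales=range(1, 11), m=3, tau=1):
    """Multiscale PE: permutation entropy of the coarse-grained series at
    each scale factor (non-overlapping means, as in the previous sketch)."""
    x = np.asarray(x, dtype=float)
    result = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        result.append(permutation_entropy(coarse, m, tau))
    return result
```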
“…(3) The feature map f is convolved with a 1 × 1 convolution kernel along the original height and width to obtain the feature maps F_h and F_w, with the same number of channels as the original. The attention weights g_h in the height direction and g_w in the width direction are obtained after the Sigmoid activation function, as shown in Equation (23).…”
Section: Residual (mentioning)
confidence: 99%
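Equation (23) of the citing paper is not reproduced here, so the following is only a sketch of the mechanics the sentence describes: a 1 × 1 convolution that preserves the channel count, followed by a Sigmoid that turns the responses into attention weights g_h and g_w. A 1 × 1 convolution is a per-position linear map over channels, so it reduces to a matrix product along the channel axis; the names W_h, W_w, and directional_attention are illustrative assumptions, not from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def directional_attention(f, W_h, W_w):
    """f: feature map of shape (C, H, W).
    W_h, W_w: (C, C) weights of the two 1x1 convolutions (channel count
    preserved). Returns the height- and width-direction attention weights
    g_h and g_w, each squashed into (0, 1) by the Sigmoid."""
    F_h = np.einsum('oc,chw->ohw', W_h, f)   # 1x1 conv, height branch
    F_w = np.einsum('oc,chw->ohw', W_w, f)   # 1x1 conv, width branch
    return sigmoid(F_h), sigmoid(F_w)

# Example with random data, just to show that channel count is preserved.
rng = np.random.default_rng(0)
f = rng.standard_normal((8, 16, 16))
g_h, g_w = directional_attention(f,
                                 rng.standard_normal((8, 8)),
                                 rng.standard_normal((8, 8)))
print(g_h.shape, g_w.shape)   # (8, 16, 16) (8, 16, 16)
```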