2022
DOI: 10.1109/tnnls.2021.3056188
Weighted Error Entropy-Based Information Theoretic Learning for Robust Subspace Representation

Cited by 16 publications (10 citation statements)
References 52 publications
“…μ_WiFi for WiFi APs 11, 13, 14, 16, 12, and 15 are set to 2.39 and 1.43 ms, respectively. These average packet arrival intervals correspond to LTE subframe numbers 3 and 5, respectively.…”
Section: Existing Problems and The Enhanced Proposed Scheme
confidence: 99%
“…To solve this problem, we further propose an enhanced joint channel/subframe number selection scheme to deal with these undesired frequent channel switches. In the enhanced proposed scheme, after the action selection probability is derived based on Equation (5), if a channel switch happens, the Q value is further updated by Equation (12).…”
Section: Existing Problems and The Enhanced Proposed Scheme
confidence: 99%
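The enhanced scheme described in this excerpt can be sketched as a standard Q-learning loop with an extra update applied when the chosen channel differs from the previous one. The softmax selection, learning rate, and the form of the switch penalty below are assumptions for illustration; they stand in for the cited paper's Equations (5) and (12), which are not reproduced here.

```python
import math

def softmax_probs(q_values, tau=1.0):
    """Action selection probabilities from Q values
    (a stand-in for the paper's Eq. (5); temperature tau is assumed)."""
    exps = [math.exp(q / tau) for q in q_values]
    total = sum(exps)
    return [e / total for e in exps]

def update_q(q_values, action, reward, prev_action,
             alpha=0.1, switch_penalty=0.5):
    """Incremental Q update, plus an extra penalty when a channel
    switch occurs (a stand-in for the paper's Eq. (12); the additive
    penalty form and its magnitude are assumptions)."""
    q_values[action] += alpha * (reward - q_values[action])
    if prev_action is not None and action != prev_action:
        q_values[action] -= switch_penalty
    return q_values
```

Penalizing the Q value of a newly selected channel discourages the agent from oscillating between channels of similar quality, which is the "undesired frequent channel switches" problem the excerpt targets.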
“…They have developed advanced smart service systems and platforms and built an ecological smart logistics supply chain. Some intelligent algorithms such as machine learning methods have been developed [1][2][3].…”
Section: Introduction
confidence: 99%
“…However, DTL usually fails to reduce the number of parameters, which results in models too large to deploy on low-computing-power hardware systems. For (2), the network structure is optimized [24,25], and particular structures of small efficient CNNs are usually designed with less computation and small volume that is easy to popularize in application. Rahman [26] proposes a two-stage small CNN architecture, which reduced the model size by 99% compared to VGG-16 while retaining an accuracy of 93.3%.…”
confidence: 99%
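The parameter savings behind the small efficient CNNs mentioned in this excerpt can be illustrated with simple counting. The sketch below compares a standard 3×3 convolution against a depthwise-separable one (a common building block of compact CNNs such as MobileNet); the specific channel sizes are illustrative assumptions, not figures from the cited papers.

```python
def conv2d_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (no bias):
    every output channel filters all input channels."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1 x 1 pointwise conv (no bias)."""
    return c_in * k * k + c_in * c_out

# Illustrative layer: 256 input channels, 256 output channels, 3x3 kernel.
std = conv2d_params(256, 256, 3)               # 589,824 parameters
dws = depthwise_separable_params(256, 256, 3)  # 67,840 parameters
print(f"standard: {std}, separable: {dws}, ratio: {dws / std:.3f}")
```

At these sizes the separable layer uses roughly 11% of the standard layer's parameters, which is the kind of reduction that makes large model-size cuts (like the 99% figure quoted above for a purpose-built small CNN versus VGG-16) plausible.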