2020
DOI: 10.1016/j.neucom.2018.11.106

Multilayer probability extreme learning machine for device-free localization

Abstract: Device-free localization (DFL) is becoming one of the new techniques in the wireless localization field, owing to the advantage that the target to be localized does not need to carry any electronic device. A key issue in DFL is how to characterize the influence of the target on the wireless links, so that the target's location can be accurately estimated by analyzing the changes in the links' signals. Most of the existing related works extract the useful information from the links…

Cited by 32 publications (9 citation statements)
References: 57 publications
“…In [57], we stacked ELM-AEs for designing a multilayer probability ELM (MP-ELM). Different from the aforementioned ML-ELMs, MP-ELM outputs the probability of the predicted results belonging to all the classes instead of fitting to data, which can significantly alleviate the effects of accumulated errors on the final predicted results.…”
Section: General Forms of Stacked ELM-AE
confidence: 99%
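To make the stacked-ELM-AE idea in this statement concrete, the sketch below is a minimal, self-contained NumPy illustration: each ELM autoencoder reconstructs its own input through a random hidden layer and reuses the transpose of its output weights as a learned feature mapping, and the final classifier turns its outputs into per-class probabilities via a softmax. The layer sizes, the softmax step, and all function names are assumptions for illustration only; the actual probability formulation of MP-ELM in the cited paper may differ.

```python
import numpy as np

def _sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

def _random_layer(n_in, n_hidden, rng):
    # Random, fixed input weights and biases (the ELM ingredient).
    W = rng.uniform(-1.0, 1.0, size=(n_in, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=(1, n_hidden))
    return W, b

def _solve_beta(H, T, reg):
    # Closed-form ridge-regularized solution for the output weights.
    return np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ T)

def stacked_elm_probabilities(X, T, layer_sizes=(128, 64), reg=1e-3, seed=0):
    """Stack ELM autoencoders for feature learning, then emit per-class
    probabilities from a final ELM classifier (softmax over its outputs).
    Illustrative reading of the probability-output idea, not the exact
    MP-ELM formulation from the cited paper."""
    rng = np.random.default_rng(seed)
    feats = X
    for n in layer_sizes:
        # ELM-AE: reconstruct the layer input, reuse beta^T as the mapping.
        W, b = _random_layer(feats.shape[1], n, rng)
        H = _sigmoid(feats @ W + b)
        beta = _solve_beta(H, feats, reg)
        feats = feats @ beta.T
    # Final supervised ELM on the learned features.
    W, b = _random_layer(feats.shape[1], layer_sizes[-1], rng)
    H = _sigmoid(feats @ W + b)
    beta = _solve_beta(H, T, reg)
    scores = H @ beta
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)   # per-class probabilities
```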
“…Extreme learning machine (ELM) is a special type of feedforward neural network, in which the input weights and hidden bias are assigned randomly and then remain the same throughout the training process while the output weights are obtained analytically. This non-iterative learning mechanism enables it to achieve much faster training speed than the traditional neural networks in many scenarios [50], [51]. This section briefly reviews the training mechanism of ELM and online sequential ELM (OS-ELM).…”
Section: Review of ELM and OS-ELM
confidence: 99%
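As a rough illustration of the non-iterative mechanism described in that statement, the following sketch trains a basic ELM in NumPy: the input weights and hidden biases are drawn randomly and never updated, while the output weights are obtained analytically by a ridge-regularized least-squares solve rather than by back-propagation. The sigmoid activation, the regularization term, and the function names are illustrative assumptions, not details taken from the cited papers.

```python
import numpy as np

def train_elm(X, T, n_hidden=100, reg=1e-3, seed=0):
    """Basic ELM: random fixed hidden layer, analytic output weights.

    X : (N, d) input matrix, T : (N, m) target matrix (e.g. one-hot labels).
    """
    rng = np.random.default_rng(seed)
    # Input weights and hidden biases are assigned randomly and then
    # remain the same throughout training.
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, size=(1, n_hidden))
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden-layer output matrix
    # Output weights solved in closed form (regularized least squares),
    # with no iterative weight updates.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

For classification, the predicted class is simply the index of the largest output. OS-ELM, mentioned in the same statement, keeps the same random hidden layer but updates the output weights recursively as new data chunks arrive instead of solving once on the full batch.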
“…Zhang et al [32] proposed an ELM algorithm incorporated with a residual compensation strategy and demonstrated its efficiency in an RSSI-based DFL application. Additionally, probability-based machine learning algorithms, such as multilayer probability ELM (MP-ELM) [33], are also proposed to implement a DFL application. Gao et al [34] utilized an ELM Ensemble together with Principal Component Analysis (PCA) to implement a DFL application.…”
Section: Related Work
confidence: 99%