2020
DOI: 10.1007/978-3-030-34328-6_8
Energy Efficient ACPI and JEHDO Mechanism for IoT Device Energy Management in Healthcare

Cited by 25 publications (16 citation statements)
References 7 publications
“…Generally, two context layers have been used in RNNs: the Elman context layer and the Jordan context layer, both with some alterations from the original Elman and Jordan recurrent neural networks. The Elman context layer differs from the original Elman RNN in that the two context neurons receive inputs from the output of the hidden layer after a delay of one time unit, and from themselves (Amit Kumar, 2018; Azhagesan, 2018; Raveendra, 2019; Balaji, 2019; Sampathkumar, 2020; Jayanthiladevi, 2018). In the Jordan context layer, the change is that the context neurons receive inputs from the output error of the system after a delay of one time unit, and from themselves. There are two neurons with self-inputs in both context layers.…”
Section: Recurrent Neural Network
confidence: 99%
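The Elman context layer described above can be sketched in a few lines: the context neurons hold the hidden layer's output delayed by one time step and also feed back on themselves. This is an illustrative sketch only; the function name, weight layout, and activation choice are assumptions, not taken from the cited work.

```python
# Minimal sketch of an Elman-style context layer (illustrative;
# names, sizes, and tanh activation are assumptions).
import math

def elman_step(x, context, W_in, W_ctx, W_self):
    """One time step: each hidden unit sees the current input plus the
    context neurons; the context is then updated to the new hidden
    output (a one-time-unit delay) plus a weighted self-input."""
    hidden = []
    for j in range(len(W_in)):
        s = sum(wi * xi for wi, xi in zip(W_in[j], x))
        s += sum(wc * c for wc, c in zip(W_ctx[j], context))
        hidden.append(math.tanh(s))
    # Context update: delayed hidden output plus self-input term.
    new_context = [h + w * c for h, c, w in zip(hidden, context, W_self)]
    return hidden, new_context
```

A Jordan-style variant would differ only in what feeds the context neurons: the network's output error, rather than the hidden layer's output, delayed by one time unit.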
“…The performance of the framework depends on a good impedance match between the feeder and the antenna. It is possible to vary the feed impedance of a Yagi-Uda antenna by adjusting the spacing between the elements [7]. In practice, however, it is not always feasible to modify the separation between the elements.…”
Section: RF Signal Based Harvesting System
confidence: 99%
“…We use machine learning to create a blacklist of the subject, constructing a decision tree designed for multiple distributed results. We also used Multinomial NB (the method implementing this algorithm) from the Scikit-Learn Python library [17]. This algorithm assigns a preset label to each pair, producing a predictive trust rating.…”
Section: Proposed System
confidence: 99%
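The quoted passage uses scikit-learn's MultinomialNB classifier. To make the underlying rule concrete, here is a from-scratch sketch of the same Multinomial Naive Bayes decision rule (log-prior plus summed Laplace-smoothed token log-likelihoods); the function names and the toy labels are invented for illustration and are not from the cited work.

```python
# From-scratch sketch of the Multinomial Naive Bayes rule behind
# scikit-learn's MultinomialNB. Names and data are illustrative.
import math
from collections import Counter

def train_mnb(docs, labels, alpha=1.0):
    """docs: list of token lists; labels: parallel class labels.
    Returns log-priors and Laplace-smoothed token log-likelihoods."""
    classes = set(labels)
    vocab = {t for d in docs for t in d}
    priors, likelihoods = {}, {}
    for c in classes:
        class_docs = [d for d, l in zip(docs, labels) if l == c]
        priors[c] = math.log(len(class_docs) / len(docs))
        counts = Counter(t for d in class_docs for t in d)
        total = sum(counts.values())
        likelihoods[c] = {
            t: math.log((counts[t] + alpha) / (total + alpha * len(vocab)))
            for t in vocab
        }
    return priors, likelihoods

def predict_mnb(doc, priors, likelihoods):
    """Pick the class maximising log-prior + sum of token log-likelihoods."""
    def score(c):
        return priors[c] + sum(likelihoods[c].get(t, 0.0) for t in doc)
    return max(priors, key=score)
```

In practice one would call `sklearn.naive_bayes.MultinomialNB` on count or TF-IDF features rather than hand-rolling this, but the decision rule is the same.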