With the assistance of machine learning, difficult tasks can be completed autonomously. In a smart grid (SG), computers and mobile devices can make it easier to control interior temperature, monitor security, and perform routine maintenance. The Internet of Things (IoT) connects the various components of smart buildings, and as the IoT concept spreads, SGs are being integrated into larger networks. The IoT is an important part of SGs because it provides services that improve everyday life, and existing building support systems have been shown to operate safely and effectively. The primary goal of this research is to determine the motivation for IoT device installation in smart buildings and the grid. From this vantage point, the infrastructure that supports IoT devices and the components that comprise them are critical. Remote configuration of smart grid monitoring systems can improve the security and comfort of building occupants. Sensors are required to operate and monitor everything from consumer electronics to SGs, and network-connected devices should consume less energy and be remotely monitorable. The authors' goal is to aid in the development of solutions based on AI, IoT, and SGs; they also investigate networking, machine intelligence, and SGs, and examine prior research on SGs and the IoT, including several IoT platform components that remain subject to debate. The first section of this paper discusses the most common machine learning methods for forecasting building energy demand. The authors then discuss the IoT and how it works, in addition to the SG and the smart meters required for receiving real-time energy data. Finally, they investigate how the various SG, IoT, and ML components integrate and operate using a simple layered architecture whose entities communicate with one another via connections.
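As a hedged illustration of the forecasting topic above, the sketch below fits a random forest regressor to synthetic hourly building-load data. The features (outdoor temperature, hour of day, occupancy) and the toy demand model are assumptions for demonstration only, not data or methods from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic hourly records (illustrative, not real smart-meter data).
n = 2000
temp = rng.uniform(-5, 35, n)        # outdoor temperature, degrees C
hour = rng.integers(0, 24, n)        # hour of day
occupancy = rng.integers(0, 50, n)   # number of occupants

# Toy demand model: HVAC load grows with |temp - 21| and occupancy.
demand = 10 + 0.8 * np.abs(temp - 21) + 0.3 * occupancy + rng.normal(0, 1, n)

X = np.column_stack([temp, hour, occupancy])
X_train, X_test, y_train, y_test = train_test_split(X, demand, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"hold-out MAE: {mae:.2f} kWh")
```

Any regressor would fit this skeleton; a random forest is shown only because tree ensembles are among the most common choices for building-load forecasting.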
More adaptable and user-independent techniques are required for multi-sensor-based daily locomotion detection (MS-DLD). This study proposes two locomotion detection methods that use body-worn multi-sensors to categorize several locomotion transitions, including falling, walking, jogging, and jumping, along with body-specific sensors based on a modified hidden Markov model (HMM) approach. Both a conventional and a state-of-the-art method for MS-DLD are presented. To improve the conventional MS-DLD process, the proposed methodology consists of a wavelet-transformed Quaternion-based filter for the inertial signals, pattern recognition in the form of kinematic and static energies, and multi-feature extraction. These features span the entropy, spectral, and cepstral-coefficient domains. Fuzzy logic-based optimization is then introduced to select features by converting them into codewords. This paper also introduces a state-of-the-art way to model daily locomotion detection by deriving body-specific modified HMMs. The model divides the sensor data into three active body-specific parts: head sensors, mid-body sensors, and lower-body sensors. The body-specific modified HMMs were fed raw data from the three active body-specific sensor groups and gave better results with lower computational complexity than the conventional methods. The proposed systems have been experimentally evaluated on three diverse publicly available datasets: the UP-Fall dataset, consisting of falls and other daily life activities; the IM-WSHA dataset, comprising everyday locomotion actions; and the ENABL3S gait and locomotion dataset, consisting of multiple gait movements.
Experimental outcomes indicate that the proposed conventional technique outperformed existing systems, with detection accuracies of 90.0% and 87.5% on UP-Fall, 86.0% and 88.3% on IM-WSHA, and 86.7% and 90.0% on ENABL3S for kinematic and static energy patterns, respectively. Further, the state-of-the-art body-specific modified HMM method achieved 94.3% and 95.0% on UP-Fall, 92.0% and 93.3% on IM-WSHA, and 90.0% and 95.0% on ENABL3S for kinematic and static patterned signals, respectively. The state-of-the-art system thus shows a significant increase in detection accuracy over the standard systems.
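As an illustration of the feature domains named in the abstract above (entropy and cepstral coefficients), the sketch below computes a spectral entropy and a real cepstrum for one inertial-signal window. The window length and the choice of the first five cepstral coefficients are assumptions for demonstration, not the paper's exact pipeline.

```python
import numpy as np

def window_features(signal, eps=1e-12):
    """Illustrative spectral-entropy and cepstral features for one
    inertial-signal window (not the paper's exact feature set)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    p = spectrum / (spectrum.sum() + eps)              # normalized power
    spectral_entropy = -np.sum(p * np.log2(p + eps))   # Shannon entropy (bits)
    # Real cepstrum: inverse FFT of the log power spectrum.
    cepstrum = np.fft.irfft(np.log(spectrum + eps))
    return spectral_entropy, cepstrum[:5]              # first 5 coefficients

rng = np.random.default_rng(1)
# A noisy 3 Hz oscillation standing in for one accelerometer window.
win = np.sin(2 * np.pi * 3 * np.linspace(0, 1, 128)) + 0.1 * rng.normal(size=128)
ent, ceps = window_features(win)
print(ent, ceps.shape)
```

In a full pipeline these per-window features would be concatenated across sensors and axes before the fuzzy codeword optimization step described in the abstract.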
Due to the recently increased requirements of e-learning systems, many educational institutions, including kindergartens, have shifted toward virtual education. Automated recognition of student health exercises is a difficult but important task, given the physical-education needs of young learners in particular. The proposed system implements student health exercise recognition (SHER) using a modified Quaternion-based filter for inertial data refining and data fusion as pre-processing steps. The cleansed data is then segmented using an overlapping windowing approach, followed by pattern identification in the form of static and kinematic signal patterns. These patterns are used to extract cues for both patterned signals, which are further optimized using Fisher's linear discriminant analysis (FLDA). Finally, the physical exercise activities are categorized using extended Kalman filter (EKF)-based neural networks. This system can be deployed in multiple educational settings, including intelligent training systems, virtual mentors, smart simulations, and interactive learning management methods.
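The overlapping-windowing segmentation step above can be sketched as follows; the window length and overlap are illustrative values, not the paper's parameters.

```python
import numpy as np

def sliding_windows(data, win_len, overlap):
    """Segment a 1-D signal into overlapping windows.

    win_len and overlap are sample counts; values here are
    illustrative, not the paper's actual settings.
    """
    step = win_len - overlap
    n = (len(data) - overlap) // step
    return np.stack([data[i * step : i * step + win_len] for i in range(n)])

signal = np.arange(100)                                 # stand-in inertial stream
wins = sliding_windows(signal, win_len=20, overlap=10)  # 50% overlap
print(wins.shape)  # (9, 20)
```

Each row of the result would then be passed to the pattern-identification and FLDA stages described in the abstract.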
In this research work, an efficient sign language recognition tool for e-learning is proposed with a new type of feature set based on angles and lines. This feature set efficiently improves the overall performance of machine learning classifiers. Hand gesture recognition based on these features has been implemented for real-time use. The feature set uses hand landmarks generated with MediaPipe and OpenCV on each frame of the incoming video. The overall algorithm has been tested on two well-known sign language datasets: ASL-alphabet (American Sign Language) and ISL-HS (Irish Sign Language). Several machine learning classifiers, including random forest, decision tree, and naïve Bayes, have been used to classify hand gestures with this feature set, and their results have been compared. Since the random forest classifier performed best, it was selected as the base classifier for the proposed system, achieving 96.7% accuracy on ISL-HS and 93.7% accuracy on the ASL-alphabet dataset using the extracted features.
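A minimal sketch of the angle-based feature idea, assuming MediaPipe-style (21, 2) hand landmarks: the specific angle definition below (angles between consecutive landmark segments) is a guess for illustration, and synthetic landmarks stand in for real video frames.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def angle_features(landmarks):
    """Angles between consecutive landmark segments; `landmarks` is a
    (21, 2) array as produced by MediaPipe Hands. Illustrative feature
    definition, not the paper's exact angle/line feature set."""
    vecs = np.diff(landmarks, axis=0)         # (20, 2) segment vectors
    ang = np.arctan2(vecs[:, 1], vecs[:, 0])  # segment orientations
    return np.diff(ang)                       # (19,) inter-segment angles

rng = np.random.default_rng(2)
# Two synthetic "gestures": fixed landmark layouts plus small jitter.
base_a = rng.uniform(0, 1, (21, 2))
base_b = rng.uniform(0, 1, (21, 2))
X = np.array([angle_features(b + 0.01 * rng.normal(size=(21, 2)))
              for b in [base_a] * 50 + [base_b] * 50])
y = np.array([0] * 50 + [1] * 50)

# Interleaved train/test split keeps both classes in each half.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])
print(f"hold-out accuracy: {acc:.2f}")
```

Angle-based features of this kind are translation- and scale-invariant, which is one plausible reason such a feature set works well across signers and camera setups.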