Traditional indoor human activity recognition (HAR) is a time-series classification problem that requires feature extraction. HAR has recently attracted considerable attention because of its many real-time applications, such as surveillance by authorities, biometric user identification, and health monitoring of older people. The widespread use of the Internet of Things (IoT) and wearable sensor devices has made HAR a vital topic in ubiquitous and mobile computing. Deep learning (DL) has recently become the most commonly used inference and problem-solving technique in HAR systems. This study develops a Modified Wild Horse Optimization with DL Aided Symmetric Human Activity Recognition (MWHODL-SHAR) model. The main goal of the MWHODL-SHAR model is to recognize symmetric activities such as jogging, walking, standing, and sitting. In the presented MWHODL-SHAR technique, the human activity data are pre-processed in several stages to make them suitable for further processing. A convolutional neural network with an attention-based long short-term memory (CNN-ALSTM) model is applied for activity recognition, and the MWHO algorithm is used as a hyperparameter tuning strategy to improve the detection rate of the CNN-ALSTM model. The MWHODL-SHAR technique is validated by simulation on a benchmark dataset, and an extensive comparison study shows that it outperforms other recent approaches.
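As a rough illustration of the CNN-ALSTM component described above, the following Python sketch (using TensorFlow/Keras, which the abstract does not specify) stacks 1-D convolutions, an LSTM, and a simple additive attention layer over windowed sensor data. The window length, channel count, class count, and layer sizes are placeholder assumptions, and the MWHO hyperparameter tuning step is not shown.

```python
# Minimal CNN + attention-LSTM sketch for windowed wearable-sensor HAR.
# Shapes (128 timesteps, 3 accelerometer channels, 6 classes) are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_alstm(timesteps=128, channels=3, num_classes=6):
    inputs = layers.Input(shape=(timesteps, channels))
    # 1-D convolutions extract local motion features from the raw window.
    x = layers.Conv1D(64, kernel_size=5, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Conv1D(128, kernel_size=3, padding="same", activation="relu")(x)
    # The LSTM keeps per-step hidden states so attention can weight them.
    h = layers.LSTM(64, return_sequences=True)(x)
    # Simple additive attention: score each timestep, softmax, weighted sum.
    scores = layers.Dense(1, activation="tanh")(h)
    weights = layers.Softmax(axis=1)(scores)
    context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, weights])
    outputs = layers.Dense(num_classes, activation="softmax")(context)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_alstm()
model.summary()
```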
Solar energy is a promising alternative to fossil fuels because it is clean and renewable. The photovoltaic (PV) mechanism converts sunlight into green energy without noise or pollution. PV systems are simple, rarely malfunction, and are easy to install, and PV energy production contributes significantly to smart grids through many small PV installations. Precise solar radiation (SR) prediction can substantially reduce the impact and cost of developing solar energy. In recent times, several SR prediction models have been formulated, including artificial neural networks (ANN), autoregressive moving average models, and support vector machines (SVM). This article therefore develops an Optimal Modified Bidirectional Gated Recurrent Unit Driven Solar Radiation Prediction (OMBGRU-SRP) model for energy management. The presented OMBGRU-SRP technique mainly aims to provide accurate and timely SR prediction. To accomplish this, the technique first performs data preprocessing to normalize the solar data. Next, the MBGRU model is derived from a BGRU with an attention mechanism and skip connections. Finally, the hyperparameters of the MBGRU model are tuned using the satin bowerbird optimization (SBO) algorithm, an intelligent optimization algorithm that simulates the breeding behaviour of the adult male satin bowerbird in the wild, to attain maximum prediction accuracy with minimum error. Many experiments were conducted to demonstrate the enhanced SR prediction performance, and the experimental values highlight the superiority of the OMBGRU-SRP algorithm over other existing models.
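The bidirectional GRU with attention and a skip connection described above might look roughly like the following Keras sketch. All layer widths, the sequence length, and the feature count are assumptions for illustration, and the SBO hyperparameter tuning stage is omitted.

```python
# Minimal bidirectional-GRU sketch with attention and a skip connection for
# sequence-to-one solar-radiation forecasting; layer sizes are placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_mbgru(timesteps=24, features=4):
    inputs = layers.Input(shape=(timesteps, features))
    # Two stacked bidirectional GRUs; the first output is reused as a skip path.
    g1 = layers.Bidirectional(layers.GRU(32, return_sequences=True))(inputs)
    g2 = layers.Bidirectional(layers.GRU(32, return_sequences=True))(g1)
    x = layers.Add()([g1, g2])                      # skip connection
    # Additive attention pooling over timesteps.
    scores = layers.Dense(1, activation="tanh")(x)
    weights = layers.Softmax(axis=1)(scores)
    context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])
    output = layers.Dense(1)(context)               # predicted solar radiation
    model = models.Model(inputs, output)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

model = build_mbgru()
model.summary()
```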
The term "Real-Time Operating System (RTOS)" refers to systems wherein the time component is critical. For example, one or more of a computer's peripheral devices send a signal, and the computer must respond appropriately within a specified period of time. Examples include: the monitoring system in a hospital care unit, the autopilot in the aircraft, and the safety control system in the nuclear reactor. Scheduling is a method that ensures that jobs are performed at certain times. In the real-time systems, accuracy does not only rely on the outcomes of calculation, and also on the time it takes to provide the results. It must be completed within the specified time frame. The scheduling strategy is crucial in any real-time system, which is required to prevent overlapping execution in the system. The paper review classifies several previews works on many characteristics. Also, strategies utilized for scheduling in real time are examined and their features compared.
Mobile malware is malicious software that targets mobile phones or wireless-enabled personal digital assistants (PDAs), causing system collapse and the loss or leakage of confidential information. As wireless phone and PDA networks have become more common and more complex, it has become increasingly difficult to protect them against electronic attacks in the form of viruses or other malware. Android is now the world's most popular OS, and more and more malware attacks target Android applications, although many security detection techniques for Android apps are now available. Android applications are developing rapidly across the mobile ecosystem, but Android malware is also emerging in an endless stream. Many researchers have studied the problem of Android malware detection and have put forward theories and methods from different perspectives, and existing research suggests that machine learning is an effective and promising way to detect Android malware; several reviews have already surveyed different issues related to machine learning-based Android malware detection. The openness of the Android environment has given it extensive appeal in recent years, and as mobile devices grow in number they are incorporated into many aspects of our everyday lives. In today's digital world, most anti-malware tools are signature-based, which is ineffective against advanced, unknown malware. Android, the most prevalent operating system (OS), has enjoyed immense popularity on smartphones over the past few years; seizing this opportunity, cybercrime occurs in the form of piracy and malware. Traditional detection does not suffice to combat newly created advanced malware, so smart malware detection systems are needed to reduce the risk of malicious activity. The present paper includes a thorough comparison that summarizes and analyses the various detection techniques.
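A common machine-learning pipeline in this literature represents each app by the permissions it requests and trains a classifier on labelled samples. The sketch below illustrates that idea with scikit-learn; the permission lists and labels are fabricated examples, not real app data, and real systems use far richer static and dynamic features.

```python
# Toy permission-based Android malware classifier (illustrative data only).
from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Each app is represented by the permissions it requests (1 = requested).
apps = [
    {"SEND_SMS": 1, "READ_CONTACTS": 1, "INTERNET": 1},   # malware-like
    {"INTERNET": 1, "ACCESS_FINE_LOCATION": 1},           # benign-like
    {"SEND_SMS": 1, "RECEIVE_BOOT_COMPLETED": 1},         # malware-like
    {"INTERNET": 1, "CAMERA": 1},                         # benign-like
]
labels = [1, 0, 1, 0]   # 1 = malicious, 0 = benign

clf = make_pipeline(DictVectorizer(sparse=False),
                    RandomForestClassifier(n_estimators=50, random_state=0))
clf.fit(apps, labels)

unknown = {"SEND_SMS": 1, "INTERNET": 1}
print("predicted label:", clf.predict([unknown])[0])
```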
The use of technology has grown dramatically, and computer systems are now interconnected via various communication media. The use of distributed systems (DS) in our daily activities has only increased with the distribution of data. This is because distributed systems allow nodes to organize and share their resources across linked systems or devices, giving people access to geographically distributed computing capacity. Because failures can occur at multiple points, distributed systems may suffer a loss of service availability; fault tolerance (FT) techniques are applied in distributed systems to avoid such failures by ensuring replication, high redundancy, and high availability of distributed services. This paper introduces fault tolerance systems and their requirements, explains distributed systems and their architecture, describes the techniques used for fault tolerance, reviews some recent literature on fault tolerance in distributed systems, and finally discusses and compares that literature.
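As a small illustration of replication and redundancy as fault-tolerance techniques, the Python sketch below retries a request against a set of replicated service nodes until one responds. The node names, failure behaviour, and retry policy are invented for the example; production systems would add timeouts, health checks, and consistency handling.

```python
# Minimal client-side failover sketch over replicated service nodes.
class Replica:
    def __init__(self, name, alive=True):
        self.name, self.alive = name, alive

    def handle(self, request):
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} served {request!r}"

def call_with_failover(replicas, request, retries_per_node=2):
    """Try each replica in turn, retrying failures, until one responds."""
    for node in replicas:
        for _ in range(retries_per_node):
            try:
                return node.handle(request)
            except ConnectionError:
                continue        # failed attempt: retry this node, then move on
    raise RuntimeError("all replicas failed")

cluster = [Replica("node-A", alive=False), Replica("node-B"), Replica("node-C")]
print(call_with_failover(cluster, "GET /status"))   # served by node-B
```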