Visible light communication is an emerging technology that supports high-speed data transfer for wireless communication systems. However, the performance of a visible light communication system is impaired by inter-symbol interference, the time-dispersive nature of the channel, and the nonlinear characteristics of the light-emitting diode, all of which significantly degrade bit error rate performance. On the other hand, many environments offer a rich infrastructure of light sources that can be exploited for end-to-end communication. In this paper, an effective routing protocol based on a modified grasshopper optimization algorithm is proposed to reduce communication interruptions and to provide alternative routes in the network without requiring prior topology knowledge. The proposed routing protocol is implemented and analyzed in the MATLAB environment. The experimental results show that the proposed protocol adapts to dynamic changes in the communication network, such as obstacles and shadows, and therefore achieves better data-transmission performance in terms of throughput, packet delivery ratio, end-to-end delay, and routing overhead. In addition, the performance is analyzed by varying the number of nodes (50, 100, 250, and 500). In the experimental analysis, the proposed routing protocol achieved a maximum improvement of 16.69% and a minimum improvement of 2.20% in packet delivery ratio, and reduced end-to-end delay by 0.80 ms compared to existing optimization algorithms.
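The optimizer underlying the protocol can be sketched in its standard continuous form. The following is a minimal illustrative implementation of the basic grasshopper optimization loop; the objective function, bounds, and coefficients below are assumptions for demonstration, not the paper's routing cost or its modified variant.

```python
import numpy as np

def s(r, f=0.5, l=1.5):
    # Social-forces function from the standard GOA formulation:
    # attraction at medium range, repulsion at short range.
    return f * np.exp(-r / l) - np.exp(-r)

def goa(obj, dim=2, n=20, iters=200, lb=-10.0, ub=10.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))          # swarm of candidate solutions
    best = min(X, key=obj).copy()
    for t in range(iters):
        # Comfort-zone coefficient shrinks linearly to balance
        # exploration (early) against exploitation (late).
        c = 1.0 - t * (1.0 - 1e-5) / iters
        X_new = np.empty_like(X)
        for i in range(n):
            social = np.zeros(dim)
            for j in range(n):
                if i == j:
                    continue
                d = np.linalg.norm(X[j] - X[i]) + 1e-12
                social += c * (ub - lb) / 2 * s(d) * (X[j] - X[i]) / d
            # New position: social interaction anchored at the best solution.
            X_new[i] = np.clip(c * social + best, lb, ub)
        X = X_new
        cand = min(X, key=obj)
        if obj(cand) < obj(best):
            best = cand.copy()
    return best

# Toy objective (sphere function) standing in for a route-cost metric.
best = goa(lambda x: float(np.sum(x ** 2)))
```

In a routing setting, each position would encode a candidate route and the objective would score it by interruption risk and hop cost; that mapping is the paper's contribution and is not reproduced here.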
Wireless Sensor Networks (WSNs) are becoming increasingly popular and are driving significant advances in wireless communication thanks to low-cost, low-power sensors. However, since WSN nodes are battery-powered, they lose their autonomy after a certain time, and this energy restriction limits the network's lifetime. Clustering can extend the lifetime of a network while also lowering energy use: several nearby sensors are grouped so that their data can be collected and delivered to the Base Station (BS). The Cluster Head (CH) consumes the most energy when collecting and transferring data, so efficient CH selection can minimize energy consumption and extend the life of the WSN. Designing a routing algorithm that addresses the key challenges of lowering energy usage and maximizing network lifetime remains difficult. This paper presents an energy-efficient clustering routing protocol based on a hybrid Mayfly–Aquila optimization (MFA-AOA) algorithm to address these critical issues in WSNs. The Mayfly algorithm is employed to choose an optimal CH from a collection of nodes, while the Aquila optimization algorithm identifies the optimum route between the CH and the BS. The simulation results show that the proposed methodology reduced energy consumption by 10.22%, 11.26%, and 14.28%, and improved normalized energy by 9.56%, 11.78%, and 13.76%, compared with the existing state-of-the-art approaches.
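The CH-selection step typically scores each node by residual energy and distance to the BS. The sketch below shows such a fitness function in its simplest weighted form; the weights, field geometry, and node counts are illustrative assumptions, not the Mayfly algorithm or the paper's actual fitness.

```python
import numpy as np

def ch_fitness(energy, dist_bs, w_e=0.6, w_d=0.4):
    # Higher residual energy and shorter distance to the BS make a node a
    # better CH candidate. The weights w_e / w_d are assumed for illustration.
    e = energy / energy.max()        # normalized residual energy
    d = dist_bs / dist_bs.max()      # normalized distance to BS
    return w_e * e + w_d * (1.0 - d)

rng = np.random.default_rng(1)
energy = rng.uniform(0.1, 1.0, 50)   # residual energy of 50 nodes (joules)
pos = rng.uniform(0, 100, (50, 2))   # node coordinates in a 100 m field
bs = np.array([50.0, 150.0])         # base station placed outside the field
dist = np.linalg.norm(pos - bs, axis=1)
ch = int(np.argmax(ch_fitness(energy, dist)))   # index of the selected CH
```

A metaheuristic such as Mayfly would search over cluster assignments using a fitness of this general shape rather than a single greedy argmax.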
In data mining, classification techniques are used to predict group membership for data instances. These techniques can process a wide variety of data, and their output is easily interpreted. The aim of any classification algorithm is the design of a standard model from the given input; the resulting model can then be deployed to classify new examples or to enable a better understanding of the available data. Medical data classification is the process of transforming descriptions of medical diagnoses and procedures to uncover hidden information. Two experiments are performed to assess the prediction accuracy for Cardiovascular Disease (CVD). This paper proposes a hybrid classification approach that combines the results of an associative classifier and an artificial neural network (MLP). In the first experiment, the associative classifier takes the 13 independent attributes as input and identifies the key attributes that contribute most to the decision; classification using a Multilayer Perceptron (MLP) is also performed on all attributes to establish the baseline prediction accuracy. In the second experiment, the key attributes identified by the associative classifier are used as inputs to a feed-forward neural network for predicting the presence or absence of CVD.
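The two-stage pipeline reduces the 13 attributes to the key subset before the MLP stage. A minimal sketch of that flow, assuming hypothetical key-attribute indices and random weights (a real system would train the network; only the forward pass and feature reduction are shown):

```python
import numpy as np

def mlp_predict(X, W1, b1, W2, b2):
    # Single-hidden-layer feed-forward pass with sigmoid activations,
    # mirroring the second-stage MLP that consumes only the key attributes.
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return (out >= 0.5).astype(int)   # 1 = CVD present, 0 = absent

rng = np.random.default_rng(0)
# Hypothetical: suppose the associative classifier kept 5 of the 13 attributes.
key_idx = [0, 2, 4, 8, 12]
X_full = rng.normal(size=(10, 13))   # 10 patient records, 13 attributes
X_key = X_full[:, key_idx]           # stage 1: reduced feature set
W1 = rng.normal(size=(5, 8)); b1 = np.zeros(8)   # untrained toy weights
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
pred = mlp_predict(X_key, W1, b1, W2, b2)
```

The indices in `key_idx` and the layer sizes are placeholders; the paper's associative classifier determines the actual key attributes.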
Nowadays, the Long-Term Evolution-Advanced system is widely used to provide 5G communication due to its improved network capacity and lower communication delay. The main issues in the 5G network are insufficient user resources and burst errors, which cause losses in data transmission. To overcome this, effective Radio Resource Management (RRM) needs to be developed for the 5G network. In this paper, a Long Short-Term Memory (LSTM) network is proposed for radio resource management in the 5G network. The proposed LSTM-RRM assigns adequate power and bandwidth to the desired user equipment of the network, and Grid Search Optimization (GSO) is used to identify the optimal hyperparameter values for the LSTM. In radio resource management, a request queue is used to avoid unwanted resource allocation in the network, and losses during transmission are minimized by frequency interleaving and guard-level insertion. The performance of the LSTM-RRM method is analyzed in terms of throughput, outage percentage, dual connectivity, User Sum Rate (USR), Threshold Sum Rate (TSR), Outdoor Sum Rate (OSR), threshold guaranteed rate, indoor guaranteed rate, and outdoor guaranteed rate. The indoor guaranteed rate of LSTM-RRM for a building distance of 1400 m improved by up to 75.38% compared to the existing QOC-RRM.
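Grid search exhaustively evaluates every hyperparameter combination and keeps the best-scoring one. A minimal sketch of that loop follows; the hyperparameter names, grid values, and scoring function are illustrative assumptions (the real system would train and validate an LSTM per combination):

```python
import itertools

def grid_search(param_grid, evaluate):
    # Exhaustive search over the Cartesian product of the grid; the paper's
    # GSO stage would score each combination by validation performance of
    # the trained LSTM-RRM model.
    best_score, best_params = float("-inf"), None
    for combo in itertools.product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), combo))
        score = evaluate(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Hypothetical LSTM hyperparameters; values are placeholders.
grid = {"hidden_units": [32, 64, 128],
        "learning_rate": [1e-3, 1e-2],
        "epochs": [50, 100]}
# Stand-in for the validation score of the trained model.
fake_eval = lambda p: p["hidden_units"] / 128 - abs(p["learning_rate"] - 1e-3) * 10
best, score = grid_search(grid, fake_eval)
```

Grid search is simple and reproducible but scales exponentially in the number of hyperparameters, which is why the grid above is kept small.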
Nowadays, large volumes of digital data are transmitted worldwide over wireless communications, so data security is a significant concern for preventing cybercrime and information loss. The Advanced Encryption Standard (AES) is a highly efficient secure mechanism that outperforms other symmetric-key cryptographic algorithms in terms of message secrecy. Although AES is efficient in both software and hardware implementations, numerous modifications have been made to the conventional AES architecture to further improve performance. This research article proposes a significant modification to the key expansion section of the AES architecture to increase the speed of subkey generation. A fork–join model of key expansion (FJMKE) architecture is developed to accelerate the subkey generation process, while the hardware resources of AES are minimized by avoiding frequent recomputation of secret keys. The AES-FJMKE architecture generates all of the required subkeys in less than half the time required by the conventional architecture. The proposed AES-FJMKE architecture is designed and simulated using the Xilinx ISE 5.1 software. Its behaviour on Field Programmable Gate Arrays (FPGAs) is analysed in terms of hardware resource counts, delay, and operating frequency. Existing AES architectures, namely typical AES, AES-PNSG, AES-AT, AES-BE, ISAES, AES-RS, and AES-MPPRM, are used to evaluate the efficiency of AES-FJMKE. Implemented on a Spartan-6 FPGA, AES-FJMKE used fewer slices (76) than AES-RS.
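The fork–join execution model itself is simple: independent tasks are forked, and the parent joins (blocks) until all results are available. The sketch below shows only this pattern in software; the FJMKE architecture applies it in hardware to overlap independent parts of the subkey schedule, and the per-round "work" here is a labeled placeholder, not the AES arithmetic.

```python
from concurrent.futures import ThreadPoolExecutor

def fork_join(tasks, workers=4):
    # Fork: submit every task to the pool.
    # Join: block on each future until all results are collected, in order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(fn, *args) for fn, args in tasks]
        return [f.result() for f in futures]

# Toy stand-in for per-round subkey derivation (a real AES key schedule
# XORs, rotates, and substitutes 32-bit words round by round).
derive = lambda r: f"subkey-{r}"
subkeys = fork_join([(derive, (r,)) for r in range(1, 11)])  # 10 AES-128 rounds
```

Note that the standard AES key schedule has sequential data dependencies between rounds, so the speedup claimed for FJMKE comes from restructuring which computations are independent; this sketch does not reproduce that restructuring.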