In this paper, a Deep Neural Network (DNN) combined with the Bat Algorithm (BA) offers a dynamic form of traffic control in Vehicular Ad Hoc Networks (VANETs). The former routes vehicles around highly congested paths to improve efficiency with lower average latency. The latter, combined with the Internet of Things (IoT), moves across the VANET to analyze the traffic-congestion status between network nodes. The experimental analysis tests the effectiveness of DNN-IoT-BA against various machine- and deep-learning algorithms in VANETs. DNN-IoT-BA is validated through network metrics such as packet delivery ratio, latency and packet error rate. The simulation results show that the proposed method provides lower energy consumption and latency than conventional methods, supporting real-time traffic conditions.
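The Bat Algorithm referenced above is a standard metaheuristic (Yang, 2010). A minimal sketch of it is shown below, minimizing a toy congestion-cost function; the cost function, parameter values and all variable names are illustrative assumptions, not the paper's actual routing objective.

```python
import math
import random

def bat_algorithm(objective, dim=2, n_bats=20, n_iter=200,
                  fmin=0.0, fmax=2.0, alpha=0.9, gamma=0.9,
                  lower=-5.0, upper=5.0, seed=0):
    """Minimise `objective` with a basic Bat Algorithm."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lower, upper) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    loud = [1.0] * n_bats          # loudness A_i
    rate = [0.5] * n_bats          # pulse emission rate r_i
    fit = [objective(p) for p in pos]
    best = min(range(n_bats), key=lambda i: fit[i])
    best_pos, best_fit = pos[best][:], fit[best]

    for t in range(n_iter):
        for i in range(n_bats):
            # Global move: frequency-tuned velocity update toward the best bat
            freq = fmin + (fmax - fmin) * rng.random()
            vel[i] = [v + (x - b) * freq for v, x, b in zip(vel[i], pos[i], best_pos)]
            cand = [min(max(x + v, lower), upper) for x, v in zip(pos[i], vel[i])]
            if rng.random() > rate[i]:
                # Local random walk around the current best solution
                avg_loud = sum(loud) / n_bats
                cand = [min(max(b + 0.01 * avg_loud * rng.gauss(0, 1), lower), upper)
                        for b in best_pos]
            f_cand = objective(cand)
            if f_cand <= fit[i] and rng.random() < loud[i]:
                pos[i], fit[i] = cand, f_cand
                loud[i] *= alpha                       # bat gets quieter
                rate[i] = 0.5 * (1 - math.exp(-gamma * t))  # and pulses faster
            if fit[i] < best_fit:
                best_pos, best_fit = pos[i][:], fit[i]
    return best_pos, best_fit

# Toy congestion cost with its minimum at the origin
pos, cost = bat_algorithm(lambda x: sum(v * v for v in x))
```

In a routing context such as DNN-IoT-BA, the objective would instead score candidate paths by congestion, latency and energy metrics reported by the IoT nodes.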
Computational methods for machine learning (ML) have demonstrated their value in projecting potential outcomes for informed decision-making. ML algorithms have long been applied in applications requiring the detection of adverse risk factors. This study demonstrates the ability of ML modelling to predict the number of individuals affected by COVID-19 [1], a potential threat to human beings. In this analysis, four standard forecasting models were applied to COVID-19 risk factors: Linear Regression (LR), the Least Absolute Shrinkage and Selection Operator (LASSO), Support Vector Machine (SVM) and Exponential Smoothing (ES) [2]. Each of these machine-learning models produces three distinct kinds of predictions: the number of newly infected COVID-19 cases, mortality rates and recovery estimates over the next 10 days. The findings of the analysis show that these approaches are well suited to the latest COVID-19 situation, with LR proving effective in predicting new cases, death counts and recoveries.
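Of the four models above, linear regression is the simplest to illustrate. The sketch below fits a trend line to a synthetic daily case series and extrapolates 10 days ahead; the data, slope and noise level are invented for illustration and bear no relation to the study's actual dataset.

```python
import numpy as np

# Hypothetical daily counts of newly confirmed cases (synthetic)
days = np.arange(30)
cases = 50 + 12.0 * days + np.random.default_rng(0).normal(0, 20, 30)

# Fit cases = a*day + b by least squares, then extrapolate 10 days ahead
a, b = np.polyfit(days, cases, deg=1)
future = np.arange(30, 40)
forecast = a * future + b
```

The same design applies to the mortality and recovery series: fit each target separately, then forecast the next 10-day window.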
With new telecommunications engineering applications, the cognitive radio (CR) network-based Internet of Things (IoT) resolves the bandwidth and spectrum problems. However, CR-IoT routing methods sometimes present issues in path finding, spectrum-resource diversity and mobility. This study presents an upgradable cross-layer routing protocol based on CR-IoT to improve routing efficiency and optimize data transmission in a reconfigurable network. In this context, the system develops a distributed controller designed with multiple activities, including load balancing, neighbourhood sensing and machine-learning-based path construction. The proposed approach is driven by network traffic and load, along with other network metrics including energy efficiency, network capacity and interference, averaging 2 bps/Hz/W. Trials against conventional models demonstrate the residual energy, resource scalability and robustness of the reconfigurable CR-IoT.
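One way such a controller can weigh energy efficiency against interference during path construction is shortest-path search over a composite link cost. The sketch below is an assumption-laden illustration, not the paper's protocol: the graph, the two metrics per link and the weighting are all hypothetical.

```python
import heapq

def best_path(graph, src, dst, w_energy=0.5, w_interf=0.5):
    """Dijkstra over links annotated with (energy_cost, interference);
    the composite link cost is a weighted sum of the two metrics."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue
        for v, (energy, interf) in graph.get(u, {}).items():
            nd = d + w_energy * energy + w_interf * interf
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the chosen route
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

links = {                      # node -> {neighbour: (energy, interference)}
    'A': {'B': (1.0, 0.2), 'C': (0.5, 0.9)},
    'B': {'D': (0.4, 0.1)},
    'C': {'D': (0.3, 0.8)},
}
path, cost = best_path(links, 'A', 'D')
```

Adjusting `w_energy` and `w_interf` lets the controller rebalance routing as network conditions change, which is the essence of a reconfigurable cross-layer design.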
In recent times, utility and privacy have become trade-off factors: improving one tends to sacrifice the other. A dataset therefore cannot be published without privacy protection, and it is crucial to maintain an equilibrium between the utility and privacy of data. In this paper, a novel technique for trading off utility against privacy is developed, where the former is handled with a metaheuristic algorithm and the latter with a cryptographic model. Utility is provided by a clustering process, while the privacy model encrypts and decrypts the data. The input datasets are first clustered, and after clustering, the privacy of the data is maintained. The simulation is conducted on manufacturing datasets against various existing models. The results show that the proposed model achieves better clustering accuracy and data privacy than the existing models, demonstrating a trade-off between privacy preservation and utility-oriented clustering in smart manufacturing datasets.
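The cluster-then-encrypt pipeline described above can be sketched as follows. This is a toy stand-in, not the paper's method: the 1-D k-means replaces the unspecified metaheuristic clustering, and the SHA-256-counter XOR keystream is an illustrative toy cipher, not a production cryptographic model (a real system would use a vetted cipher such as AES-GCM).

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Toy SHA-256-counter keystream -- illustrative only, not production crypto."""
    out, counter = b'', 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, 'big')).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt   # an XOR stream cipher is its own inverse

def kmeans_1d(values, k=2, n_iter=20):
    """Minimal 1-D k-means for the utility (clustering) step."""
    centers = sorted(values)[:k]
    for _ in range(n_iter):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda j: abs(v - centers[j]))].append(v)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers, groups

# Utility step: cluster a toy sensor-reading column from a manufacturing dataset
centers, groups = kmeans_1d([1.0, 1.2, 0.9, 8.0, 8.3, 7.8], k=2)

# Privacy step: encrypt the serialized clusters before publishing
key = secrets.token_bytes(32)
blob = encrypt(key, repr(groups).encode())
assert decrypt(key, blob) == repr(groups).encode()
```

The equilibrium the paper targets lies in how much structure (cluster assignments) is exposed for analysis versus how much of the raw data stays encrypted.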
In recent years, speech recognition technology has become increasingly common. Speech quality and intelligibility are critical for the convenience and accuracy of information transmission in speech recognition. The speech processing systems used to converse or store speech are usually designed for an environment without any background noise. However, in a real-world atmosphere, interference in the form of background noise and channel noise drastically reduces the performance of speech recognition systems, resulting in imprecise information transfer and fatiguing the listener. When communication systems' input or output signals are affected by noise, speech enhancement techniques try to improve their performance. To ensure the correctness of the text produced from speech, it is necessary to reduce the external noises involved in the speech audio. Reducing the external noise in audio is difficult because the speech may consist of single, continuous or spontaneous words. In automatic speech recognition, various typical speech enhancement algorithms have gained considerable attention. However, these enhancement algorithms work well only on simple and continuous audio signals. Thus, in this study, a hybridized speech recognition algorithm is proposed to enhance speech recognition accuracy. Non-linear spectral subtraction, a well-known speech enhancement algorithm, is optimized with the Hidden Markov Model and tested with 6660 medical speech transcription audio files and 1440 Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) audio files. The performance of the proposed model is compared with those of various typical speech enhancement algorithms, such as the iterative signal enhancement algorithm, subspace-based speech enhancement, and non-linear spectral subtraction. The proposed cascaded hybrid algorithm was found to achieve a minimum word error rate of 9.5% and 7.6% for medical speech and RAVDESS speech, respectively.
Cascading the speech enhancement and speech-to-text conversion architectures yields higher accuracy for enhanced speech recognition. The evaluation results confirm the suitability of the proposed method for real-time automatic speech recognition in medical applications, where the complexity of the terms involved is high.
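The enhancement front end of such a cascade builds on spectral subtraction. The sketch below shows the basic (linear, oversubtracting) variant rather than the paper's non-linear, HMM-optimized version: the noise spectrum is estimated from the first few frames (which here is a simplification, since they also contain signal; real systems use a detected noise-only segment), oversubtracted, and clipped to a spectral floor. The signal, frame size and parameter values are illustrative assumptions.

```python
import numpy as np

def spectral_subtraction(noisy, noise_frames=5, frame=256, over=2.0, floor=0.02):
    """Basic magnitude spectral subtraction: estimate a noise spectrum,
    oversubtract it from each frame, and keep a spectral floor."""
    n_frames = len(noisy) // frame
    frames = noisy[:n_frames * frame].reshape(n_frames, frame)
    spectra = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spectra), np.angle(spectra)
    noise_mag = mag[:noise_frames].mean(axis=0)          # noise estimate
    clean_mag = np.maximum(mag - over * noise_mag, floor * mag)
    clean = np.fft.irfft(clean_mag * np.exp(1j * phase), n=frame, axis=1)
    return clean.reshape(-1)

# Toy input: a 440 Hz tone buried in white noise (synthetic)
rng = np.random.default_rng(0)
t = np.arange(8000) / 8000
noisy = np.sin(2 * np.pi * 440 * t) + 0.5 * rng.normal(size=8000)
enhanced = spectral_subtraction(noisy)
```

In the cascade, `enhanced` would then be passed to the speech-to-text stage, so recognition operates on a cleaner spectrum than the raw input.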