Fifth-generation (5G)-enabled vehicular fog computing has been at the forefront of innovation because it supports smart transport services such as traffic-data sharing and cooperative processing in the urban fabric. Nevertheless, concerns over message protection and safety remain the most important factors limiting progress. To cope with these challenges, several scholars have proposed certificateless authentication schemes with pseudonyms and traceability. These schemes avoid the complicated certificate management of public key infrastructure-based approaches and the key escrow of identity-based approaches. Nevertheless, problems such as high communication costs, security holes, and computational complexity persist. Therefore, this paper proposes an efficient certificateless authentication scheme, ECA-VFog, for fog computing with 5G-assisted vehicular systems. The proposed ECA-VFog scheme applies efficient operations based on elliptic curve cryptography, supported by a fog server through a 5G base station. This work conducts a security analysis of the design to assess the viability and value of the proposed ECA-VFog scheme. In the performance evaluation, the computation costs of the signing and verification processes are 2.3539 ms and 1.5752 ms, respectively, while the communication cost and energy consumption overhead of ECA-VFog are 124 bytes and 25.610432 mJ, respectively. Moreover, compared with other existing schemes, the performance estimation reveals that ECA-VFog is more cost-effective with regard to computation cost, communication cost, and energy consumption.
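The abstract does not give the scheme's exact construction, but the flavor of elliptic-curve signing and verification it relies on can be illustrated with a minimal Schnorr-style sketch in pure Python. The curve (secp256k1 here), the hash, and all function names are illustrative assumptions, not the actual ECA-VFog protocol:

```python
import hashlib
import secrets

# secp256k1 domain parameters (illustrative choice; the paper's curve is unspecified)
P  = 2**256 - 2**32 - 977
N  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def point_add(a, b):
    """Add two points on the curve; None represents the point at infinity."""
    if a is None: return b
    if b is None: return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if a == b:  # point doubling
        m = (3 * x1 * x1) * pow(2 * y1, -1, P) % P
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (m * m - x1 - x2) % P
    return x3, (m * (x1 - x3) - y1) % P

def scalar_mult(k, pt):
    """Double-and-add scalar multiplication k * pt."""
    result = None
    while k:
        if k & 1:
            result = point_add(result, pt)
        pt = point_add(pt, pt)
        k >>= 1
    return result

def challenge(R, msg):
    """Hash the commitment and message to a scalar mod N."""
    digest = hashlib.sha256(str(R).encode() + b"|" + msg).digest()
    return int.from_bytes(digest, "big") % N

def sign(priv, msg):
    k = secrets.randbelow(N - 1) + 1          # per-message nonce
    R = scalar_mult(k, (Gx, Gy))              # commitment
    e = challenge(R, msg)
    return R, (k + e * priv) % N              # Schnorr-style: s = k + e*d mod N

def verify(pub, msg, sig):
    R, s = sig
    e = challenge(R, msg)
    # s*G must equal R + e*Pub, since s*G = k*G + e*d*G
    return scalar_mult(s, (Gx, Gy)) == point_add(R, scalar_mult(e, pub))
```

A vehicle-side key pair and a signed traffic beacon would then look like `pub = scalar_mult(priv, (Gx, Gy))` followed by `sign(priv, b"beacon")`; the fog server checks it with `verify`. The signing cost is dominated by one scalar multiplication and verification by two, which is why schemes in this family report verification as slower or comparable to signing.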
In videos, human actions are three-dimensional (3D) signals that encode spatiotemporal knowledge of human behavior. Three-dimensional convolutional neural networks (3D CNNs) are a promising means of investigating this knowledge, yet they have not matched the performance of their well-established two-dimensional (2D) counterparts on still images: the broad 3D convolutional memory footprint and spatiotemporal fusion make training difficult and prevent 3D CNNs from achieving remarkable results. In this paper, we implement a hybrid deep learning architecture that combines space-time interest point (STIP) features with 3D CNN features to effectively enhance performance on 3D videos. The implementation produces a more detailed and deeper mapping for training in each cycle of space-time fusion, and the trained model further improves the results after handling complicated model evaluations. A video classification model is used in this implementation, and an Intelligent 3D Network Protocol for Multimedia Data Classification using Deep Learning is introduced to further capture the space-time associations in human activities. In the experiments, the well-known UCF101 dataset is used to evaluate the performance of the proposed hybrid technique, which substantially outperforms the initial 3D CNNs. Compared with state-of-the-art frameworks for action recognition on UCF101, the proposed technique achieves an accuracy of 95%.
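The core operation behind the spatiotemporal modeling described above is the 3D convolution, which slides a kernel over time as well as height and width. A minimal NumPy sketch (not the paper's architecture, and the function name and shapes are assumptions for illustration) makes the idea concrete:

```python
import numpy as np

def conv3d(video, kernel):
    """Valid (no-padding) 3D convolution of a (T, H, W) video volume
    with a (t, h, w) kernel, sliding over time and both spatial axes."""
    T, H, W = video.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):          # temporal position
        for j in range(out.shape[1]):      # vertical position
            for k in range(out.shape[2]):  # horizontal position
                out[i, j, k] = np.sum(video[i:i+t, j:j+h, k:k+w] * kernel)
    return out

# An 8-frame, 16x16 clip convolved with a 3x3x3 kernel yields a 6x14x14 volume.
clip = np.ones((8, 16, 16))
response = conv3d(clip, np.ones((3, 3, 3)))
```

Unlike a 2D convolution applied frame by frame, each output value here mixes information from three consecutive frames, which is exactly what lets a 3D CNN capture motion; the cubic kernel is also why its memory and training cost grow so quickly, motivating the hybrid STIP plus 3D CNN design.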