The performance of Wi-Fi fingerprinting indoor localization systems (ILS) depends on channel state information (CSI), which is often degraded by multipath fading. Commonly referred to as the next positioning generation (NPG), the Wi-Fi IEEE 802.11az standard offers physical-layer features that enable positioning and enhanced ranging beyond conventional methods. It is therefore essential to create a dataset of channel impulse response (CIR) fingerprints from 802.11az signals in an indoor environment, label each fingerprint with its location, and estimate station (STA) locations from a portion of that fingerprint dataset. This work develops a model that trains a convolutional neural network (CNN) for positioning and localization by generating IEEE 802.11az data. The trained CNN is used to predict the positions of several stations from fingerprint data, and its performance is evaluated over multiple channel impulse responses (CIRs). Deep learning and fingerprinting algorithms are employed in the Wi-Fi positioning model to create a dataset by sampling the channel fingerprints at known positions in an environment. The model then predicts a user's location by matching a signal received at an unknown position against a reference database. The work also examines the influence of antenna array size and channel bandwidth on performance, showing that increasing the number of training epochs and the number of STAs improves network performance. The results are summarized and visualized with a confusion matrix for the classification task. For simplicity and short simulation time we use a limited dataset; higher performance can be achieved by training on a larger dataset.
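The reference-database matching step described above can be sketched as follows. This is a minimal illustration, not the paper's method: synthetic CIR magnitudes stand in for real 802.11az measurements, and a nearest-neighbor lookup stands in for the trained CNN.

```python
# Hypothetical sketch: fingerprint positioning against a reference database.
import numpy as np

rng = np.random.default_rng(0)

# Reference database: one CIR fingerprint sampled at each known grid position.
positions = np.array([[x, y] for x in range(4) for y in range(4)], dtype=float)
fingerprints = rng.normal(size=(len(positions), 32))  # 32-tap CIR magnitudes

def locate(measured_cir, database, coords):
    """Return the coordinates whose stored fingerprint is closest."""
    dists = np.linalg.norm(database - measured_cir, axis=1)
    return coords[np.argmin(dists)]

# A STA at grid position 5 observes its fingerprint plus measurement noise.
observed = fingerprints[5] + rng.normal(scale=0.05, size=32)
estimate = locate(observed, fingerprints, positions)
print(estimate)  # matches positions[5]
```

A CNN classifier replaces the distance computation in practice, but the workflow is the same: sample fingerprints at known positions, then match an unknown observation against the stored set.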
In the era of information technology, users exchange millions of images daily, so securing this content through digital image encryption is crucial. In image encryption techniques, secret keys transform digital images into noise-like images, and the same keys are required to restore the images to their original form. Most image encryption methods rely on two processes: confusion and diffusion. However, previous studies did not compare recent techniques in the image encryption field. This research evaluates three image encryption algorithms: a Fibonacci Q-matrix in a hyperchaotic system, Secure Internet of Things (SIT), and the Advanced Encryption Standard (AES). The Fibonacci Q-matrix technique uses randomly generated numbers from a six-dimensional hyperchaotic system and confuses the original image to diffuse the permuted image. The objectives are to analyze the image encryption process of the Fibonacci Q-matrix in a hyperchaotic system, SIT, and AES, and to compare their encryption robustness. The techniques were examined through histograms, entropy, Unified Average Changing Intensity (UACI), Number of Pixels Change Rate (NPCR), and correlation coefficients. Since the Chi-squared test values for the hyperchaotic system with the Fibonacci Q-matrix were below the critical value (293), this technique produces a uniform distribution and is more efficient. The obtained results confirm that image encryption using the Fibonacci Q-matrix in a hyperchaotic system outperformed both AES and SIT based on the UACI and NPCR values.
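The NPCR and UACI metrics used in the comparison have standard definitions that can be computed directly. The sketch below uses two random arrays as stand-ins for cipher images; real evaluations would use a cipher image and the cipher of a one-pixel-modified plain image.

```python
# Sketch of the NPCR and UACI metrics for 8-bit images (random stand-ins).
import numpy as np

def npcr(c1, c2):
    """Number of Pixels Change Rate: percentage of differing pixel positions."""
    return 100.0 * np.mean(c1 != c2)

def uaci(c1, c2):
    """Unified Average Changing Intensity, normalized by the 8-bit range."""
    return 100.0 * np.mean(np.abs(c1.astype(float) - c2.astype(float)) / 255.0)

rng = np.random.default_rng(1)
c1 = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
c2 = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print(npcr(c1, c2), uaci(c1, c2))
```

For two independent uniformly random 8-bit images, NPCR is expected near 99.6% and UACI near 33.5%; a strong cipher's outputs should approach these benchmarks.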
Audio command recognition is essential for executing user instructions, especially for people with disabilities. Previous studies did not examine and optimize classification performance for up to twelve audio command categories. This work develops a microphone-based audio command classifier using a convolutional neural network (CNN) with performance optimization to categorize twelve classes, including background noise and unknown words. The methodology includes preparing the input audio commands for training, extracting features, and visualizing auditory spectrograms. A CNN-based classifier is then developed and the trained architecture is evaluated. To minimize latency, the processing phase is optimized by compiling MATLAB code into C code where processing becomes the algorithmic bottleneck. In addition, decreasing the frame size and increasing the sample rate further reduce latency and improve the performance of processing the audio input. A modest amount of dropout is applied at the input of the final fully connected layer to reduce the likelihood that the network memorizes particular elements of the training data. We also explored deepening the network with additional convolutional, ReLU, and batch normalization layers to improve accuracy. The training progress showed the network's accuracy rising quickly to about 98.1%, which reflects the network's capacity to over-fit the training data. This work is essential for speech and speaker recognition applications such as smart homes and smart wheelchairs, especially for people with disabilities.
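The spectrogram feature-extraction step can be sketched as below. This is an illustrative stand-in, not the paper's pipeline: a synthetic 440 Hz tone replaces a recorded command, and a plain FFT magnitude spectrogram replaces the auditory spectrogram; the frame and hop sizes are arbitrary choices showing how a smaller frame shortens per-frame latency.

```python
# Sketch: frame an audio signal and compute a magnitude spectrogram.
import numpy as np

fs = 16000                       # sample rate (Hz)
t = np.arange(fs) / fs           # one second of audio
signal = np.sin(2 * np.pi * 440 * t)   # synthetic "command"

frame, hop = 512, 256            # smaller frames reduce per-frame latency
frames = np.lib.stride_tricks.sliding_window_view(signal, frame)[::hop]
window = np.hanning(frame)
spectrogram = np.abs(np.fft.rfft(frames * window, axis=1))  # (n_frames, 257)

# The dominant frequency bin should sit near the 440 Hz tone.
peak_hz = spectrogram.mean(axis=0).argmax() * fs / frame
print(peak_hz)
```

Each row of `spectrogram` becomes one time slice of the image-like input a CNN classifier consumes.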