Purpose
The purpose of this study is to test Internet of Things (IoT) devices with respect to reliability and quality.

Design/methodology/approach
The authors present an analysis of design metrics across the perception, communication, and computation layers for a constrained environment. Based on their literature survey, the authors also present a study showing that multipath routing is more efficient than single-path routing, and that a retransmission mechanism is not preferable in an IoT environment.

Findings
This paper discusses the reliability of the various layers of an IoT system, subject to the methodologies used in those layers. The authors ran performance tests on an Arduino Nano and a Raspberry Pi using the AES-128 algorithm. It was empirically determined that the time required to process a message grows nonlinearly with message size and exceeds benchmark estimates. From these results, the authors can determine the optimal size of message that can be processed by an IoT system employing controllers running 8-bit or 64-bit architectures.

Originality/value
The authors have tested the performance of standard security algorithms on different computational architectures and discuss the implications of the results. Empirical results demonstrate that encryption and decryption times increase nonlinearly rather than linearly as message size increases.
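The timing methodology described in the Findings section can be sketched as follows. This is a minimal illustration, not the authors' benchmark code: Python's standard library has no AES implementation, so a stand-in byte transform (block-wise SHA-256 hashing) models per-message processing cost; on the actual devices this slot would be filled by an AES-128 implementation, and the message sizes chosen here are illustrative.

```python
import hashlib
import time

def process_message(payload: bytes) -> bytes:
    """Stand-in for AES-128 processing of one message (assumption:
    the real study would call an AES-128 encrypt here)."""
    digest = b""
    # Consume the payload in 16-byte chunks to mimic block-cipher chunking.
    for i in range(0, len(payload), 16):
        digest = hashlib.sha256(digest + payload[i:i + 16]).digest()
    return digest

def time_processing(sizes):
    """Return (message_size_bytes, elapsed_seconds) pairs, one per size."""
    results = []
    for size in sizes:
        payload = bytes(size)  # zero-filled test message of the given size
        start = time.perf_counter()
        process_message(payload)
        results.append((size, time.perf_counter() - start))
    return results

# Sweep increasing message sizes, as in the study's size-vs-time experiment.
timings = time_processing([16, 64, 256, 1024, 4096])
```

Plotting `timings` against a linear baseline is how one would check the paper's claim that processing time grows faster than linearly on constrained hardware.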
Behaviors, actions, pose, facial expressions, and speech are considered channels that convey human emotions, and extensive research has explored the relationships between these channels and emotions. The proposed method is a neural network-based solution combined with image processing and speech processing to classify the universal emotions: happiness, anger, sadness, and neutral. Speech processing includes extraction of spectral and temporal features such as MFCCs and energy; the resulting set of values is given as input to the neural network. In image processing, Gabor filter texture features are used to extract a set of selected feature points. Mutual information is calculated and given as an input to the neural network for classification. The experimental results demonstrate the efficacy of audio-visual cues, particularly when using a few prominent features: the overall accuracy of the combined approach is above 85%.
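The mutual-information step in the pipeline above can be sketched as a plain empirical estimate. This is a generic illustration, not the paper's code: the two input sequences stand for discretized (quantized) audio and image feature streams, and the quantization itself is assumed to have happened upstream.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X;Y) in bits from two equal-length discrete sequences:
    I(X;Y) = sum over (x,y) of p(x,y) * log2(p(x,y) / (p(x) * p(y)))."""
    n = len(xs)
    joint = Counter(zip(xs, ys))  # empirical joint counts
    px = Counter(xs)              # marginal counts for X
    py = Counter(ys)              # marginal counts for Y
    mi = 0.0
    for (x, y), c in joint.items():
        pxy = c / n
        # p(x,y) / (p(x) * p(y)) written with counts: c*n / (px[x]*py[y])
        mi += pxy * math.log2(pxy * n * n / (px[x] * py[y]))
    return mi
```

As a sanity check, two identical binary streams share one full bit of information, while two independent uniform streams share none; such scores are what would be fed to the classifier alongside the raw features.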
Visually impaired people face many challenges when choosing clothes with complex patterns and colors. Rotation, scaling, and variation in lighting make cloth recognition a challenging task. In this work, an automatic cloth pattern recognition technique is developed using image processing, machine learning, and deep learning concepts to classify patterns into four classes: plaid, striped, irregular, and patternless. MATLAB is used as the simulation tool of choice. Color classification is done with the help of the Hue Saturation Intensity (HSI) color model. To recognize clothing patterns, global and local features are extracted, including Radon signatures and the grey level co-occurrence matrix. Pattern recognition is performed with machine learning algorithms such as KNN and SVM, and deep learning networks such as AlexNet, GoogleNet, VGG-16, and VGG-19. To evaluate the effectiveness of the algorithms, the CCNY Clothing Pattern dataset was used. The maximum accuracy of 97.9% was obtained using the VGG-19 deep neural network.
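One of the texture descriptors named above, the grey level co-occurrence matrix (GLCM), can be sketched in a few lines. This is a hedged illustration rather than the paper's MATLAB implementation: the tiny integer image, the three grey levels, and the single horizontal offset (dx=1, dy=0) are all illustrative choices, and only two classic Haralick-style statistics (contrast and energy) are shown.

```python
def glcm(image, levels):
    """Normalized co-occurrence matrix for horizontally adjacent pixels.
    image: 2D list of grey levels in range(levels)."""
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for row in image:
        # Each (left, right) neighbor pair increments one cell.
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
            total += 1
    return [[c / total for c in row] for row in counts]

def contrast(p):
    """Sum of p(i,j) * (i - j)^2: large when grey levels change abruptly."""
    n = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

def energy(p):
    """Sum of squared probabilities: large for uniform, repetitive texture."""
    return sum(v * v for row in p for v in row)

# A 3x3 toy image with grey levels 0..2.
g = glcm([[0, 0, 1], [1, 2, 2], [2, 2, 2]], levels=3)
```

Descriptors like `contrast(g)` and `energy(g)`, computed over several offsets and directions, are the kind of global texture features a KNN or SVM classifier would consume.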