With advances in machine vision systems (e.g., artificial eyes, unmanned aerial vehicles, and surveillance monitoring), scene semantic recognition (SSR) technology has attracted much attention owing to related applications such as autonomous driving, tourist navigation, intelligent traffic, and remote aerial sensing. Although tremendous progress has been made in visual interpretation, several challenges remain, such as dynamic backgrounds, occlusion, a lack of labeled data, and changes in illumination, direction, and size. We therefore propose a novel SSR framework that intelligently segments object locations, generates a novel Bag of Features, and recognizes scenes via Maximum Entropy. First, denoising and smoothing are applied to the scene data. Second, a modified Fuzzy C-Means is integrated with super-pixels and a Random Forest to segment the objects. Third, the segmented objects are used to extract a novel Bag of Features that concatenates blob, multi-orientation, Fourier-transform, and geometrical-point descriptors over the objects. An Artificial Neural Network then recognizes multiple objects from these feature patterns. Finally, scene labels are estimated via a Maximum Entropy model. In the experimental evaluation, the proposed system achieved mean accuracy rates of 90.07% on the MSRC dataset and 89.26% on Caltech 101 for object recognition, and 93.53% on the Pascal-VOC12 dataset for scene recognition. The proposed system should be applicable to various emerging technologies, such as augmented reality for representing real-world environments in military training, engineering design, and entertainment, artificial eyes for visually impaired people, and traffic monitoring to avoid congestion or road accidents.
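The following is a minimal sketch of the early stages of such an SSR pipeline: denoising, superpixel over-segmentation, fuzzy C-Means clustering of superpixels, and downstream classifier stubs. It assumes scikit-image and scikit-learn; all hyperparameters (number of superpixels, number of clusters, network sizes) are illustrative assumptions, not the values used in the paper, and multinomial logistic regression is used as a stand-in for the Maximum Entropy labeler.

```python
# Sketch of the SSR pipeline stages: smoothing, superpixel + fuzzy C-Means
# segmentation, and downstream classifier stubs. Parameters are illustrative.
import numpy as np
from skimage import filters, segmentation
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

def fuzzy_c_means(X, c=4, m=2.0, iters=50, eps=1e-5, seed=0):
    """Plain fuzzy C-Means on row vectors X of shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c)); U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]          # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))                   # membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            U = U_new; break
        U = U_new
    return U.argmax(axis=1), centers

def segment_scene(rgb):
    """Smooth the image, over-segment into superpixels, then cluster superpixels."""
    smoothed = filters.gaussian(rgb, sigma=1.0, channel_axis=-1)  # denoising / smoothing
    labels = segmentation.slic(smoothed, n_segments=200, compactness=10, start_label=0)
    sp_ids = np.unique(labels)
    feats = np.array([smoothed[labels == s].mean(axis=0) for s in sp_ids])
    cluster_of_sp, _ = fuzzy_c_means(feats, c=4)
    lut = np.zeros(labels.max() + 1, dtype=int)
    lut[sp_ids] = cluster_of_sp
    return lut[labels]                                            # per-pixel cluster map

# Usage on a small synthetic image (stand-in for a real scene frame).
demo = np.random.default_rng(0).random((64, 64, 3))
print("cluster map shape:", segment_scene(demo).shape)

# Downstream stages (illustrative stubs): a Random Forest could refine
# superpixel-to-object assignments, an MLP recognizes objects from pooled
# feature vectors, and multinomial logistic regression (equivalent to a
# Maximum Entropy classifier) assigns the final scene label.
sp_refiner = RandomForestClassifier(n_estimators=100)
object_clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)
scene_clf = LogisticRegression(max_iter=500)
```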
Rising traffic density, combined with global population growth, has led to increasingly congested roads, greater air pollution, and more accidents. Globally, the number of automobiles has grown dramatically over the last decade, and traffic monitoring in this environment is a significant challenge in many developing countries. This work introduces a novel vehicle detection and classification system for smart traffic monitoring that uses a convolutional neural network (CNN) to segment aerial imagery. Vehicles are then detected in the segmented images by incorporating a novel customized pyramid pooling module, and the detected vehicles are classified into various subcategories. Finally, the vehicles are tracked via Kalman filter (KF) and kernelized filter-based techniques to cope with and manage massive traffic flows with minimal human intervention. In the experimental evaluation, the proposed system achieved remarkable vehicle detection rates of 95.78% on the Vehicle Aerial Imagery from a Drone (VAID), 95.18% on the Vehicle Detection in Aerial Imagery (VEDAI), and 93.13% on the German Aerospace Center (DLR) DLR3K datasets, respectively. The proposed system has a variety of applications, including identifying vehicles in traffic, sensing traffic congestion on a road, estimating traffic density at intersections, detecting various types of vehicles, and providing a path for pedestrians.
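Below is a minimal sketch of the tracking stage only: a constant-velocity Kalman filter over vehicle centroids detected per frame. It assumes that an upstream CNN with pyramid pooling supplies the detections, and the noise parameters and synthetic detections are illustrative assumptions rather than the paper's values.

```python
# Constant-velocity Kalman filter for one vehicle centroid (x, y) per frame.
import numpy as np

class CentroidKalmanTracker:
    """Track a single vehicle centroid with a constant-velocity motion model."""
    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)    # state: x, y, vx, vy
        self.P = np.eye(4) * 10.0                              # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)        # motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)         # only position is observed
        self.Q = np.eye(4) * q                                  # process noise
        self.R = np.eye(2) * r                                  # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x        # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)                # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

# Usage: feed per-frame detections (synthetic here) into the tracker.
tracker = CentroidKalmanTracker(100.0, 50.0)
for frame, det in enumerate([(102, 51), (105, 53), (108, 55)]):
    tracker.predict()
    smoothed = tracker.update(det)
    print(f"frame {frame}: smoothed centroid {smoothed.round(1)}")
```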
Object recognition in depth images is a challenging and persistent task in machine vision, robotics, and sustainable automation. It is a core component of various multimedia technologies, including video surveillance, human–computer interaction, robotic navigation, drone targeting, tourist guidance, and medical diagnostics. Moreover, the symmetry present in real-world objects plays a significant role in the perception and recognition of objects by both humans and machines. With advances in depth sensor technology, numerous researchers have recently proposed RGB-D object recognition techniques. In this paper, we introduce a sustainable object recognition framework that remains consistent under environmental changes and can recognize and analyze RGB-D objects in complex indoor scenarios. First, after acquiring a depth image, the point cloud and depth maps are extracted to obtain the planes. Next, a plane fitting model and the proposed modified maximum likelihood estimation sampling consensus (MMLESAC) are applied for segmentation. Depth kernel descriptors (DKDES) are then computed over the segmented objects, separately for single-object and multiple-object scenarios. These DKDES are subsequently passed to isometric mapping (IsoMap) for feature-space reduction. Finally, the reduced feature vector is forwarded to a kernel sliding perceptron (KSP) for object recognition. Four experiments on three datasets, evaluated with a cross-validation scheme, validate the proposed model. The results on the RGB-D object, RGB-D scene, and NYUDv1 datasets show overall accuracies of 92.2%, 88.5%, and 90.5%, respectively, outperforming existing state-of-the-art methods and verifying the suitability of the proposed framework.
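The following is a minimal sketch of two of these stages: a RANSAC-style plane-consensus fit (a simplified stand-in for the paper's MMLESAC) on a depth point cloud, followed by IsoMap feature-space reduction via scikit-learn. Thresholds, iteration counts, and the synthetic data are illustrative assumptions, and the depth kernel descriptors and KSP classifier are not reproduced.

```python
# Dominant-plane fitting by random sample consensus, then IsoMap reduction.
import numpy as np
from sklearn.manifold import Isomap

def fit_plane_consensus(points, n_iters=200, dist_thresh=0.01, seed=0):
    """Fit a dominant plane n.x + d = 0 to an (N, 3) point cloud by consensus voting."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                                    # degenerate (collinear) sample
        normal /= norm
        d = -normal @ p0
        dists = np.abs(points @ normal + d)             # point-to-plane distances
        inliers = dists < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# Usage: recover a synthetic floor plane from cluttered points.
rng = np.random.default_rng(1)
floor = np.c_[rng.uniform(0, 1, (500, 2)), rng.normal(0, 0.002, 500)]   # z ~ 0
clutter = rng.uniform(0, 1, (100, 3))
model, inliers = fit_plane_consensus(np.vstack([floor, clutter]))
print("plane inliers:", inliers.sum())

# Feature-space reduction: placeholder descriptors stand in for the DKDES.
descriptors = rng.normal(size=(200, 64))
reduced = Isomap(n_components=10).fit_transform(descriptors)
print("reduced feature shape:", reduced.shape)
```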