We propose an efficient algorithm for removing the shadows of moving vehicles caused by non-uniform light reflections in the daytime. The paper presents a complete framework of feature combination and analysis for locating and labeling moving shadows, so that foreground objects can be extracted more easily from each frame of videos acquired in real traffic situations. We use a Gaussian mixture model (GMM) for background removal and moving-shadow detection, and define two indices to characterize non-shadowed regions: one describes line features, and the other is based on gray-level information, from which we build a newly defined set of darkening ratios (modified darkening factors) based on Gaussian models. To demonstrate the effectiveness of the moving-shadow algorithm, we apply it to a practical traffic-flow detection task in an intelligent transportation system (ITS), namely vehicle counting. The algorithm runs at 13.84 ms/frame and improves the counting accuracy by 4% to 10% on our three test videos.
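As a rough illustration of the background-removal and shadow-labeling steps, the sketch below (Python/OpenCV) combines a GMM background subtractor with a simple gray-level darkening-ratio test; the thresholds, the split_shadow helper, and the video path are assumptions for illustration, not the paper's modified darkening factors or line-feature index.

```python
# Minimal sketch: GMM background subtraction plus an assumed darkening-ratio
# band for labeling shadow candidates among foreground pixels.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def split_shadow(frame_gray, background_gray, fg_mask, low=0.4, high=0.9):
    """Foreground pixels whose gray level is a fraction of the background
    level (within an assumed ratio band) are treated as shadow candidates."""
    ratio = frame_gray.astype(np.float32) / (background_gray.astype(np.float32) + 1e-6)
    shadow = (fg_mask > 0) & (ratio > low) & (ratio < high)
    obj = (fg_mask > 0) & ~shadow
    return obj.astype(np.uint8) * 255, shadow.astype(np.uint8) * 255

cap = cv2.VideoCapture("traffic.mp4")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg = subtractor.apply(frame)
    bg = cv2.cvtColor(subtractor.getBackgroundImage(), cv2.COLOR_BGR2GRAY)
    objects, shadows = split_shadow(gray, bg, fg)
```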
This paper proposes a new image-descreening technique based on texture classification using a cellular neural network (CNN) whose template is trained by a genetic algorithm (GA), called GA-CNN. Instead of applying fixed filters for image descreening, the method provides a more flexible mechanism for classifying screening patterns. The CNN yields accurate texture classification at high speed owing to its hardware implementability and the flexible choice of templates, while the GA searches for the most appropriate CNN template adaptively and methodically. The evolved template parameters not only provide a faster classification mechanism but also improve the texture classification of screening patterns. After the class of the screening pattern in a query image is determined by the trained GA-CNN texture-classification system, the recommended filters are applied to solve the descreening problem. Classifying the screening pattern simplifies the choice of filters and makes it unnecessary to design a new filter structure. Overall, the proposed methodology achieves better descreening results with reduced time complexity.
Index Terms: Cellular neural network (CNN), genetic algorithm (GA), image descreening, texture classification.
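For reference, the following sketch implements the standard CNN state equation dx/dt = -x + A*y + B*u + z with the piecewise-linear output, which is the dynamics over whose template parameters (A, B, z) a GA would search; the example template and the random input patch are assumptions, not the evolved GA-CNN template.

```python
# Minimal sketch of cellular-neural-network (CNN) dynamics with a 3x3
# feedback template A, control template B, and bias z. A GA would evolve
# (A, B, z) to maximize texture-classification fitness (loop not shown).
import numpy as np
from scipy.ndimage import convolve

def cnn_output(x):
    """Standard CNN piecewise-linear output y = 0.5(|x+1| - |x-1|)."""
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def run_cnn(u, A, B, z, steps=50, dt=0.1):
    """Euler-integrate dx/dt = -x + A*y + B*u + z on an input image u in [-1, 1]."""
    x = u.copy()
    Bu = convolve(u, B, mode="nearest")
    for _ in range(steps):
        y = cnn_output(x)
        x += dt * (-x + convolve(y, A, mode="nearest") + Bu + z)
    return cnn_output(x)

# Example (assumed, not the evolved GA-CNN template): an edge-like template.
A = np.array([[0, 0, 0], [0, 2.0, 0], [0, 0, 0]])
B = np.array([[-1, -1, -1], [-1, 8.0, -1], [-1, -1, -1]])
u = np.random.uniform(-1, 1, (64, 64))   # stand-in halftone patch
y = run_cnn(u, A, B, z=-0.5)
```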
We present in this paper a modified independent component analysis (mICA) based on conditional entropy to discriminate unsorted independent components. We use conditional entropy to select a subset of ICA features with superior classification capability and apply a support vector machine (SVM) to distinguish human from nonhuman patterns. Moreover, we model background images with a Gaussian mixture model (GMM) to handle scenes with complicated backgrounds. Color-based shadow elimination and elliptical head models are combined to improve the extraction and recognition of moving objects in our system. The proposed tracking mechanism monitors the movement of humans, animals, or vehicles within a surveillance area and keeps tracking moving pedestrians using color information in the HSV domain; when the color information of detected objects is insufficient, a Kalman filter predicts the locations of the moving objects. Finally, our experimental results show that the proposed approach performs well in real-time applications in both indoor and outdoor environments.
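A minimal sketch of the component-selection idea is given below, assuming scikit-learn's FastICA and SVC. Since H(c|s) = H(c) - I(s;c), ranking components by mutual information with the class label orders them the same way as ranking by conditional entropy, so mutual_info_classif serves as a stand-in estimator; the data, dimensions, and kernel are placeholders, not the paper's exact mICA pipeline.

```python
# Minimal sketch: select ICA components by (a proxy for) conditional entropy,
# then train an SVM for human vs. nonhuman classification.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

def mica_svm(X, y, n_components=32, n_selected=12):
    ica = FastICA(n_components=n_components, random_state=0)
    S = ica.fit_transform(X)                        # unsorted independent components
    mi = mutual_info_classif(S, y, random_state=0)  # high MI = low conditional entropy
    keep = np.argsort(mi)[::-1][:n_selected]        # most discriminative components
    clf = SVC(kernel="rbf").fit(S[:, keep], y)      # human vs. nonhuman classifier
    return ica, keep, clf

# Usage with placeholder data (rows = vectorized image patches):
X = np.random.randn(200, 400)
y = np.random.randint(0, 2, 200)
ica, keep, clf = mica_svm(X, y)
```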
In this paper, we develop a vision-based obstacle detection system using our proposed fisheye-lens inverse perspective mapping (FLIPM) method. New mapping equations are derived to transform the images captured by the fisheye-lens camera into undistorted remapped images under practical circumstances. For obstacle detection, we use the vertical-edge features of objects in the remapped images to indicate the relative positions of obstacles. In the searching stage, features are extracted from the current remapped frame using either the profile image or the temporal IPM difference image. The profile image is obtained from the remapped image by sharpening, edge detection, morphological operations, and a modified thinning algorithm; the temporal IPM difference image is obtained by spatially shifting the remapped image of the previous frame. Moreover, a polar histogram and its post-processing procedures are used to indicate the position and length of feature vectors and to remove noise. The system can warn drivers of nearby vehicles within a limited distance, even when the detected obstacles have only quasi-vertical edges.
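The sketch below illustrates the general IPM-plus-polar-histogram idea, with a plain planar homography standing in for the paper's fisheye mapping equations; the calibration points, image size, Canny thresholds, and file name are assumptions for illustration only.

```python
# Minimal sketch: bird's-eye remapping via a planar homography (a stand-in
# for the FLIPM equations) followed by a polar histogram of edge pixels.
import cv2
import numpy as np

src_pts = np.float32([[200, 480], [440, 480], [400, 300], [240, 300]])  # assumed calibration
dst_pts = np.float32([[200, 480], [440, 480], [440, 0],   [200, 0]])
H = cv2.getPerspectiveTransform(src_pts, dst_pts)

def polar_histogram(edges, origin, n_bins=180):
    """Accumulate edge pixels by angle around the camera origin in the
    remapped image; peaks indicate quasi-vertical obstacle edges."""
    ys, xs = np.nonzero(edges)
    angles = np.arctan2(origin[1] - ys, xs - origin[0])  # 0..pi toward the scene
    hist, _ = np.histogram(angles, bins=n_bins, range=(0, np.pi))
    return hist

frame = cv2.imread("road.png")                 # hypothetical input frame
birdseye = cv2.warpPerspective(frame, H, (640, 480))
edges = cv2.Canny(cv2.cvtColor(birdseye, cv2.COLOR_BGR2GRAY), 80, 160)
hist = polar_histogram(edges, origin=(320, 479))
```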