Delineation and detection of trees from satellite images is important for estimating tree cover in urban and rural areas, and a dedicated delineation framework is a valuable tool for urban area management. In the existing literature, most studies deal only with detecting a particular tree crown or tree species; there is no framework for collectively detecting the major tree species of an urban area from visually distinguishable satellite images, and distinguishing trees from their shadows remains a major problem. In this paper, we use texture-based feature extraction methods and propose new algorithms to extract simple texture features from panchromatic images. The proposed methodology is divided into three phases. In the first phase, pre-processing minimizes unwanted deformations and improves image quality by eliminating dark spots and shadows. In the second phase, image segmentation identifies object boundaries within the image. Finally, in the third phase, tree delineation is performed using image enhancement and segmentation algorithms. The ultimate purpose of the study is to develop a framework that can handle shadows and then effectively detect and extract trees from satellite images.
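The three-phase pipeline above can be sketched in simplified form. The abstract does not specify the actual algorithms, so the functions below are illustrative assumptions: a crude shadow-suppression step, a local-variance texture feature as a stand-in for the paper's "simple texture features", and a threshold-based segmentation.

```python
import numpy as np

def remove_shadows(img, shadow_thresh=0.2):
    """Phase 1 (illustrative): suppress dark shadow pixels by replacing
    them with the global mean intensity. The threshold is an assumption,
    not a value from the paper."""
    out = img.astype(float).copy()
    out[out < shadow_thresh] = out.mean()
    return out

def local_variance_texture(img, win=3):
    """Simple texture feature: intensity variance in a sliding window,
    computed on a panchromatic (single-band) image."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    feat = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            feat[i, j] = padded[i:i + win, j:j + win].var()
    return feat

def segment_trees(texture, thresh):
    """Phase 2/3 (illustrative): threshold the texture map to obtain a
    binary tree / non-tree mask."""
    return texture > thresh
```

Textured canopy regions yield high local variance, while smooth surfaces (roads, roofs) and the filled-in shadow areas stay near zero, which is what makes a simple threshold plausible here.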
The utilization of artificial intelligence and computer vision has been extensively explored in the context of human activity and behavior recognition. Numerous researchers have investigated and proposed techniques for human action recognition (HAR) to accurately identify actions from real-time videos. Among these techniques, convolutional neural networks (CNNs) have emerged as the most effective and widely used for activity recognition. This work primarily focuses on the significance of spatial information in activity/action classification. To identify human actions and behaviors from large video datasets, this paper proposes a two-stream spatial CNN approach. One stream is fed with the spatial information from unprocessed RGB frames. The second stream is driven by visual saliency maps generated with the graph-based visual saliency (GBVS) method. The outputs of the two spatial streams are combined using sum, max, average, and product feature fusion techniques. The proposed method is evaluated on well-known benchmark human action datasets, such as KTH, UCF101, HMDB51, NTU RGB-D, and G3D, to assess its performance. Promising recognition rates were observed on all datasets.
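The four fusion schemes named above (sum, max, average, product) are standard late-fusion operations on the two streams' feature vectors. A minimal sketch, assuming each stream has already produced a fixed-length feature vector (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def fuse_features(f_rgb, f_saliency, mode="sum"):
    """Late fusion of the RGB-stream and saliency-stream feature vectors.
    The four modes mirror the fusion schemes compared in the paper."""
    if mode == "sum":
        return f_rgb + f_saliency
    if mode == "max":
        return np.maximum(f_rgb, f_saliency)
    if mode == "average":
        return (f_rgb + f_saliency) / 2.0
    if mode == "product":
        return f_rgb * f_saliency  # element-wise product
    raise ValueError(f"unknown fusion mode: {mode}")
```

Sum and average preserve evidence present in either stream, max keeps the stronger response per dimension, and product acts as a soft AND, rewarding dimensions where both streams agree.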