To accomplish machine vision tasks, object detection algorithms strive to recognise every target item in an image and to derive its category and location. Numerous approaches to this problem have been proposed, most of them inspired by computer vision and deep learning methodologies. Existing techniques, however, consistently fail to recognise small, dense objects, and can even fail to detect objects that have undergone random geometric transformations. Object skeletons can aid object representation and detection. A skeleton is an object's inherent visual descriptor and captures comprehensive shape semantics. Skeletons complement the object outline with additional information, such as how the object's thickness varies across its components. However, extracting object skeletons from natural photographs is difficult, because the extractor must capture both local and non-local visual context in order to estimate the scale of each skeleton pixel. To address this issue, this paper proposes a Cusp Pixel Labelled Model with Precise Tuned Outline using Machine Learning (CPLM-PTOML) that accurately detects the cusp points of an object by extracting its skeleton, thereby recognising the exact object residing in the image. Scale-associated side outputs are attached to each stage of the model by monitoring the relationship between the receptive field sizes of the different layers and the skeleton scales they can capture. The model is trained via multi-task learning: one task is skeleton localization, which determines whether or not a pixel is a skeleton pixel, and the other is skeleton scale prediction, which predicts the scale of each skeleton pixel. Supervision is imposed at each stage by directing the scale-associated side outputs toward the ground-truth skeletons at the appropriate scales.
The proposed model is compared with traditional models in terms of data training samples, object feature extraction time, feature extraction accuracy, cusp pixel labelling time, cusp pixel identification accuracy, cusp point linking accuracy, images considered, and cusp point recognition levels, and the results show that the proposed model delivers better outcomes.
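The multi-task training objective described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a per-pixel binary cross-entropy term for skeleton localization and a mean-squared-error term for scale regression supervised only at ground-truth skeleton pixels; the function name and the equal weighting of the two terms are assumptions.

```python
import numpy as np

def multitask_skeleton_loss(loc_pred, loc_gt, scale_pred, scale_gt, eps=1e-7):
    """Hypothetical combined loss: binary cross-entropy for skeleton
    localization plus mean-squared error for scale prediction,
    the latter evaluated only at ground-truth skeleton pixels."""
    loc_pred = np.clip(loc_pred, eps, 1 - eps)
    # Per-pixel binary cross-entropy for the localization task.
    bce = -(loc_gt * np.log(loc_pred) + (1 - loc_gt) * np.log(1 - loc_pred))
    loc_loss = bce.mean()
    # Scale regression is supervised only where a skeleton pixel exists.
    mask = loc_gt > 0.5
    scale_loss = ((scale_pred[mask] - scale_gt[mask]) ** 2).mean() if mask.any() else 0.0
    return loc_loss + scale_loss
```

In a deep model, this loss would be applied independently to each scale-associated side output, with each stage supervised only by skeleton pixels whose scale matches that stage's receptive field.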
With the tremendous advancements in digital image processing technology over the last few years, many previously challenging problems can now be resolved. In light of this, this study proposes that digital image processing can be used to detect shadows in photographs. Unmanned aerial vehicles and satellite devices have become common image-generating devices, and a significant issue in the generated images is shadow. Shadows are inevitable in remote sensing photographs, particularly in metropolitan environments, because of occlusion by high-rise objects and the influence of the sun's altitude; this results in missing information in the shadowed zone. State-of-the-art shadow detection algorithms require manual alignment and predefined parameters, and most existing algorithms fail to deliver precise results across varying lighting and environmental conditions. To overcome these limitations, we propose a Multi Layered Linked approach with Tagged Feature Model for Shadow Angle Detection (MLTFM-SAD). The aim of the proposed model is to detect shadows in aerial photographs and the angle of those shadows. The proposed framework is a three-step approach. First, image segmentation is applied to the input images. Second, a hybrid of the Gaussian mixture model and Otsu's method is applied to the segmented shadow mask map and the corresponding pixel set is generated; the initial shadow mask map is then refined using object spectral attributes and spatial correlations between objects. Finally, the angle at which the shadow appears in the given image is recognised and analysed. The proposed method's performance is compared with that of current approaches, and the results reveal that the proposed model performs at superior levels.
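The Otsu step of the pipeline above can be sketched as follows. This is a generic illustration of Otsu's thresholding applied to shadow masking, not the paper's hybrid GMM-plus-Otsu method: it assumes a single-channel intensity image and treats pixels at or below the Otsu threshold as candidate shadow pixels (shadows being dark regions); the function names are assumptions.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the intensity threshold that maximizes
    the between-class variance of the histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    sum_total = float(np.dot(np.arange(256), hist))
    best_t, best_var = 0, 0.0
    w0, sum0 = 0, 0.0  # running weight and intensity sum of the dark class
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0                  # mean of the dark class
        mu1 = (sum_total - sum0) / w1    # mean of the bright class
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def initial_shadow_mask(gray):
    """Candidate shadow pixels: those at or below the Otsu threshold."""
    return gray <= otsu_threshold(gray)
```

In the full framework, such an initial mask would then be refined using object spectral attributes and spatial correlations before the shadow angle is estimated.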