Traditionally, nonlinear data processing has been approached with polynomial filters, which are straightforward extensions of many linear methods, or with neural network techniques. In contrast to linear approaches, which often yield algorithms that are simple to apply, nonlinear learning machines such as neural networks demand more computation and are more prone to difficult nonlinear optimization problems. Kernel methods, a more recently developed family of machine learning techniques, have a simpler architecture and offer a straightforward way to transform nonlinear optimization problems into convex ones. Typical analytical tasks addressed by kernel-based learning include classification, regression, and clustering.

For image processing applications, semisupervised deep learning, which is driven by a small amount of labeled data and a large amount of unlabeled data, has shown excellent performance in recent years. Current semisupervised learning methods, however, assume that the labeled and unlabeled data follow a similar distribution, and their performance depends heavily on that assumption holding. When the unlabeled data contain out-of-class samples, performance degrades. In real-world applications it is difficult to guarantee that unlabeled data include no samples from other categories, and this is especially true in synthetic aperture radar (SAR) image identification. Using threshold filtering, this work addresses the detrimental influence that out-of-class unlabeled data have on model performance when they are used for training in a semisupervised setting. During training, the model filters out unlabeled samples that do not belong to any known category, selecting two different data subsets to optimize its performance.

A series of experiments was carried out on the MSTAR data set, and our method outperformed a large number of state-of-the-art semisupervised classification algorithms, particularly when the unlabeled data contained a significant proportion of out-of-class samples. The performance of each kernel function is evaluated independently using two metrics, false alarm (FA) and target miss (TM), which quantify the proportion of incorrect decisions made by the techniques.
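To make the filtering step concrete, the following Python sketch illustrates one plausible reading of the approach: unlabeled samples are kept for pseudo-labeled training only if the model's top-class confidence exceeds a threshold, and the two error metrics are computed over the resulting keep/reject decisions. The threshold rule, the function names, and the interpretation of false alarm (out-of-class samples wrongly kept) and target miss (in-class samples wrongly discarded) are assumptions made for illustration, not details taken from the paper.

import numpy as np

def filter_unlabeled_by_threshold(probs, threshold=0.9):
    # probs: (N, C) array of per-class probabilities predicted by the
    # current model for N unlabeled samples.
    # Keep only samples whose top-class confidence exceeds the threshold;
    # low-confidence samples are treated as likely out-of-class and dropped.
    top_confidence = probs.max(axis=1)
    return top_confidence >= threshold

def false_alarm_and_target_miss(kept_mask, in_class_mask):
    # false alarm (FA): fraction of out-of-class samples wrongly kept.
    # target miss (TM): fraction of in-class samples wrongly discarded.
    out_of_class = ~in_class_mask
    fa = kept_mask[out_of_class].mean() if out_of_class.any() else 0.0
    tm = (~kept_mask[in_class_mask]).mean() if in_class_mask.any() else 0.0
    return fa, tm

# Toy usage with mock model outputs and mock ground-truth in-class flags.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.full(10, 0.1), size=100)  # mock peaked softmax outputs, 10 classes
in_class = rng.random(100) > 0.3                   # mock flags: True = in-class sample
kept = filter_unlabeled_by_threshold(probs, threshold=0.8)
fa, tm = false_alarm_and_target_miss(kept, in_class)
print(f"kept {kept.sum()} of 100 samples, FA = {fa:.2f}, TM = {tm:.2f}")

In a full training loop this selection would typically be re-applied each epoch, so that samples gain or lose pseudo-labels as the model's confidence changes; that scheduling detail is likewise an assumption here.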