One of the motivations for property testing of Boolean functions is the idea that testing can provide a fast preprocessing step before learning. However, in most machine learning applications it is not possible to request labels for fictitious examples constructed by the algorithm. Instead, the dominant query paradigm in applied machine learning, called active learning, is one in which the algorithm may query for labels, but only on points in a given polynomial-sized (unlabeled) sample drawn from some underlying distribution D. In this work, we bring this well-studied model in learning to the domain of testing. We develop both general results for this active testing model and efficient testing algorithms for a number of properties important for learning, demonstrating that testing can still yield substantial benefits in this restricted setting. For example, we show that testing unions of d intervals can be done with O(1) label requests in our setting, whereas learning is known to require Ω(d) labeled examples, and passive testing [41], where the algorithm must pay for every example drawn from D, requires Ω(√d). In fact, our results for testing unions of intervals also yield improvements on prior work in both the classic query model (where any point in the domain can be queried) and the passive testing model. For the problem of testing linear separators in R^n over the Gaussian distribution, we show that both active and passive testing can be done with O(√n) queries, substantially fewer than the Ω(n) needed for learning, with near-matching lower bounds. We also present a general combination result in this model for building testable properties out of others, which we then use to provide testers for a number of assumptions used in semi-supervised learning.
In addition to the above results, we develop a general notion of the testing dimension of a given property with respect to a given distribution, which we show characterizes (up to constant factors) the intrinsic number of label requests needed to test that property. We develop such notions for both the active and passive testing models, and then use these dimensions to prove a number of lower bounds, including for linear separators and the class of dictator functions. Our results show that testing can be a powerful tool in realistic models for learning, and further that active testing exhibits an interesting and rich structure. Our work also brings together tools from a range of areas, including U-statistics, noise sensitivity, self-correction, and spectral analysis of random matrices, and develops new tools that may be of independent interest.
We explore a transfer learning setting in which a finite sequence of target concepts is sampled independently, according to an unknown distribution, from a known family. We study the total number of labeled examples required to learn all targets to an arbitrary specified expected accuracy, focusing on the asymptotics in the number of tasks and in the desired accuracy. Our primary interest is a formal understanding of the fundamental benefits of transfer learning compared to learning each target independently of the others. Our approach to the transfer problem is general, in the sense that it can be used with a variety of learning protocols. As a particularly interesting application, we study in detail the benefits of transfer for self-verifying active learning; in this setting, we find that the number of labeled examples required for learning with transfer is often significantly smaller than that required for learning each target independently.
In this paper, a new particle swarm optimization particle filter (NPSO-PF) algorithm, a particle swarm optimization particle filter augmented with a mutation operator, is proposed for real-time filtering and noise reduction of nonlinear vibration signals. By introducing a mutation operator, the algorithm overcomes the tendency of the particle swarm optimization (PSO) algorithm to fall into local optima and suffer low calculation accuracy. At the same time, the mutation operation improves the distribution and diversity of particles during sampling, addressing the particle impoverishment and low particle-utilization problems of the standard particle filter (PF) algorithm. A mutation control function confines the optimization of the particle set to the early and late stages and improves its convergence speed, which greatly reduces the running time of the whole algorithm. Simulation experiments show that, compared with the PF and PSO-PF algorithms, the proposed NPSO-PF algorithm achieves lower root mean square error, shorter running time, higher signal-to-noise ratio, and more stable filtering performance. This demonstrates that the algorithm is suitable for real-time filtering and noise reduction of nonlinear signals.
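The abstract describes the algorithm only at a high level. As a minimal sketch of the general idea, not the paper's exact update rules, one NPSO-PF-style step might combine a PSO-like drift of particles toward the high-likelihood region with a diversity-preserving mutation before reweighting; all parameter names below (c1, c2, p_mut, sigma_mut) are hypothetical, and resampling is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def npso_pf_step(particles, weights, obs, likelihood,
                 c1=0.5, c2=0.5, p_mut=0.1, sigma_mut=0.5):
    """One sketched update of a PSO-style particle filter with mutation.

    particles: (N,) state samples; weights: (N,) normalized weights.
    likelihood(x, obs) -> per-particle observation likelihood.
    """
    # Treat the current maximum-weight particle as the swarm's global best.
    gbest = particles[np.argmax(weights)]
    r1 = rng.random(particles.shape)
    r2 = rng.random(particles.shape)
    # PSO-style drift: pull particles toward the best particle and the swarm mean.
    particles = (particles
                 + c1 * r1 * (gbest - particles)
                 + c2 * r2 * (particles.mean() - particles))
    # Mutation operator: perturb a random subset to keep the particle set diverse
    # and avoid premature convergence to a local optimum.
    mask = rng.random(particles.shape) < p_mut
    particles = particles + mask * rng.normal(0.0, sigma_mut, particles.shape)
    # Reweight with the observation likelihood and renormalize.
    weights = likelihood(particles, obs)
    weights = weights / weights.sum()
    return particles, weights
```

A usage example: starting from a broad prior and iterating a few steps with a Gaussian likelihood centered at the observation, the weighted state estimate concentrates near the observed value, while the mutation term keeps some spread in the particle set.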