Abstract-In this paper, we investigate adaptive nonlinear regression and introduce tree-based piecewise linear regression algorithms that are highly efficient and provide significantly improved performance with guaranteed upper bounds in an individual sequence manner. We use the notion of a tree to partition the space of regressors in a nested structure. The introduced algorithms adapt not only their regression functions but also the complete tree structure while achieving the performance of the "best" linear mixture of a doubly exponential number of partitions, with a computational complexity only polynomial in the number of nodes of the tree. While constructing these algorithms, we also avoid any artificial "weighting" of models (with highly data-dependent parameters) and instead directly minimize the final regression error, which is the ultimate performance goal. The introduced methods are generic: they can readily incorporate different tree construction methods, such as random trees, and can use different regressor or partitioning functions, as demonstrated in the paper.

Index Terms-Nonlinear regression, nonlinear adaptive filtering, binary tree, universal, adaptive.
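The core idea of partitioning the regressor space with a tree and fitting a linear model per region can be illustrated with a minimal sketch. This is only an illustration of piecewise linear prediction over a fixed binary partition, not the paper's algorithm (which also adapts the partition and mixes over all subtrees); the node layout and names are assumptions for the example.

```python
import numpy as np

def tree_predict(x, node):
    """Piecewise linear prediction over a binary partition of the
    regressor space: internal nodes split on one coordinate against a
    threshold, and each leaf holds its own linear regressor."""
    if "w" in node:                      # leaf: apply its linear model
        return float(node["w"] @ x)
    child = "left" if x[node["feat"]] <= node["thr"] else "right"
    return tree_predict(x, node[child])

# A depth-1 tree realizing f(x) = |x| with two linear pieces.
tree = {
    "feat": 0, "thr": 0.0,
    "left":  {"w": np.array([-1.0])},
    "right": {"w": np.array([1.0])},
}
```

Here `tree_predict(np.array([-2.0]), tree)` routes the input to the left leaf and returns 2.0, recovering the piecewise linear target exactly on both regions.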
Abstract-We introduce a novel family of adaptive filtering algorithms based on a relative logarithmic cost. The new family intrinsically combines higher- and lower-order measures of the error into a single continuous update based on the error magnitude. We introduce important members of this family, the least mean logarithmic square (LMLS) and least logarithmic absolute difference (LLAD) algorithms, which improve the convergence performance of the conventional algorithms. However, our approach and analysis are generic and cover other well-known cost functions as described in the paper. The LMLS algorithm achieves convergence performance comparable to that of the least mean fourth (LMF) algorithm.
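A rough sketch of the flavor of these updates follows. The forms below are what one obtains from a relative logarithmic cost with squared-error and absolute-error kernels; treat the exact normalization and step sizes as assumptions for illustration, with the precise conditions left to the paper.

```python
import numpy as np

def lmls_step(w, x, d, mu):
    """Least mean logarithmic square step: the factor e^2/(1+e^2) makes
    the update behave like LMF (cubic in e) for small errors and like
    LMS (linear in e) for large errors, which keeps it stable."""
    e = d - w @ x
    return w + mu * e * (e * e / (1.0 + e * e)) * x, e

def llad_step(w, x, d, mu):
    """Least logarithmic absolute difference step: behaves like LMS for
    small errors and like the sign algorithm for large errors, which
    suppresses impulsive noise."""
    e = d - w @ x
    return w + mu * np.sign(e) * (abs(e) / (1.0 + abs(e))) * x, e
```

A typical usage pattern is system identification: feed regressor/desired pairs `(x, d)` one at a time and let `w` track the unknown filter.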
Abstract-We study the problem of determining the optimum power allocation policy for an average power constrained jammer operating over an arbitrary additive noise channel, where the aim is to minimize the detection probability of an instantaneously and fully adaptive receiver employing the Neyman-Pearson (NP) criterion. We show that the optimum jamming performance can be achieved via power randomization between at most two different power levels. We also provide sufficient conditions for the improvability and nonimprovability of the jamming performance via power randomization in comparison to a fixed power jamming scheme. Numerical examples are presented to illustrate the theoretical results.
Abstract-The optimum power randomization problem is studied to minimize outage probability in flat block-fading Gaussian channels under an average transmit power constraint and in the presence of channel distribution information at the transmitter. When the probability density function of the channel power gain is continuously differentiable with a finite second moment, it is shown that the outage probability curve is a nonincreasing function of the normalized transmit power with at least one inflection point and the total number of inflection points is odd. Based on this result, it is proved that the optimum power transmission strategy involves randomization between at most two power levels. In the case of a single inflection point, the optimum strategy simplifies to on-off signaling for weak transmitters. Through analytical and numerical discussions, it is shown that the proposed framework can be adapted to a wide variety of scenarios including log-normal shadowing, diversity combining over Rayleigh fading channels, Nakagami-m fading, spectrum sharing, and jamming applications. We also show that power randomization does not necessarily improve the outage performance when the finite second moment assumption is violated by the power distribution of the fading.
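A toy numerical illustration of why two-level randomization can beat fixed-power transmission under an average power constraint follows. The Rayleigh-style outage model `1 - exp(-thr/p)` and all numbers here are illustrative assumptions, not the paper's setting; the grid search simply compares every feasible two-level mixture against the fixed-power baseline.

```python
import numpy as np

def outage(p, thr=1.0):
    """Outage probability at transmit power p for an illustrative
    Rayleigh-style model: nonincreasing in p with one inflection."""
    return 1.0 if p <= 0.0 else 1.0 - np.exp(-thr / p)

def best_two_level(p_avg, grid):
    """Grid-search randomization between two power levels (p1, p2),
    with time-sharing factor lam chosen so lam*p1 + (1-lam)*p2 = p_avg
    (the average power constraint holds with equality)."""
    best = outage(p_avg)                  # fixed-power baseline
    for p1 in grid:
        for p2 in grid:
            if p1 <= p_avg <= p2 and p1 < p2:
                lam = (p2 - p_avg) / (p2 - p1)
                best = min(best, lam * outage(p1) + (1 - lam) * outage(p2))
    return best

grid = np.linspace(0.0, 2.0, 201)         # includes p1 = 0, i.e. on-off
p_avg = 0.2
```

For this weak transmitter (`p_avg = 0.2`), fixed power gives outage `1 - exp(-5) ≈ 0.993`, while the best two-level scheme found is on-off signaling (power 0 or 1.0 with duty cycle 0.2) at outage `≈ 0.926`, consistent with the on-off conclusion for weak transmitters.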
We study the convergence rate of the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large number of smooth component functions (where the sum is strongly convex) and a non-smooth convex function. At each iteration, the PIAG method moves along an aggregated gradient formed by incrementally updating gradients of component functions at least once in the last K iterations and takes a proximal step with respect to the non-smooth function. We show that the PIAG algorithm attains an iteration complexity that grows linearly in the condition number of the problem and the delay parameter K. This improves upon the best previously known global linear convergence rate of the PIAG algorithm in the literature, which has a quadratic dependence on K.
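The PIAG iteration described above can be sketched as follows. This is a minimal version assuming cyclic component selection (so the delay K equals the number of components) and an l1 non-smooth term, whose proximal operator is soft-thresholding; the function names and problem instance are illustrative, not from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def piag(grads, x0, eta, lam, iters):
    """PIAG sketch: keep a table of (possibly stale) component
    gradients, refresh one entry per iteration, move along the
    aggregated gradient, then take a proximal step for lam*||x||_1."""
    n = len(grads)
    x = x0.copy()
    table = [g(x) for g in grads]        # stored gradients, may be stale
    agg = np.sum(table, axis=0)
    for k in range(iters):
        i = k % n                        # cyclic selection => delay K = n
        new = grads[i](x)
        agg += new - table[i]            # incremental aggregate update
        table[i] = new
        x = soft_threshold(x - eta * agg, eta * lam)
    return x
```

On a strongly convex instance, e.g. components f_i(x) = 0.5*||x - a_i||^2, the iterates converge to the minimizer of the composite objective, which for this instance is `soft_threshold(sum(a_i), lam) / n` by the first-order optimality condition.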