We investigate how TV regularization naturally recognizes the scale of individual image features, and we show how the perception of scale depends on the amount of regularization applied to the image. We give an automatic method for finding the minimum value of the regularization parameter needed to remove all features below a user-chosen scale threshold. We explain the relation of Meyer's G-norm to the perception of scale, which provides a more intuitive understanding of this norm. We consider other applications of this ability to recognize scale, including the multiscale effects of TV regularization and the rate of loss of image features of various scales as the amount of regularization increases. Several numerical results are given.
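The scale-selection behavior described above can be illustrated numerically. The sketch below is a minimal illustration, not the paper's method: it uses scikit-image's denoise_tv_chambolle as a generic TV solver, and the synthetic two-disk image, the chosen weights, and the contrast measurement are all assumptions made for demonstration. Sweeping the regularization weight shows the small-scale feature losing contrast well before the large one.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

# Synthetic image: two bright disks of different radii on a dark background.
def make_disks(n=128, radii=(4, 20)):
    img = np.zeros((n, n))
    yy, xx = np.mgrid[:n, :n]
    centers = [(n // 4, n // 4), (3 * n // 4, 3 * n // 4)]
    for (cy, cx), r in zip(centers, radii):
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = 1.0
    return img, centers

img, centers = make_disks()

# Sweep the TV regularization weight and record the remaining contrast at the
# center of each disk: the small disk fades at smaller weights than the large
# one, so the amount of regularization acts as a feature-scale threshold.
for weight in [0.02, 0.05, 0.1, 0.2, 0.4]:
    u = denoise_tv_chambolle(img, weight=weight)
    small, large = (u[cy, cx] - u.min() for (cy, cx) in centers)
    print(f"weight={weight:.2f}  small-disk contrast={small:.3f}  "
          f"large-disk contrast={large:.3f}")
```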
In this paper, we study the behavior of solutions of the ODE associated with Nesterov acceleration. It has been well known since the pioneering work of Nesterov that the convergence rate O(1/t^2) is optimal for the class of convex functions. In this work, we show that better convergence rates can be obtained under additional geometrical conditions, such as the Łojasiewicz property. More precisely, we prove the optimal convergence rates that can be obtained depending on the geometry of the function F to minimize. These convergence rates are new, and they shed new light on the behavior of Nesterov acceleration schemes. In particular, we prove that the classical Nesterov scheme may yield convergence rates that are worse than those of classical gradient descent on sharp functions: for instance, the convergence rate for strongly convex functions is not geometric for the classical Nesterov scheme, while it is for the gradient descent algorithm. This shows that applying classical Nesterov acceleration to convex functions without taking the geometrical properties of the objective function into account may lead to sub-optimal algorithms.
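A small numerical sketch, under assumptions chosen purely for illustration (a two-dimensional quadratic with eigenvalues mu and L, step size 1/L, and a fixed iteration budget), can make the last claim concrete: gradient descent contracts geometrically at rate (1 - mu/L) per iteration, while the classical Nesterov momentum (k-1)/(k+2) ignores strong convexity and decays only polynomially. This is not the paper's analysis, only a toy comparison consistent with it.

```python
import numpy as np

# Strongly convex quadratic f(x) = 0.5 * x^T diag(mu, L) x (illustrative choice).
mu, L = 0.01, 1.0
d = np.array([mu, L])
f = lambda x: 0.5 * np.dot(d * x, x)
grad = lambda x: d * x

x0 = np.array([1.0, 1.0])
n_iter, step = 5000, 1.0 / L

# Gradient descent: geometric decay of f at rate (1 - mu/L)^2 per iteration.
x, f_gd = x0.copy(), []
for k in range(n_iter):
    x = x - step * grad(x)
    f_gd.append(f(x))

# Classical Nesterov scheme with momentum (k - 1)/(k + 2): the momentum does
# not depend on mu, and the decay of f is oscillatory and only polynomial here.
x, x_prev, f_nest = x0.copy(), x0.copy(), []
for k in range(1, n_iter + 1):
    y = x + (k - 1.0) / (k + 2.0) * (x - x_prev)
    x_prev = x
    x = y - step * grad(y)
    f_nest.append(f(x))

for k in [100, 1000, 5000]:
    print(f"k={k:5d}  GD: {f_gd[k-1]:.3e}  Nesterov: {f_nest[k-1]:.3e}")
```

For small iteration counts the accelerated scheme may still be ahead; the geometric rate of gradient descent only dominates once k is large relative to the condition number L/mu, which is exactly the regime where the distinction in the abstract matters.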
Solving ill-posed inverse problems can be done accurately if a regularizer well adapted to the nature of the data is available. Such a regularizer can be systematically linked to the distribution of the data itself through the maximum a posteriori Bayesian framework. Recently, regularizers designed with the help of deep neural networks (DNNs) have achieved impressive success. Such regularizers are typically learned from large datasets. To reduce the computational burden of this task, we propose to adapt the compressive learning framework to the learning of regularizers parametrized by DNNs. Our work shows the feasibility of batchless learning of regularizers from a compressed dataset. To achieve this, we propose an approximation of the compression operator that can be computed explicitly for the task of learning a DNN-based regularizer. We show that the proposed regularizer is capable of modeling a complex regularity prior and can be used for denoising.
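To fix ideas on how a DNN-parametrized regularizer enters the maximum a posteriori framework for denoising, here is a minimal sketch. The Regularizer architecture, the map_denoise helper, and all hyperparameters are hypothetical stand-ins; the regularizer below is untrained, and the compressive, batchless learning of its weights described in the abstract is not shown.

```python
import torch

# Hypothetical learned regularizer R_theta: a small convolutional network that
# maps an image to a scalar "irregularity" score (illustrative architecture).
class Regularizer(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x).pow(2).mean()

def map_denoise(y, reg, lam=0.1, steps=200, lr=0.05):
    """MAP-style denoising: minimize 0.5 * ||x - y||^2 + lam * R_theta(x) over x."""
    x = y.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.5 * (x - y).pow(2).sum() + lam * reg(x)
        loss.backward()
        opt.step()
    return x.detach()

# Toy usage: denoise a random "noisy image" with the (untrained) regularizer.
y = torch.randn(1, 1, 32, 32)
x_hat = map_denoise(y, Regularizer())
print(x_hat.shape)
```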