Dilated Convolutions have been shown to be highly useful for the task of image segmentation. By introducing gaps into convolutional filters, they enable the use of larger receptive fields without increasing the original kernel size. Even though this allows for the inexpensive capturing of features at different scales, the structure of the dilated convolutional filter leads to a loss of information. We hypothesise that inexpensive modifications to Dilated Convolutional Neural Networks, such as additional averaging layers, could overcome this limitation. In this project we test this hypothesis by evaluating the effect of these modifications on a state-of-the-art image segmentation system and compare them to existing approaches with the same objective. Our experiments show that our proposed methods improve the performance of dilated convolutions for image segmentation. Crucially, our modifications achieve these results at a much lower computational cost than previous smoothing approaches.
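As a rough illustration of the kind of inexpensive modification discussed above, the following PyTorch sketch places an averaging layer after a dilated convolution to spread each response over the gaps introduced by the dilation. The channel counts, dilation factor, and 3x3 averaging kernel are illustrative assumptions, not the exact configuration evaluated in the project.

```python
import torch
import torch.nn as nn

class SmoothedDilatedConv(nn.Module):
    """Dilated convolution followed by a cheap averaging layer.

    The stride-1 AvgPool2d mixes neighbouring activations, which is one
    simple way to counteract the gridding effect of the dilated filter.
    Hypothetical configuration, for illustration only.
    """

    def __init__(self, in_channels: int, out_channels: int, dilation: int = 2):
        super().__init__()
        self.dilated = nn.Conv2d(
            in_channels, out_channels, kernel_size=3,
            dilation=dilation, padding=dilation,  # keeps the spatial size
        )
        # 3x3 average pooling with stride 1 preserves resolution while
        # smoothing over the holes of the dilated kernel.
        self.smooth = nn.AvgPool2d(kernel_size=3, stride=1, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.smooth(self.dilated(x))

# Example: a batch of 8 RGB images of size 64x64.
x = torch.randn(8, 3, 64, 64)
y = SmoothedDilatedConv(3, 16, dilation=2)(x)
print(y.shape)  # torch.Size([8, 16, 64, 64])
```

The averaging layer adds no learnable parameters, which is why its computational overhead stays small compared to learned smoothing filters.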
We provide matching upper and lower bounds of order σ^2/log(d/n) for the prediction error of the minimum 1-norm interpolator, a.k.a. basis pursuit. Our result is tight up to negligible terms when d ≫ n, and is the first to imply asymptotic consistency of noisy minimum-norm interpolation for isotropic features and sparse ground truths. Our work complements the literature on "benign overfitting" for minimum 2-norm interpolation, where asymptotic consistency can be achieved only when the features are effectively low-dimensional.
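For concreteness, the minimum 1-norm interpolator is the solution of the basis pursuit program min ||w||_1 subject to Xw = y. A minimal sketch solving it with cvxpy on synthetic data follows; the dimensions, sparsity level, and noise level are illustrative assumptions, not the settings from the analysis.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d, s, sigma = 50, 500, 5, 0.1                  # n << d, sparse ground truth

X = rng.standard_normal((n, d))                   # isotropic features
w_star = np.zeros(d)
w_star[:s] = 1.0                                  # s-sparse ground truth
y = X @ w_star + sigma * rng.standard_normal(n)   # noisy observations

# Basis pursuit: minimum 1-norm interpolation of the noisy data.
w = cp.Variable(d)
problem = cp.Problem(cp.Minimize(cp.norm1(w)), [X @ w == y])
problem.solve()

# For isotropic features the prediction error equals ||w_hat - w_star||^2.
print(np.sum((w.value - w_star) ** 2))
```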
Good generalization performance on high-dimensional data crucially hinges on a simple structure of the ground truth and a corresponding strong inductive bias of the estimator. Even though this intuition is valid for regularized models, in this paper we caution against a strong inductive bias for interpolation in the presence of noise: Our results suggest that, while a stronger inductive bias encourages a simpler structure that is more aligned with the ground truth, it also increases the detrimental effect of noise. Specifically, for both linear regression and classification with a sparse ground truth, we prove that minimum p-norm and maximum p-margin interpolators achieve fast polynomial rates up to order 1/n for p > 1 compared to a logarithmic rate for p = 1. Finally, we provide experimental evidence that this trade-off may also play a crucial role in understanding non-linear interpolating models used in practice.
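The interpolators compared above can all be written as min ||w||_p subject to Xw = y. The sketch below (again using cvxpy on synthetic data; the particular values of p and the problem sizes are illustrative choices) contrasts p = 1 with mildly larger exponents.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, d, s, sigma = 50, 500, 5, 0.1

X = rng.standard_normal((n, d))
w_star = np.zeros(d)
w_star[:s] = 1.0
y = X @ w_star + sigma * rng.standard_normal(n)

def min_p_norm_interpolator(p: float) -> np.ndarray:
    """Solve min ||w||_p subject to Xw = y (convex for p >= 1)."""
    w = cp.Variable(d)
    cp.Problem(cp.Minimize(cp.norm(w, p)), [X @ w == y]).solve()
    return w.value

for p in (1.0, 1.1, 2.0):
    w_hat = min_p_norm_interpolator(p)
    print(p, np.sum((w_hat - w_star) ** 2))   # estimation error as p varies
```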
Differential privacy is the de facto standard for protecting privacy in a variety of applications. One of the key challenges is private data release, which is particularly relevant in scenarios where limited information about the desired statistics is available beforehand. Recent work has presented a differentially private data release algorithm that achieves optimal rates of order n^(-1/d), with n being the size of the dataset and d being the dimension, for the worst-case error over all Lipschitz continuous statistics. This type of guarantee is desirable in many practical applications, as for instance it ensures that clusters present in the data are preserved. However, due to the "slow" rates, it is often infeasible in practice unless the dimension of the data is small. We demonstrate that these rates can be significantly improved to n^(-1/s) when only guarantees over s-sparse Lipschitz continuous functions are required, or to n^(-1/(s+1)) when the data lies on an unknown s-dimensional subspace, disregarding logarithmic factors. We therefore obtain practically meaningful rates for moderate constants s, which motivates future work on computationally efficient approximate algorithms for this problem.
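The release algorithms above preserve all (sparse) Lipschitz statistics simultaneously and are considerably more involved than the following sketch, which only illustrates the basic Laplace mechanism for a single bounded Lipschitz statistic. The statistic g, privacy budget eps, and data are hypothetical choices used purely for illustration, not the release procedure analysed here.

```python
import numpy as np

rng = np.random.default_rng(2)

def laplace_mechanism(data: np.ndarray, g, eps: float) -> float:
    """Release the empirical mean of a statistic g with values in [0, 1]
    under eps-differential privacy via the Laplace mechanism.

    Replacing one of the n data points changes the mean by at most 1/n,
    so the L1-sensitivity is 1/n and Laplace noise of scale 1/(n*eps) suffices.
    """
    n = len(data)
    true_value = np.mean([g(x) for x in data])
    return true_value + rng.laplace(scale=1.0 / (n * eps))

# Hypothetical example: n points in [0, 1]^d and a 1-Lipschitz statistic.
n, d = 1000, 10
data = rng.uniform(size=(n, d))
g = lambda x: np.clip(np.linalg.norm(x - 0.5) / np.sqrt(d), 0.0, 1.0)

print(laplace_mechanism(data, g, eps=1.0))
```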
Numerous recent works show that overparameterization implicitly reduces variance for min-norm interpolators and max-margin classifiers. These findings suggest that ridge regularization has vanishing benefits in high dimensions. We challenge this narrative by showing that, even in the absence of noise, avoiding interpolation through ridge regularization can significantly improve generalization. We prove this phenomenon for the robust risk of both linear regression and classification and hence provide the first theoretical result on robust overfitting.
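A minimal numerical sketch of the quantities being compared, for linear regression with a noiseless sparse ground truth: the minimum 2-norm interpolator (the ridgeless limit) versus a ridge-regularized estimator, evaluated under worst-case l_inf input perturbations. All problem sizes, the regularization strength, and the perturbation radius are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, s, eps = 50, 500, 5, 0.05          # overparameterized, noiseless

X = rng.standard_normal((n, d))
w_star = np.zeros(d)
w_star[:s] = 1.0
y = X @ w_star                           # no noise

def ridge(lam: float) -> np.ndarray:
    """Closed-form ridge estimator; lam -> 0 recovers the min 2-norm interpolator."""
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_interp = np.linalg.pinv(X) @ y         # minimum 2-norm interpolator
w_ridge = ridge(1.0)                     # regularized, does not interpolate

def robust_risk(w: np.ndarray, n_test: int = 5000) -> float:
    """Monte Carlo estimate of the worst-case squared error under l_inf
    perturbations of radius eps; for a linear model,
    max_{||delta||_inf <= eps} (w.(x+delta) - y)^2 = (|w.x - y| + eps*||w||_1)^2.
    """
    X_test = rng.standard_normal((n_test, d))
    y_test = X_test @ w_star
    return np.mean((np.abs(X_test @ w - y_test) + eps * np.abs(w).sum()) ** 2)

print("interpolator:", robust_risk(w_interp))
print("ridge:       ", robust_risk(w_ridge))
```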