Habitat selection can be considered a hierarchical process in which animals satisfy their habitat requirements at different ecological scales. Theory predicts that spatial and temporal scales should co-vary in most ecological processes and that the most limiting factors should drive habitat selection at coarse ecological scales, but be less influential at finer scales. Using detailed location data on roe deer Capreolus capreolus inhabiting the Bavarian Forest National Park, Germany, we investigated habitat selection at several spatial and temporal scales. We tested 1) whether time-varying patterns were governed by factors reported as having the largest effects on fitness, 2) whether the trade-off between forage and predation risk differed among spatial and temporal scales and 3) whether spatial and temporal scales are positively associated. We analysed the variation in habitat selection within the landscape and within home ranges at monthly intervals, with respect to land-cover type and proxies of food and cover over seasonal and diurnal temporal scales. The fine-scale temporal variation follows a nycthemeral cycle linked to diurnal variation in human disturbance. The large-scale variation matches seasonal plant phenology, suggesting that food resources are a greater limiting factor than lynx predation risk. The trade-off between selection for food and cover was similar at the seasonal and diurnal scales. Habitat selection at the different scales may be a consequence of the temporal variation and predictability of the limiting factors as much as of their association with fitness. The landscape of fear might have less importance at the studied scale of habitat selection than generally accepted because of the predator's hunting strategy. Finally, seasonal variation in habitat selection was similar at the large and small spatial scales, which may arise because of the marked philopatry of roe deer. This difference is expected to be greater for more wide-ranging herbivores.
In nonparametric classification and regression problems, regularized kernel methods, in particular support vector machines, attract much attention in theoretical and in applied statistics. In an abstract sense, regularized kernel methods (simply called SVMs here) can be seen as regularized M-estimators for a parameter in a (typically infinite-dimensional) reproducing kernel Hilbert space. For smooth loss functions L, it is shown that the difference between the estimator, i.e. the empirical SVM f_{L,D_n,λ_{D_n}}, and the theoretical SVM f_{L,P,λ_0} is asymptotically normal with rate √n. That is, √n (f_{L,D_n,λ_{D_n}} − f_{L,P,λ_0}) converges weakly to a Gaussian process in the reproducing kernel Hilbert space. As is common in real applications, the choice of the regularization parameter λ_{D_n} in f_{L,D_n,λ_{D_n}} may depend on the data D_n. The proof proceeds by an application of the functional delta-method and by showing that the SVM functional P ↦ f_{L,P,λ} is suitably Hadamard-differentiable.
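The regularized M-estimation view above can be illustrated with a minimal sketch: the empirical estimator minimises an average smooth loss plus a squared RKHS norm penalty. The sketch below uses scikit-learn's KernelRidge (squared loss, Gaussian RBF kernel) as one concrete instance of such an estimator; the toy data, kernel parameters, and regularization strength are illustrative assumptions, not choices made in the paper.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Toy regression data. The empirical estimator minimises
#   (1/n) * sum_i L(y_i, f(x_i)) + lambda * ||f||_H^2
# over the RKHS H induced by the chosen kernel (here: Gaussian RBF).
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=200)

# KernelRidge uses the smooth squared loss; `alpha` plays the role of
# the regularization parameter (illustrative value, not data-driven here)
f_hat = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5).fit(X, y)
pred = f_hat.predict(np.array([[0.0]]))  # estimate of f at x = 0
```

In the paper's notation, refitting this estimator on fresh samples D_n of growing size and rescaling the deviation from the population solution by √n is what yields the Gaussian limit.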
In computational sciences, including computational statistics, machine learning, and bioinformatics, most abstracts of articles presenting new supervised learning methods end with a sentence such as "our method performed better than existing methods on real data sets", e.g. in terms of error rate. However, these claims are often not based on proper statistical tests, and when such tests are performed (as is common in the machine learning literature), the tested hypothesis is not clearly defined and little attention is paid to the type I and type II errors. In the present paper we aim to fill this gap by providing a proper statistical framework for hypothesis tests comparing the performance of supervised learning methods based on several real data sets with unknown underlying distribution. After giving a statistical interpretation of ad hoc tests commonly performed by machine learning scientists, we devote special attention to power issues and suggest a simple method to determine the number of data sets to be included in a comparison study to reach adequate power. These methods are illustrated through three comparison studies from the literature and an exemplary benchmarking study using gene expression microarray data. All our results can be reproduced using R code and data sets available from the companion website.
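The two ingredients of such a framework, a paired test over data sets and power-based sample-size planning, can be sketched as follows (in Python rather than the paper's R). The simulated error rates, the choice of the Wilcoxon signed-rank test, and the target effect size d = 0.5 are all illustrative assumptions, not the paper's actual procedure or data.

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.power import TTestPower

# Hypothetical paired error rates of two learning methods on 10 data sets
rng = np.random.default_rng(0)
err_a = rng.uniform(0.10, 0.30, size=10)             # method A
err_b = err_a - 0.03 + rng.normal(0, 0.01, size=10)  # method B, ~3 points better

# Paired test across data sets: H0 = no difference in error between methods
stat, p = wilcoxon(err_a, err_b)

# Sample-size planning: number of data sets needed for 80% power to detect
# a medium standardized effect (d = 0.5) with a paired t-test at alpha = 0.05
n_needed = TTestPower().solve_power(effect_size=0.5, power=0.8, alpha=0.05)
```

Here each data set contributes one paired observation, so "sample size" means the number of benchmark data sets, which is exactly the quantity the abstract proposes to plan for.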
Support vector machines (SVMs) have attracted much attention in theoretical and in applied statistics. The main topics of recent interest are consistency, learning rates and robustness. We address the open problem of whether SVMs are qualitatively robust. Our results show that SVMs are qualitatively robust for any fixed regularization parameter λ. However, under extremely mild conditions on the SVM, it turns out that SVMs are no longer qualitatively robust for any null sequence λ_n, which is the classical type of sequence needed to obtain universal consistency. This lack of qualitative robustness is of a rather theoretical nature because we show that, in any case, SVMs fulfill a finite-sample qualitative robustness property. For a fixed regularization parameter, SVMs can be represented by a functional on the set of all probability measures. Qualitative robustness is proven by showing that this functional is continuous with respect to the topology generated by weak convergence of probability measures. Combined with the existence and uniqueness of SVMs, our results show that SVMs are the solutions of a well-posed mathematical problem in Hadamard's sense.