We investigate the estimation of the extreme value index when the data are subject to random censorship. We prove, in a unified way, detailed asymptotic normality results for various estimators of the extreme value index and use these estimators as the main building block for estimators of extreme quantiles. We illustrate the quality of these methods by a small simulation study and apply the estimators to medical data.
Published in Bernoulli (http://dx.doi.org/10.3150/07-BEJ104) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/bernoulli/).
Curve estimation problems can often be formulated in terms of a closed and convex parameter set embedded in a real Hilbert space. This is the case, for instance, if the curve of interest is a monotone or convex density or regression function, the support function of a convex set, or the Pickands dependence function of an extreme-value copula. The topic of this paper is the estimator that results when an arbitrary initial estimator, possibly falling outside the parameter set, is projected onto this parameter set. If direct computation of the projection is infeasible, the full parameter set can be replaced by an approximating sequence of finite-dimensional subsets. Asymptotic properties of the initial estimator sequence in the Hilbert space topology transfer easily to those of the projected sequence and its approximating sequence.

1 INTRODUCTION

Suppose we wish to estimate a function or a vector of functions subject to shape constraints. The functions could for instance be regression functions, probability density functions, hazard rates, and so on, and the shape constraint could for instance be that the functions are monotone, convex, non-negative, or a combination thereof. We have at our disposal an estimator, but unfortunately this estimator is not guaranteed to satisfy the constraints. How, then, should we modify this estimator so that the constraints are met? If all relevant information in the sample is already contained in the initial estimator, then the modified estimator should depend on the data only through this initial estimator. Moreover, the modification should be as small as possible, measured along some metric on the appropriate function class. Consider, for instance, the problem of estimating a regression function that is known to be non-decreasing. Mammen [18] proposes a two-step procedure whereby an initial Nadaraya-Watson kernel estimator is isotonized.
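As an illustration of the projection step in the monotone-regression example above, the L2-projection of a vector of initial estimates onto the cone of non-decreasing sequences can be computed by the classical pool-adjacent-violators algorithm (PAVA). The following is a minimal sketch, not the paper's own implementation; the function name `project_monotone` is ours.

```python
import numpy as np

def project_monotone(y, w=None):
    """L2-projection of the vector y onto the cone of non-decreasing
    sequences, via the pool-adjacent-violators algorithm (PAVA).
    Optional weights w give a weighted least-squares projection."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    # Each block is [mean, total weight, count of merged points].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge adjacent blocks while monotonicity is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, n1 + n2])
    # Expand each block's mean back to its constituent points.
    return np.concatenate([np.full(n, m) for m, _, n in blocks])

# A non-monotone initial estimate is projected onto the monotone cone:
fitted = project_monotone([1.0, 3.0, 2.0, 4.0])  # -> [1.0, 2.5, 2.5, 4.0]
```

The projected estimator depends on the data only through the initial estimates `y`, and the change is minimal in the (weighted) L2 sense, matching the two requirements stated in the introduction.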