We introduce a continuous-time framework for the prediction of outstanding liabilities, in which chain-ladder development factors arise as a histogram estimator of a cost-weighted hazard function running in reversed development time. We use this formulation to show that, under our assumptions on the individual data, chain-ladder is consistent. Consistency is understood in the sense that the number of observed claims grows to infinity while the level of aggregation tends to zero. We propose alternatives to chain-ladder development factors by replacing the histogram estimator with kernel smoothers and by estimating a cost-weighted density instead of a cost-weighted hazard. Finally, we provide a real-data example and a simulation study confirming the strengths of the proposed alternatives.

We start by putting the unique sampling scheme of chain-ladder into a micro-structure framework. We observe counting processes (N_i(t))_{t ∈ [0,T]}, T > 0, for claims i = 1, ..., n, and call t development time. Each counting process starts with value zero at the underwriting date underlying its claim. It jumps, with jump size one, whenever a payment is made. At every jump, we additionally observe a mark indicating the size of the payment made. The number of counting processes, n, varies over calendar time: we retrospectively follow only those claims for which at least one payment has been observed, i.e., we do not follow every claim in the policy book. In this paper, we make the following assumptions.

[M1] All claims are independent.
[M2] Every claim consists of only one payment.

Assumptions [M1] and [M2] are rather strong but are made to simplify the mathematical derivations, yielding a first and clean step towards a better understanding of chain-ladder on a
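As a rough illustration of the aggregated quantity this abstract reinterprets, the sketch below builds a run-off triangle of payment counts from simulated individual claims, each with a single payment as in [M2], and computes classical chain-ladder development factors. It is a minimal sketch on assumed toy data, not the paper's cost-weighted hazard estimator; all names and the simulation setup are illustrative.

```python
import numpy as np

# Minimal sketch (not the paper's estimator): build a run-off triangle of
# payment counts from individual claims, each with a single payment ([M2]),
# and compute classical chain-ladder development factors from it.

rng = np.random.default_rng(0)
n, m = 5000, 10                      # number of claims, number of development periods
accident = rng.integers(0, m, n)     # accident/underwriting period of each claim
delay = rng.integers(0, m, n)        # development delay until the single payment

# Incremental triangle: rows = accident period, columns = development period.
tri = np.zeros((m, m))
for i, k in zip(accident, delay):
    if i + k < m:                    # only the upper triangle is observed "today"
        tri[i, k] += 1

cum = tri.cumsum(axis=1)             # cumulative payment counts

# Chain-ladder development factor for column k -> k+1, using all accident
# periods for which both columns are fully observed.
factors = [
    cum[: m - k - 1, k + 1].sum() / cum[: m - k - 1, k].sum()
    for k in range(m - 1)
]
print(np.round(factors, 3))
```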
In-sample forecasting is a recent continuous modification of well-known forecasting methods based on aggregated data. These aggregated methods are known as age-cohort methods in demography, economics, epidemiology and sociology, and as chain ladder in non-life insurance. Data are organized in a two-way table with age and cohort as indices, but without measures of exposure. It has recently been established that such structured forecasting methods based on aggregated data can be interpreted as structured histogram estimators. Continuous in-sample forecasting transfers these classical forecasting models into a modern statistical world, including smoothing methodology that is more efficient than smoothing via histograms. All in-sample forecasting estimators are collected and their performance is compared via a finite-sample simulation study. All methods are extended via multiplicative bias correction. Asymptotic theory is developed for the histogram-type method of sieves and for the multiplicatively corrected estimators. The multiplicative bias corrected estimators improve on all other known in-sample forecasters in the simulation study. The density projection approach seems to have the best performance, with forecasting based on survival densities being the runner-up.

The forecasting of incurred but not reported (IBNR) claims is often solved via model (2): for each past claim, one considers the date (cohort i) on which the accident happened and the delay (age k) until the claim was reported to the insurer. Hence, given a certain year-wise aggregation, cohort and age satisfy i + k − 1 ≤ today. This information is then used to estimate the number of future claims µ_ik, i + k − 1 > today, for accidents in the past, i ≤ today. Under model (2), the parameters α_i and β_k for each cohort i and age k can be estimated from past data. Assuming a maximum delay (usually 7 to 10 years in practice, depending on the business line), the estimates of the parameters can be used to forecast the number of future claims with i + k − 1 > today. More details of this age-cohort reserving example are given in the recent contribution of Harnau and Nielsen (2018) and are also included in the highly cited overview paper on actuarial reserving (England and Verrall, 2002). Other examples where no significant period effect has been found include, among many others, cancer studies (Leung et al., 2002; Remontet et al., 2003), returns to education (Duraisamy, 2002), unemployment numbers (Wilke, 2017), and mesothelioma mortality (Peto et al., 1995; Martínez-Miranda et al., 2014).

Given the importance of age-period-cohort models and age-cohort models, it is surprising that continuous versions have not been considered much in the literature. Continuous modeling avoids inefficient pre-smoothing and is in line with recent trends around big data and the drive towards modeling and understanding every individual separately. Modeling every individual separately, possibly with additional covariates, results in the estimation of a large number of parameters. An increase of dimension means that ...
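Assuming that "model (2)" is the multiplicative age-cohort specification µ_ik = α_i β_k (the excerpt does not restate it), the sketch below fits the parameters to an observed upper triangle by simple iterative scaling of the Poisson likelihood and fills in the unobserved cells with i + k − 1 > today. The function name and the toy triangle are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch: fit the multiplicative age-cohort model mu_{ik} = alpha_i * beta_k
# (assumed to be "model (2)") to the observed upper triangle i + k - 1 <= today
# by iterative scaling, then forecast the unobserved lower triangle.

def age_cohort_forecast(tri, n_iter=200):
    m = tri.shape[0]
    observed = np.add.outer(np.arange(m), np.arange(m)) < m   # i + k - 1 <= today (0-indexed: i + k <= m - 1)
    alpha, beta = np.ones(m), np.ones(m)
    y = np.where(observed, tri, 0.0)
    for _ in range(n_iter):
        alpha = y.sum(axis=1) / (observed * beta).sum(axis=1)     # cohort effects given beta
        beta = y.sum(axis=0) / (observed.T * alpha).sum(axis=1)   # age effects given alpha
    mu = np.outer(alpha, beta)
    return np.where(observed, tri, mu)     # keep data in the upper triangle, forecasts below

# Toy example: a 4x4 incremental triangle of claim counts.
tri = np.array([
    [120.,  60., 30., 10.],
    [130.,  70., 35.,  0.],
    [110.,  55.,  0.,  0.],
    [140.,   0.,  0.,  0.],
])
print(np.round(age_cohort_forecast(tri), 1))
```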
We introduce a generalization of the one-dimensional accelerated failure time model allowing the covariate effect to be any positive function of the covariate. This function and the baseline hazard rate are estimated nonparametrically via an iterative algorithm. In an application in non-life reserving, the survival time models the settlement delay of a claim and the covariate effect is often called operational time. The accident date of a claim serves as covariate. The estimated hazard rate is a nonparametric continuous-time alternative to chain-ladder development factors in reserving and is used to forecast outstanding liabilities. Hence, we provide an extension of the chain-ladder framework for claim numbers without the assumption of independence between settlement delay and accident date. Our proposed algorithm is an unsupervised learning approach to reserving that detects operational time in the data and adjusts for it in the estimation process. Advantages of the new estimation method are illustrated in a data set consisting of paid claims from a motor insurance business line on which we forecast the number of outstanding claims.
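For orientation, the classical one-dimensional accelerated failure time model and the generalization described above can be written as follows; the notation is ours and may differ from the paper's.

```latex
% Classical AFT: the covariate acts through theta(x) = exp(beta x).
% The generalization lets theta be an arbitrary positive function,
% estimated nonparametrically together with the baseline hazard alpha_0.
\[
  \alpha(t \mid x) \;=\; \theta(x)\,\alpha_0\bigl(t\,\theta(x)\bigr),
  \qquad \theta(\cdot) > 0 .
\]
% In the reserving application, t is the settlement delay, x the accident
% date, and theta(x) plays the role of operational time.
```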
Smooth backfitting was first introduced in an additive regression setting via a direct projection alternative to the classic backfitting method of Buja, Hastie and Tibshirani. This paper translates the original smooth backfitting concept to a survival model with an additively structured hazard. The model allows for censoring and truncation patterns occurring in many applications such as medical studies or actuarial reserving. Our estimators are shown to be a projection of the data onto the space of multivariate hazard functions with smooth additive components. Hence, our hazard estimator is the closest nonparametric additive fit even if the actual hazard rate is not additive. This differs from other additive structure estimators, for which it is not clear what is being estimated if the model is not true. We provide full asymptotic theory for our estimators as well as an implementation of the proposed estimators that shows good performance in practice, even for high-dimensional covariates.
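For orientation only, here is a minimal sketch of the classical backfitting idea for additive mean regression that the abstract cites as the starting point. It is not the paper's smooth backfitting hazard estimator; the smoother, bandwidth, and simulated data are illustrative assumptions.

```python
import numpy as np

# Classical backfitting for y = m1(x1) + m2(x2) + noise: cycle through the
# components, each time smoothing the partial residuals of the others.

def nw_smooth(x, r, h=0.2):
    """Nadaraya-Watson (Gaussian kernel) smoother of responses r against x."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (w * r[None, :]).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(1)
n = 500
x1, x2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x1) + x2 ** 2 + rng.normal(0, 0.3, n)

f1, f2 = np.zeros(n), np.zeros(n)          # additive component fits at the data points
for _ in range(50):                        # backfitting loop
    f1 = nw_smooth(x1, y - y.mean() - f2)  # smooth partial residuals against x1
    f1 -= f1.mean()                        # center for identifiability
    f2 = nw_smooth(x2, y - y.mean() - f1)  # smooth partial residuals against x2
    f2 -= f2.mean()

fitted = y.mean() + f1 + f2
print(round(float(np.mean((y - fitted) ** 2)), 3))   # in-sample residual variance
```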