The seminar will be opened by Stephane Loisel and closed by Hansjoerg Albrecher.

Network analytics for insurance fraud detection: a critical case study
There has been increasing interest in fraud detection methods, driven by new regulations and by the financial losses linked to fraud. One of the state-of-the-art methods to fight fraud is network analytics, which leverages the interactions between different entities to detect complex patterns that are indicative of fraud. However, network analytics has only recently been applied to fraud detection in the actuarial literature, and although it shows much potential, many network methods have not yet been applied. This paper extends the literature in two main ways. First, we review and apply multiple methods in the context of insurance fraud and assess their predictive power against each other. Second, we analyse the added value of network features over intrinsic features for detecting fraud. We conclude that (1) complex methods do not necessarily outperform basic network features, and that (2) network analytics helps to detect different fraud patterns compared to models trained on claim-specific features alone.

An empirical study of profit and loss allocations
The attribution of each business year's profit and loss (P&L) to different risk factors (e.g., interest rates, credit spreads, foreign exchange rates) is a regulatory requirement, e.g., under Solvency II. Three decomposition principles are prevalent: the one-at-a-time (OAT), sequential updating (SU) and average sequential updating (ASU) decompositions. Using financial market data from 2003 to 2022, we demonstrate that the OAT decomposition can generate significant unexplained P&L and that the SU decomposition depends significantly on the order, or labelling, of the risk factors. On the basis of an investment in a foreign stock, we further explain that the SU decomposition is not able to identify all relevant risk factors, which potentially affects the hedging strategy of the portfolio manager. In conclusion, we suggest using the ASU decomposition in practice.
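To make the three decomposition principles concrete, here is a minimal sketch, not taken from the paper: the valuation function and risk-factor values are hypothetical, and ASU is taken here as the average of the SU contributions over all orderings of the factors (a Shapley-type attribution).

```python
import itertools
import numpy as np

def pnl_decompositions(value, x0, x1):
    """Decompose the P&L value(x1) - value(x0) into per-risk-factor contributions.

    value : callable mapping a vector of risk factors to a portfolio value
    x0, x1: risk-factor vectors at the start and the end of the period
    Returns the OAT, SU (natural ordering) and ASU contributions.
    """
    x0, x1 = np.asarray(x0, float), np.asarray(x1, float)
    n = len(x0)

    # One-at-a-time: move each factor alone, keep the others at their start values.
    oat = np.array([value(np.where(np.arange(n) == i, x1, x0)) - value(x0)
                    for i in range(n)])

    # Sequential updating for a given ordering of the factors.
    def su(order):
        contrib = np.zeros(n)
        current = x0.copy()
        for i in order:
            updated = current.copy()
            updated[i] = x1[i]
            contrib[i] = value(updated) - value(current)
            current = updated
        return contrib

    su_natural = su(range(n))

    # Average sequential updating: average the SU contributions over all orderings.
    perms = list(itertools.permutations(range(n)))
    asu = sum(su(p) for p in perms) / len(perms)
    return oat, su_natural, asu

# Hypothetical example: value of a foreign stock position = stock price * FX rate.
value = lambda x: x[0] * x[1]
oat, su_, asu = pnl_decompositions(value, x0=[100.0, 1.10], x1=[110.0, 1.05])
print(oat, oat.sum())   # OAT leaves an unexplained cross term
print(su_, su_.sum())   # SU explains the full P&L but depends on the ordering
print(asu, asu.sum())   # ASU explains the full P&L and is order-independent
```

In this two-factor example the OAT contributions do not sum to the realised P&L (the cross term between stock price and FX rate is unexplained), SU explains the full P&L but changes when the factors are updated in the other order, and ASU explains the full P&L without reference to any ordering.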
Efficient simulation and valuation of equity-indexed annuities under a two-factor G2++ model
Equity-indexed annuities (EIAs) with investment guarantees are pension products that are sensitive to changes in the interest rate environment. A flexible and common choice for modelling this risk factor is a Hull–White model in its G2++ variant. We investigate the valuation of EIAs in this model setting and extend the literature by introducing a more efficient framework for Monte Carlo simulation. In addition, we build on previous work by adapting an approach based on scenario matrices to a two-factor G2++ model. This method relies neither on simulations nor on Fourier transforms. In numerical studies, we demonstrate its fast convergence and the limitations of techniques that rely on the independence of annual returns and the central limit theorem.

Robust asymptotic insurance-finance arbitrage
This paper studies the valuation of insurance contracts linked to financial markets, for example through interest rates or in equity-linked insurance products. We build upon the concept of insurance-finance arbitrage introduced by Artzner et al. (Math Financ, 2024), extending their work by incorporating model uncertainty. This is achieved by introducing statistical uncertainty in the underlying dynamics, represented by a set of priors *P*. Within this framework we propose the notion of *robust asymptotic insurance-finance arbitrage* (RIFA) and characterize the absence of such strategies in terms of the new concept of *QP*-evaluations. This nonlinear two-step evaluation ensures the absence of RIFA. Moreover, it dominates all two-step evaluations that agree on the set of priors *P*, and our analysis highlights the role of *QP*-evaluations by showing that all such two-step evaluations are free of RIFA. Furthermore, we introduce a doubly stochastic model to address uncertainty for surrender and survival, utilizing copulas to define conditional dependence. This setting illustrates how the *QP*-evaluation can be applied to the pricing of hybrid insurance products, highlighting the flexibility and potential of the proposed approach.

On duration effects in non-life insurance pricing
The paper discusses duration effects on the consistency of mean and dispersion parameter estimators in exponential dispersion families (EDFs), the standard models used for non-life insurance pricing. The focus is on the standard generalised linear model assumptions, where both the mean and the variance, conditional on duration, are linear functions of duration. We derive simple convergence results that highlight the consequences when the linear conditional moment assumptions are not satisfied. These results illustrate that (i) the resulting mean estimators always have a relevant asymptotic interpretation in terms of the duration-adjusted actuarially fair premium, a premium that agrees with the standard actuarial premium, using a duration equal to one, only when the expected value is linear in the duration; (ii) deviance-based estimators of the dispersion parameter in an EDF should be avoided in favour of Pearson estimators; and (iii) unless the linear moment assumptions are satisfied, consistency of dispersion and plug-in variance estimators cannot be guaranteed, and spurious over-dispersion may result. The results provide explicit conditions on the underlying data-generating process that lead to spurious over-dispersion and that can be used for model checking. This is illustrated on real insurance data, for which it is concluded that the linear moment assumptions are violated, resulting in non-negligible spurious over-dispersion.
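As a rough illustration of point (ii) and of spurious over-dispersion, the following sketch is not from the paper: the data-generating process and all parameters are invented. It simulates claim counts whose conditional mean is deliberately non-linear in duration, fits a Poisson GLM that treats duration as exposure, and compares the Pearson and deviance dispersion estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200_000

# Invented data-generating process: the conditional mean grows like sqrt(duration),
# so it is deliberately NOT linear in duration, violating the linear moment assumption.
duration = rng.uniform(0.05, 1.0, size=n)
claims = rng.poisson(0.3 * np.sqrt(duration))

# Intercept-only Poisson GLM treating duration as exposure,
# i.e. assuming E[claims] = exp(beta) * duration.
X = np.ones((n, 1))
res = sm.GLM(claims, X, family=sm.families.Poisson(), exposure=duration).fit()

# For conditionally Poisson data with a correctly specified mean, the Pearson
# dispersion estimate would be close to one; here the misspecified duration
# effect alone pushes it above one, i.e. spurious over-dispersion.
print("Pearson dispersion: ", res.pearson_chi2 / res.df_resid)
print("Deviance dispersion:", res.deviance / res.df_resid)
```

Because the only misspecification is the duration effect, a Pearson dispersion estimate noticeably above one in this setup is spurious over-dispersion in the sense of the abstract; the deviance-based estimate, which the paper recommends avoiding in favour of the Pearson estimator, is printed alongside for comparison.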
Enhancing actuarial non-life pricing models via transformers
Currently, there is a great deal of research on neural networks for non-life insurance pricing. The usual goal is to improve the predictive power of actuarial pricing and behavioural models via neural networks while building upon the generalized linear model, which is the current industry standard. Our paper contributes to this line of research with novel methods that enhance actuarial non-life models with transformer models for tabular data. We build upon the foundation laid by the combined actuarial neural network and the LocalGLMnet and enhance those models via the feature tokenizer transformer. The manuscript demonstrates the performance of the proposed methods on a real-world claim frequency dataset and compares them with several benchmark models, such as generalized linear models, feed-forward neural networks, combined actuarial neural networks, the LocalGLMnet, and the pure feature tokenizer transformer. The paper shows that the new methods can achieve better results than the benchmark models while preserving the structure of the underlying actuarial models, thereby inheriting and retaining their advantages. The paper also discusses the practical implications and challenges of applying transformer models in actuarial settings.

Bayesian credibility model with heavy tail random variables: calibration of the prior and application to natural disasters and cyber insurance
The Bayesian credibility approach is a method for evaluating a certain risk of a segment of a portfolio (such as a policyholder or a category of policyholders) by compensating for the lack of historical data through the use of a prior distribution. This prior distribution can be thought of as preliminary expertise that gathers information on the target distribution. This paper describes a particular Bayesian credibility model that is well suited to situations where collective data are available to compute the prior and where the distributions of the variables are heavy-tailed. The credibility model we consider aims to obtain a heavy-tailed distribution (namely a Generalized Pareto distribution) at the collective level and provides a closed formula to compute the severity part of the credibility premium at the individual level. Two cases of application are presented: one related to natural disasters and the other to cyber insurance. In the former, a large database on flood events is used as the collective information to define the prior, which is then combined with individual observations at the city level. In the latter, a classical database on data leaks is used to fit a model for the volume of data exposed during a cyber incident, while the historical data of a given firm are taken into account to reflect individual experience.

Is accumulation risk in cyber methodically underestimated?
Many insurers have started to underwrite cyber in recent years and, in parallel, have developed their first actuarial models to cope with this new type of risk. At the portfolio level, two major challenges are the adequate modelling of the dependence structure among cyber losses and the lack of suitable data with which to calibrate the models. The purpose of this article is to highlight the importance of taking a holistic approach to cyber. In particular, we argue that actuarial modelling should not be viewed in isolation, but rather as an integral part of an interconnected value chain together with other processes such as cyber-risk assessment and cyber-claims settlement. We illustrate that otherwise, i.e. if these data-collection processes are not aligned with the actuarial (dependence) model, naïve data collection necessarily leads to a dangerous underestimation of accumulation risk. We illustrate the detrimental effects on the assessment of the dependence structure and portfolio risk using a simple mathematical model for dependence through common vulnerabilities. The study concludes by highlighting the practical implications for insurers.
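The accumulation effect described in this abstract can be sketched with a toy simulation. This is not the paper's model; the loss structure and all parameters are hypothetical: each policy suffers idiosyncratic incidents, a common vulnerability occasionally hits a large share of the portfolio at once, and a naive frequency model calibrated to the pooled incident counts matches the mean but misses the tail.

```python
import numpy as np

rng = np.random.default_rng(1)
n_policies, n_years = 10_000, 10_000   # portfolio size and number of simulated years
lam_idio = 0.05      # idiosyncratic incident rate per policy and year
p_common = 0.01      # yearly probability of a common-vulnerability event
hit_share = 0.10     # expected share of the portfolio hit by such an event
severity = 50_000    # fixed loss per incident, kept deliberately simple

# "True" portfolio: idiosyncratic incidents plus occasional common-vulnerability events.
idio_counts = rng.poisson(lam_idio * n_policies, size=n_years)
common_counts = (rng.binomial(1, p_common, size=n_years)
                 * rng.binomial(n_policies, hit_share, size=n_years))
loss_true = severity * (idio_counts + common_counts)

# Naive model: a single Poisson frequency fitted to the pooled counts, dependence ignored.
lam_naive = (idio_counts + common_counts).mean() / n_policies
loss_naive = severity * rng.poisson(lam_naive * n_policies, size=n_years)

for q in (0.95, 0.995):
    print(f"VaR {q:.1%}   true model: {np.quantile(loss_true, q):>13,.0f}   "
          f"independent model: {np.quantile(loss_naive, q):>13,.0f}")
```

In this toy setting both models produce almost the same expected annual loss, yet the independence assumption severely understates the high quantiles driven by the common-vulnerability years, which is the kind of underestimation of accumulation risk the article warns about.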
Measuring and mitigating biases in motor insurance pricing
The non-life insurance sector operates within a highly competitive and tightly regulated framework and faces a pivotal juncture in the formulation of pricing strategies. Insurers are compelled to harness a range of statistical methodologies and available data to construct optimal pricing structures that align with the overarching corporate strategy while accommodating the dynamics of market competition. Given the fundamental societal role played by insurance, premium rates are subject to rigorous scrutiny by regulatory authorities. Consequently, the act of pricing transcends mere statistical calculation and carries the weight of strategic and societal considerations. These multifaceted concerns may drive insurers to establish equitable premiums with respect to various variables: regulations mandate equitable premiums with respect to factors such as policyholder gender; mutualist groups may, in accordance with their corporate strategies, implement age-based premium fairness; and in certain insurance domains, the presence of serious illnesses or disabilities is emerging as a new dimension for evaluating fairness. Regardless of the motivation prompting an insurer to adopt fairer pricing strategies for a specific variable, the insurer must be able to define, measure, and ultimately mitigate any fairness biases inherent in its pricing practices while upholding standards of consistency and performance. This study provides a comprehensive set of tools for these endeavours and assesses their effectiveness through practical application in the context of automobile insurance. Results show that fairness bias can be found in historical data and models, and that fairer outcomes can be obtained with more fairness-aware approaches.

Credibility theory based on winsorizing
The classical Bühlmann credibility model has been widely applied to premium estimation for group insurance contracts and other insurance types. In this paper, we develop a robust Bühlmann credibility model using the winsorized version of the loss data, also known as the winsorized mean (a robust alternative to the traditional individual mean). This approach assumes that the observed sample data come from a contaminated underlying model containing a small percentage of contaminated observations. The framework provides explicit formulas for the structural parameters in credibility estimation for scale-shape distribution families, location-scale distribution families, and their variants, which are commonly used in insurance risk modeling. Using the theory of *L*-estimators (as opposed to the influence-function approach), we derive the asymptotic properties of the proposed method and validate them through a comprehensive simulation study, comparing their performance to credibility based on the trimmed mean. By varying the winsorizing/trimming thresholds in several parametric models, we find that all structural parameters derived from the winsorized approach are less volatile than those from the trimmed approach, and that using the winsorized mean as a robust risk measure can reduce the influence of parametric loss assumptions on credibility estimation. Additionally, we discuss non-parametric estimation in credibility. Finally, a numerical illustration from the Wisconsin Local Government Property Insurance Fund indicates that the proposed robust credibility approach mitigates the impact of model mis-specification and captures the risk behavior of loss data from a broader perspective.
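To give a rough sense of the mechanics, the following is a minimal sketch, not the paper's closed-form structural-parameter estimators: it simply plugs upper-winsorized loss histories into the classical non-parametric Bühlmann formulas, using hypothetical Pareto-type data.

```python
import numpy as np

def buhlmann_premiums(X):
    """Classical non-parametric Buhlmann credibility premiums.

    X : (r, n) array of losses for r risks over n periods.
    Returns (premiums, Z), where Z is the credibility weight.
    """
    r, n = X.shape
    ind_means = X.mean(axis=1)
    grand_mean = ind_means.mean()
    v_hat = X.var(axis=1, ddof=1).mean()        # expected process variance
    a_hat = ind_means.var(ddof=1) - v_hat / n   # variance of hypothetical means
    if a_hat <= 0:
        return np.full(r, grand_mean), 0.0
    Z = n * a_hat / (n * a_hat + v_hat)
    return Z * ind_means + (1 - Z) * grand_mean, Z

def winsorize_upper(x, k=2):
    """Replace the k largest observations by the (k+1)-th largest (upper winsorizing).
    Sorting is harmless here because the credibility formulas only use per-risk
    means and variances."""
    x = np.sort(np.asarray(x, float))
    x[-k:] = x[-(k + 1)]
    return x

# Hypothetical heavy-tailed experience: 5 risks, 20 years of Pareto-type losses.
rng = np.random.default_rng(42)
X = rng.pareto(2.5, size=(5, 20)) * rng.uniform(5, 15, size=(5, 1))
Xw = np.array([winsorize_upper(row) for row in X])

prem_raw, Z_raw = buhlmann_premiums(X)
prem_win, Z_win = buhlmann_premiums(Xw)
print("credibility weight Z (raw / winsorized):", round(Z_raw, 3), round(Z_win, 3))
print("premiums, raw losses:       ", np.round(prem_raw, 2))
print("premiums, winsorized losses:", np.round(prem_win, 2))
```

Because the winsorized mean caps, rather than discards, the most extreme observations, the resulting structural-parameter estimates are less sensitive to a single extreme year, which is the behaviour the paper formalizes and compares against the trimmed-mean alternative.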