We explore some of the risks related to Artificial Intelligence (AI) from an actuarial perspective, based on research from a transregional industry focus group. We aim to define the key gaps and challenges faced when implementing and utilising modern modelling techniques within traditional actuarial tasks, from a risk perspective and in the context of professional standards and regulations. We explore best practice guidelines in an attempt to define an ideal approach and propose potential next steps towards it. We focus initially on considerations from a traditional actuarial perspective and then, where relevant, consider some implications for non-traditional actuarial work, by way of examples. The examples are not intended to be exhaustive. The group considered potential issues and challenges of using AI related to the following key themes:
Ethical
○ Bias, fairness, and discrimination
○ Individualisation of risk assessment
○ Public interest
Professional
○ Interpretability and explainability
○ Transparency, reproducibility, and replicability
○ Validation and governance
Lack of relevant skills available
Wider themes
This paper aims to provide observations that could help inform industry and professional guidelines or discussion, or to support industry practitioners. It is not intended to replace current regulation, actuarial standards, or guidelines. The paper is aimed at an actuarial and insurance technical audience, specifically those utilising or developing AI, and at actuarial industry bodies.