The introduction of artificial intelligence (AI) capabilities into
business applications delivers significant benefits but requires
organizations to manage the critical ethical risks that AI introduces. We
survey a range of large organizations on their use of enterprise risk
management (ERM) processes and toolsets to predict and control the
ethical risks of AI. Analysis of the survey results identifies four
serious gaps in current ERM systems: (1) AI ethical principles do not
translate effectively into ethical practices; (2) real-time monitoring
of AI ethical risks is lacking; (3) ERM systems emphasize economic
rather than ethical risks; and (4) when ethical risks are identified,
no solutions are readily at hand. To address these gaps, we
propose a proactive approach to manage ethical risks by extending
current ERM frameworks. An enhanced ERM (e-ERM) framework is designed
and then evaluated by focus groups of subject matter experts. We conclude with
observations and future research directions on the need for more
aggressive pro-ethical management oversight as organizations move
toward ubiquitous use of AI-driven business applications.