2020
DOI: 10.1080/15332861.2020.1832817

Evaluating If Trust and Personal Information Privacy Concerns Are Barriers to Using Health Insurance That Explicitly Utilizes AI

Abstract: Trust and privacy have emerged as significant concerns in online transactions. Sharing information on health is especially sensitive but it is necessary for purchasing and utilizing health insurance. Evidence shows that consumers are increasingly comfortable with technology in place of humans, but the expanding use of AI potentially changes this. This research explores whether trust and privacy concern are barriers to the adoption of AI in health insurance. Two scenarios are compared: The first scenario has li…

Cited by 41 publications (18 citation statements)
References 36 publications (44 reference statements)
“…(P31) This constitutes another boundary for PVA design and implementation. Still, it cannot be viewed in isolation from privacy and trust, considering Zarifis et al (2021) finding that trust is lower and privacy concerns are higher when the user can clearly recognize AI.…”
Section: Discussion (mentioning)
confidence: 99%
“…Thus, practitioners are encouraged to take promising design recommendations and adapt them into practice, but measure their effects on end users. For example, it is worthy to investigate whether AI system communication has the potential to alleviate trust issues that end users face with AI systems (Zarifis et al, 2020) or technostress (Tarafdar et al, 2020). Continuous measurement and feedback are important, as each system and use case are unique.…”
Section: Practical Implications (mentioning)
confidence: 99%
“…For example, ML approaches have been used to combat the COVID-19 pandemic through patient outcome prediction, risk assessment and predicting the disease spreading (Dogan et al, 2021), and are an integral component of recommendation systems that curate social media feeds and e-commerce (Batmaz et al, 2019). To reinforce public trust in AI-driven and AI-supported decision making, and to mitigate prejudices (Zarifis et al, 2020) it is pivotal to ensure the explainability of AI-made decisions to the end users of these systems (European Commission, 2020).…”
Section: Introduction (mentioning)
confidence: 99%
“…We increasingly interact with AI through Personal Virtual Assistants (PVA), also referred to as chatbots. These can be convenient, effortless and effective but there are also challenges (Cheng et al, 2021; Zarifis et al, 2021). This paper focuses on the negative emotions towards them.…”
Section: Articles of the Present Issue (mentioning)
confidence: 99%