2021
DOI: 10.1177/00207209211005271

RETRACTED: User acceptance of machine learning models – Integrating several important external variables with technology acceptance model

Abstract: Machine learning models enable data-based decision-making in many areas and have attracted extensive attention. By testing the factors that influence the adoption of machine learning models, this study expands the scope of machine learning models in information technology adoption research. Based on the machine learning background and Technology Acceptance Model, this study integrates the necessary external variables, proposes a research model, and further verifies the validity of the model through the survey …

Cited by 4 publications (8 citation statements)
References 42 publications
“…Finally, the results from the qualitative feedback session with grief researchers and practitioners on our explainable AI UI mockup highlighted several interesting insights into the use of machine learning models in grief care. While earlier works that aimed to develop models for diagnosing mental health conditions tended to emphasize performance [99], our findings echoed those from more recent studies that sought to put such models into practice, showing how the explainability of the model can be equally essential in enhancing user acceptance of the system [88], [118]. In the context of PGD in particular, participants in the interview mentioned that, in practice, diagnosing whether a user has a mental health condition is often subjective; although they do use results from diagnostic tools, these serve only as a reference, and practitioners tend to consider other factors such as symptoms or risk factors as well.…”
Section: Discussion (supporting)
confidence: 69%
“…Previous studies demonstrated that the accuracy of an AI model played a key role in the level of trust users had in the system (which in turn significantly influenced their acceptance of the system) [87], [88]. Hence, we were uncertain whether the performance of our model was sufficient for practitioners to adopt it in their practice in a clinically meaningful way.…”
Section: A. Interview Results (mentioning)
confidence: 97%
“…Furthermore, the third hypothesis is supported, in line with Venkatesh and Davis (2000), Molobi, Kabiraj, and Siddik (2020), and Zhang, Wang, and Li (2021), who concluded that perceived ease of use has an important influence on shaping perceived usefulness. In other words, individuals who find a new technology easy to use will automatically form a perception of the benefits to be gained from adopting the system.…”
Section: Discussion (supporting)
confidence: 58%
“…In other words, a person incorporates these beliefs into his or her belief structure. This internalization is equivalent to what Deutsch and Gerard (1955) call informational social influence (as opposed to normative influence), which is defined as the influence to accept information from others as evidence about reality (Ali, Gongbing, & Mehreen, 2018; Molobi, Kabiraj, & Siddik, 2020; Tao et al., 2020; Zhang, Wang, & Li, 2021). In the current context, if a boss or coworker suggests that a particular system might be helpful, a person might believe that the system is beneficial and thus form the intention to use it.…”
Section: Discussion (mentioning)
confidence: 99%
“…Therefore, numerous studies have applied the technology acceptance model (TAM) to fields such as information, finance, consumption, and research and development, and have achieved remarkable outcomes [22-26].…”
Section: Technology Acceptance Model (mentioning)
confidence: 99%