2020
DOI: 10.1177/0963662520965490
Population health AI researchers’ perceptions of the public portrayal of AI: A pilot study

Abstract: This article reports how 18 UK and Canadian population health artificial intelligence researchers in Higher Education Institutions perceive the use of artificial intelligence systems in their research, and how this compares with their perceptions about the media portrayal of artificial intelligence systems. This is triangulated with a small scoping analysis of how UK and Canadian news articles portray artificial intelligence systems associated with health research and care. Interviewees had concerns about what…

Cited by 10 publications (10 citation statements)
References 58 publications
“…Participants unanimously felt that such polarisation was unhelpful and in almost every case the result of hyped-up storytelling by the media, science fiction writers, or big tech companies who want to portray a particular view of a ‘good or bad use’ of technology, to attract (monetized) attention (Samuel et al. 2021). Instead, experts felt stories about AI that were more responsible and nuanced were required for improved understanding of AI: I think narratives polarise and I think that’s unhelpful because for me the debate needs to happen in a way that allows us to acknowledge that in almost every instance that we’re going to be using these sorts of methods and technologies, there will be things that we conventionally think of as ethically good or bad, those things will be entangled still.…”
Section: Results (mentioning)
confidence: 99%
“…Narrative plays an integral part of these visions, and they are shifting all the time (Bory 2019). Of course, what stories and narratives have in common is their tendency to be the subject of hype (Blom and Hansen 2015; Samuel et al. 2021; Slota et al. 2020). Hence, there is not only a requirement for new narratives, but also an increase in public understanding about the need to interrogate narrative features: who is telling the story, what is its genre, and what are their communicative purposes?…”
Section: Introduction (mentioning)
confidence: 99%
“…Interviewee 10 drew on an example of some AI software they developed to simplify the process of disseminating health information to patients, but which ended up inadvertently removing critical information. Given this, interviewees called for stakeholder education about the capabilities of, and uncertainties attached to, such systems (Samuel et al., 2020). Finally, interviewees raised concerns around questions of ownership, agency, safety, and responsibility (Porter et al., 2018) (“who owns the algorithm and who owns the data?…Who is responsible [if something goes wrong]” [interviewee 2]; “what does accountability look like?” [interviewee 1]).…”
Section: Results (mentioning)
confidence: 99%
“…Each of the 234 articles was read in detail to ensure understanding of its context. Articles were coded as previously described (Samuel et al., 2021a). Articles noting benefits but omitting or only briefly mentioning contrasting views (for example, possible harms or technical challenges associated with the implementation of the app) were coded ‘supportive’.…”
Section: Discussion (mentioning)
confidence: 99%