2020
DOI: 10.14745/ccdr.v46i06a03
A call for an ethical framework when using social media data for artificial intelligence applications in public health research

Cited by 14 publications (7 citation statements: 0 supporting, 7 mentioning, 0 contrasting)
References 26 publications
“…For example, there is a need to balance between the benefits of having large quantities of granular information for analysis and the need to ensure individuals cannot be (re)identified. This is particularly true with AI methods, given the large quantity of information that is usually required to train the model (54, 57, 62, 63). In the case of digital data, which may be publicly available, but where permission to use for surveillance purposes has not been acquired, it is not clear how/whether informed consent can or needs to be obtained.…”
Section: Discussion · Citation type: mentioning · Confidence: 99%
“…In the case of digital data, which may be publicly available, but where permission to use for surveillance purposes has not been acquired, it is not clear how/whether informed consent can or needs to be obtained. Particular care needs to be taken to ensure that data are anonymized and confidential information is not revealed (63). Protection of digital data and transparency in how and what data is acquired, stored, and used are key to maintaining public trust and ensuring the sustainability of these systems (57, 64), and thus progress towards digital data governance is needed to fully operationalize these data sources.…”
Section: Discussion · Citation type: mentioning · Confidence: 99%
“…Given that the content often examined in psychological research is of a sensitive nature (eg, mental health issues and personal experiences), it may be particularly relevant to consider the ethical implications of using publicly available data (eg, social media), which might be linked to a person’s identity. We encourage researchers to consult ethics boards when determining whether approval is needed to use such data, even if it is publicly available [ 121 , 122 ]. Furthermore, social media data can be more prone to grammatical errors and increased ambiguity (eg, owing to spelling errors and slang) compared with scientific literature and formal documentation and may require more in-depth preprocessing depending on the nature of the research question.…”
Section: Discussion · Citation type: mentioning · Confidence: 99%
“…The use of wearable sensors raises several legal, ethical, and cultural issues associated with collecting, storing, and analysing these data. Issues include informed consent, privacy, anonymisation and balancing these issues with the benefits of using big data for the common good [60]. Digital technologies give users the option to control their data by allowing or revoking access to their data by opting in or out.…”
Section: Strengths and Limitations · Citation type: mentioning · Confidence: 99%