2021
DOI: 10.2139/ssrn.3882296
Eye on Developments in Artificial Intelligence and Children's Rights: Artificial Intelligence in Education (AIEd), EdTech, Surveillance, and Harmful Content

Cited by 5 publications (2 citation statements)
References 0 publications
“…The second, harms modeling, is similar to threat modeling and requires a developer to think through and document the harms a system or technology could feasibly inflict upon society, ideally in collaboration with other stakeholders [44]. To help identify harms, Microsoft provides a template that outlines 10 types of harm: (1) physical or infrastructure damage; (2) emotional or psychological distress; (3) opportunity loss, which involves limiting access to resources or services; (4) economic loss, which is similar to opportunity loss but concerned specifically with access to financial resources and services; (5) dignity loss, which involves interfering with the exchange of honor and respect; (6) liberty loss, which involves infringing legal rights or amplifying existing biases in social systems; (7) privacy loss; (8) environmental impact; (9) manipulation, which involves creating highly personalized and manipulative experiences that ultimately undermine trust; and (10) social detriment, which refers to other ways a technology could impact communities and social structures [44]. They also suggest evaluating each harm by how acutely an individual or group would be impacted (severity), how broadly the impact would be experienced (scale), how likely the harm is to occur (probability), and how often it could arise (frequency), to help assess the overall landscape and plan accordingly [44].…”
Section: The Microsoft Responsible Innovation Toolkit
confidence: 99%
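The harms-modeling template described in the citation statement above pairs a fixed list of harm types with four evaluation dimensions. A minimal sketch of that structure is given below; the 1–5 scoring scale, the `HarmAssessment` class, and the product-based `priority` aggregation are illustrative assumptions, not part of Microsoft's toolkit.

```python
from dataclasses import dataclass

# The 10 harm types from Microsoft's harms-modeling template,
# as summarized in the citation statement above.
HARM_TYPES = [
    "physical or infrastructure damage",
    "emotional or psychological distress",
    "opportunity loss",
    "economic loss",
    "dignity loss",
    "liberty loss",
    "privacy loss",
    "environmental impact",
    "manipulation",
    "social detriment",
]

@dataclass
class HarmAssessment:
    """One row of a harms model: a harm type scored on the four
    evaluation dimensions (each on an assumed 1-5 scale)."""
    harm_type: str
    severity: int     # how acutely individuals or groups are impacted
    scale: int        # how broadly the impact is experienced
    probability: int  # how likely the harm is to occur
    frequency: int    # how often the harm could arise

    def priority(self) -> int:
        # Hypothetical aggregation: the product of the four scores,
        # so severe, widespread, likely, frequent harms rank first.
        return self.severity * self.scale * self.probability * self.frequency

def rank_harms(assessments: list[HarmAssessment]) -> list[HarmAssessment]:
    """Order assessments from highest to lowest priority."""
    return sorted(assessments, key=lambda a: a.priority(), reverse=True)
```

For example, a privacy-loss harm scored (4, 5, 3, 4) would outrank a dignity-loss harm scored (2, 2, 2, 1), directing mitigation effort to the former first.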
“…In phase 1 of the project, beneficiaries participate in a series of educational workshops. Each workshop lasts 3 hours, and there are 8 workshops in total: (1) programme induction, (2) market opportunities, (3) exploring AI technology, (4) innovation lifecycle, (5) rapid innovation techniques, (6) prototyping, (7) ethics, and (8) project review. After phase 1, eligible beneficiaries progress to phase 2, where they receive technical assistance towards an innovative product or service linked to AI.…”
Section: Introduction
confidence: 99%