2023
DOI: 10.1037/dev0001524
The minds of machines: Children's beliefs about the experiences, thoughts, and morals of familiar interactive technologies.

Abstract: Children are developing alongside interactive technologies that can move, talk, and act like agents, but it is unclear if children's beliefs about the agency of these household technologies are similar to their beliefs about advanced, humanoid robots used in lab research. This study investigated 4–11-year-old children's (N = 127, Mage = 7.50, SDage = 2.27, 53% females, 75% White; from the Northeastern United States) beliefs about the mental, physical, emotional, and moral features of two familiar technologies …

Cited by 13 publications (9 citation statements)
References 94 publications
“…According to a recent meta-analytic study of human–robot interaction (Hancock et al, 2021), among human factors (abilities/characteristics, including demographics), robot factors (performances/attributes), and contextual factors (team/task types), human culture and robot reliability are the only two robust predictors of human trust in robots across correlational and pairwise meta-analyses. Specifically, individuals from Asian cultures and the United States placed greater trust in robots than individuals from European cultures, and such trust is positively correlated with robot reliability or consistent ability, which differs in reality and perception across robotic agents (Flanagan et al, 2023). While cultural differences are multifaceted, the one described above may be related to the fact that Europeans, such as German individuals, are more concerned with robots’ negative influences or social implications than individuals from Japan, China, or the United States (Bröhl et al, 2019).…”
Section: Discussion
confidence: 99%
“…According to this view, children's own knowledge of the physical and social world drives the development of their inferences about who to trust in selective learning, which in turn enables them to assess the reliability of information sources (see also Gweon, 2021). As children's exposure to technology increases with age (e.g., Flanagan et al, 2023), it is possible that the younger children in our study trusted the inaccurate human more than the inaccurate robot because they had prior knowledge and experience of how they can benefit from people for learning, despite past inaccuracy, but they might not yet have developed a similar mental model of robots as dependable sources of information. Thus, according to the rational inference hypothesis of selective learning development, as children grow older and encounter more diverse sources of information with varying levels of knowledge and usefulness, they refine their understanding of reliable information sources across different socially intelligent agents, such as toward both human and robot informants found in our study.…”
Section: Discussion
confidence: 99%
“…Conversely, some agents with a strong human resemblance are less anthropomorphized than agents with a moderate human resemblance [36]. In another study, children aged 4–11 attributed mental states to a moderately human-like NAO robot similar to those they attributed to a non-human-like voice assistant such as Alexa [37]. Finally, the same agent can be anthropomorphized or not depending on the interaction situation (which includes the way the agent is presented to the user, but also the characteristics of the users themselves) [25].…”
Section: A Contextless Model: The Mere Appearance Hypothesis
confidence: 99%
“…Voice contributes to the anthropomorphism of the robot. Children aged 4–11 attribute as many mental states to a non-human-like agent with a human voice (Alexa) as to a moderately human-like robot (NAO) [37]. Thus, adapting the voice, the length of the sentences, and the speech rate according to the context of interaction, and the role occupied by the robot, are important factors [120].…”
Section: A Human-like Voice Helps, But It Is Not Enough
confidence: 99%