2023
DOI: 10.3390/e25020263
Turn-Taking Mechanisms in Imitative Interaction: Robotic Social Interaction Based on the Free Energy Principle

Abstract: This study explains how the leader-follower relationship and turn-taking could develop in a dyadic imitative interaction by conducting robotic simulation experiments based on the free energy principle. Our prior study showed that introducing a parameter during the model training phase can determine leader and follower roles for subsequent imitative interactions. The parameter is defined as w, the so-called meta-prior, and is a weighting factor used to regulate the complexity term versus the accuracy term when …
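The abstract describes the meta-prior w as a weighting factor balancing the complexity term against the accuracy term of the free energy. A minimal sketch of that trade-off, assuming a diagonal-Gaussian posterior and prior over a latent variable (all function and variable names here are illustrative, not taken from the paper's implementation):

```python
import numpy as np

def weighted_free_energy(x, x_pred, mu_q, sigma_q, mu_p, sigma_p, w):
    """Free energy with a meta-prior w weighting the complexity (KL)
    term against the accuracy (prediction-error) term.

    This is a hypothetical sketch of the idea in the abstract, not the
    paper's actual PV-RNN loss.
    """
    # Accuracy term: squared prediction error on the sensory output x
    accuracy = 0.5 * np.sum((x - x_pred) ** 2)
    # Complexity term: KL divergence between diagonal-Gaussian
    # posterior q(z) = N(mu_q, sigma_q^2) and prior p(z) = N(mu_p, sigma_p^2)
    kl = np.sum(
        np.log(sigma_p / sigma_q)
        + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2.0 * sigma_p ** 2)
        - 0.5
    )
    # Large w penalizes divergence from the prior (strong top-down
    # prediction, leader-like); small w favors fitting observations
    # (follower-like adaptation).
    return accuracy + w * kl
```

Under this reading, training with a large w biases the agent toward acting on its own priors, while a small w biases it toward accommodating the partner's behavior, which is how the leader and follower roles could emerge.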

Cited by 6 publications (6 citation statements)
References 63 publications
“…For example, some studies focusing on hierarchical representations did not assume sequential data because they used a variational auto-encoder (78, 85, 86), and did not use stochastic dynamics in the RNN (42). Although there is research investigating internal representation using PV-RNN, previous studies used lower-order probabilities (e.g., target state and signal noise) and did not explicitly consider higher-order probabilistic variables such as transition bias (55–60). The current result, showing that artificial neural network models can acquire hierarchical Bayesian representations in a self-organizing manner, is a crucial step toward understanding the mechanisms underlying the embedding of the hierarchical Bayesian model into the brain system through developmental learning.…”
Section: Acquisition of Hierarchical and Probabilistic Representation
Confidence: 99%
“…Therefore, PV-RNN can be considered a powerful tool for investigating the Bayesian brain hypothesis. Indeed, PV-RNN has been useful for modeling uncertainty estimation (55), goal-oriented behavior (56), sensory attenuation (57), and social interaction (58–60).…”
Section: Introduction
Confidence: 99%
“…For example, if you have experience with a particular object, your brain will use that experience to generate predictions about what the object should look like, and these predictions will rapidly adjust if the object changes in some way (e.g., if it moves or changes color). By rapidly updating its internal models of the world in this way, the brain can maintain a stable and accurate representation of the environment (Ahmadi & Tani, 2019; Wirkuttis et al., 2023).…”
Section: Predictive Coding
Confidence: 99%
“…Active inference (AIf) (Friston et al., 2016, 2017) has recently been combined with neural networks (a.k.a. deep AIf) to solve more challenging tasks, including simulated decision problems (Ueltzhöffer, 2018; Millidge, 2020; Fountas et al., 2020; Mazzaglia et al., 2021) and planning with real robots (Ahmadi & Tani, 2019; Queißer et al., 2021; Matsumoto et al., 2022; Wirkuttis et al., 2023).…”
Section: Deep Active Inference
Confidence: 99%
“…In particular, this paper introduces a new perspective on how first principles can be applied in systems that encompass natural and artificial intelligence. This new perspective goes beyond the state of the art of formulations based on first principles and entropy dynamics [23–26].…”
Section: Introduction
Confidence: 99%