International Conference on Multimodal Interaction 2022
DOI: 10.1145/3536220.3558070

An Architecture Supporting Configurable Autonomous Multimodal Joint-Attention-Therapy for Various Robotic Systems

Cited by 5 publications (11 citation statements) · References 15 publications
“…Through a shallow grid search, we obtained optimized results for all participants with a parameter setting of 500 trees, a maximum tree depth of 10, a minimum sample split of 3, and a minimum of 4 samples per leaf. Further, a combination of all features from the delta (1-4 Hz), theta (4-8 Hz), alpha (8-12 Hz), low beta (12-20 Hz), and high beta (20-30 Hz) bands binned in 2 Hz led to the optimal performance for 4-12 Hz in the first classification task (ambient vs. distraction) and for 4-20 Hz in the second classification (distraction vs. hesitation), yielding 62 × 5 = 310 and 62 × 9 = 558 dimensional feature vectors, respectively.…”
Section: Classification
confidence: 99%
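The classifier configuration quoted above maps directly onto a standard scikit-learn setup. The sketch below reproduces it under stated assumptions: the feature matrix X (trials × 62 channels × 2 Hz band-power bins, flattened to 310 columns for the 4-12 Hz task) is stood in by random data, and the search grid is illustrative rather than the one used in the cited study.

```python
# A minimal sketch, assuming scikit-learn; placeholder data stands in for
# the 62-channel EEG band-power features described in the citing paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 310))   # 200 trials x (62 channels x 5 bins)
y = rng.integers(0, 2, size=200)  # e.g., ambient vs. distraction labels

# Shallow grid around the reported optimum (500 trees, depth 10,
# min_samples_split=3, min_samples_leaf=4); the exact grid is an assumption.
param_grid = {
    "n_estimators": [100, 500],
    "max_depth": [5, 10],
    "min_samples_split": [2, 3],
    "min_samples_leaf": [1, 4],
}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_)
```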
“…The following study would not have been conducted without LabLinking, since it builds on the complementary expertise and equipment of two laboratories: the Medical Assistance Systems Group (MAS) at Bielefeld University with its rich expertise in social robotics based on robots such as Pepper, Nao, or Flobi [19][20][21], and the Cognitive Systems Lab (CSL) at University of Bremen with vast experience in biosignal-adaptive cognitive systems [22] based on multimodal biosignal acquisition [23] and processing using machine learning methods [24], including the recording and interpretation of spoken communication [25] and high-density EEG in the context of intelligent robots and systems [26].…”
Section: Introduction
confidence: 99%
“…Configurations of actions in this article follow the Behavior Markup Language (BML) (Kopp et al., 2006) approach of an XML-based annotation for designing multimodal behaviors for robots. Like previous work (Groß et al., 2022), the focus is mainly on behaviors for communication in a dialog but is not limited to these actions. • Speech: Processing the verbal voice output of a message.…”
Section: RISE
confidence: 99%
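Since BML behaviors are plain XML, a behavior description such as the speech action named above can be composed in a few lines of Python. The element names below follow common BML conventions (Kopp et al., 2006); the concrete schema this architecture accepts is not shown in the excerpt, so treat this as an illustration only.

```python
# Composing a minimal BML-style behavior block; element/attribute names
# follow general BML conventions, not a schema confirmed by the paper.
import xml.etree.ElementTree as ET

bml = ET.Element("bml", id="bml1")

# Speech behavior: the verbal voice output of a message.
speech = ET.SubElement(bml, "speech", id="s1")
ET.SubElement(speech, "text").text = "Look at the red block."

# A gaze behavior synchronized to the start of the utterance.
ET.SubElement(bml, "gaze", id="g1", target="red_block", start="s1:start")

print(ET.tostring(bml, encoding="unicode"))
```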
“…Robot Independence: The establishment of the fundamental structures enables the utilization of custom robots and actions. RISE builds on an existing system that constitutes its foundation (Groß et al., 2022). Within this architecture, the most elemental actions of a robot are represented as Behavior Actions.…”
Section: RISE
confidence: 99%
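The Behavior Actions mentioned here suggest a small abstraction layer: one robot-independent action interface with per-platform implementations. The class and method names in this sketch are hypothetical; the excerpt only states that elemental robot actions are represented as Behavior Actions.

```python
# Hypothetical sketch of robot-independent Behavior Actions; names are
# illustrative, not taken from the RISE codebase.
from abc import ABC, abstractmethod


class BehaviorAction(ABC):
    """One elemental, robot-independent action (e.g., speech, gaze)."""

    @abstractmethod
    def execute(self, **params) -> None: ...


class PepperSpeech(BehaviorAction):
    def execute(self, **params) -> None:
        # Stand-in for the Pepper platform's text-to-speech call.
        print(f"[Pepper TTS] {params.get('text', '')}")


class NaoSpeech(BehaviorAction):
    def execute(self, **params) -> None:
        # Stand-in for the Nao platform's text-to-speech call.
        print(f"[Nao TTS] {params.get('text', '')}")


# Swapping robots means registering a different set of action bindings.
actions = {"speech": PepperSpeech()}
actions["speech"].execute(text="Look at the red block.")
```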