Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems 2018
DOI: 10.1145/3220134.3225217

A Textual Domain Specific Language for Task Models

Cited by 13 publications (2 citation statements)
References 7 publications
“…The work consists of a procedure to map human vocal requests (e.g., "I need an item") to a common semantics, identifying who has to perform which action, and finally to LoTLan primitives, which are then converted into plans for AGVs. Forbrig et al. [100] exploit their language CoTaL [101] to model interactive tasks between a humanoid robot and a stroke patient performing arm mobility recovery exercises. The resulting specification captures all phases needed for the exercise session, how the humanoid robot can detect whether the patient has completed an exercise, and how to react accordingly.…”
Section: DSLs for Human-Robot Interaction
Citation type: mentioning
Confidence: 99%
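The kind of specification described in this statement can be approximated as a task tree ordered by CTT-style temporal operators. The Python sketch below is a minimal illustration under assumptions, not CoTaL's actual grammar: the Task class, the operator encoding, and the exercise task names (ExplainExercise, ReactToResult, and so on) are all hypothetical, chosen to mirror the cited scenario of session phases, completion detection, and the robot's reaction.

```python
from dataclasses import dataclass, field
from typing import List

# CTT-style temporal operators often used in task models; this
# textual encoding is illustrative only, not CoTaL's syntax.
ENABLING = ">>"   # left subtask must finish before the right one starts
CHOICE = "[]"     # exactly one of the subtasks is performed

@dataclass
class Task:
    name: str
    operator: str = ENABLING                      # ordering of the subtasks
    subtasks: List["Task"] = field(default_factory=list)

    def show(self, depth: int = 0) -> None:
        # Print the tree with the operator shown on composite tasks.
        label = f"{self.name} {self.operator}" if self.subtasks else self.name
        print("  " * depth + label)
        for sub in self.subtasks:
            sub.show(depth + 1)

# Hypothetical exercise session mirroring the cited description:
# the session phases, completion detection, and the robot's reaction.
session = Task("ExerciseSession", ENABLING, [
    Task("ExplainExercise"),
    Task("DemonstrateMovement"),
    Task("ObservePatient"),
    Task("ReactToResult", CHOICE, [
        Task("PraiseAndProceed"),       # exercise completed
        Task("EncourageAndRepeat"),     # exercise not completed
    ]),
])

session.show()
```

The enabling operator captures the sequential phases of the session, while the choice node is where the robot's detection of success or failure selects the appropriate reaction.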
“…In this paragraph we want to model the tasks of Pepper and the patient in DSL-CoTaL [4], which is based on features of Buchholz et al.…”
Section: Modelling Collaboration
Citation type: mentioning
Confidence: 99%
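To make the collaboration aspect concrete, the sketch below pairs a role model for Pepper with one for the patient and links them through cross-actor dependencies. It is again an assumption-laden illustration: the TeamModel class, its fields, and the task names are hypothetical and do not correspond to DSL-CoTaL's actual constructs.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Hypothetical cooperative model: each actor owns an ordered task list,
# and cross-actor dependencies state which task must precede which.
# The structure mirrors the collaboration described in the citation;
# it is not DSL-CoTaL syntax.
@dataclass
class TeamModel:
    actors: Dict[str, List[str]]                  # actor -> ordered tasks
    # (actor_a, task_a, actor_b, task_b): task_a enables task_b
    dependencies: List[Tuple[str, str, str, str]] = field(default_factory=list)

rehab = TeamModel(
    actors={
        "Pepper": ["GreetPatient", "DemonstrateExercise", "GiveFeedback"],
        "Patient": ["Listen", "PerformExercise", "Rest"],
    },
    dependencies=[
        ("Pepper", "DemonstrateExercise", "Patient", "PerformExercise"),
        ("Patient", "PerformExercise", "Pepper", "GiveFeedback"),
    ],
)

for actor, tasks in rehab.actors.items():
    print(actor, "->", " >> ".join(tasks))
for a1, t1, a2, t2 in rehab.dependencies:
    print(f"{a1}.{t1} enables {a2}.{t2}")
```

The cross-actor dependencies are what distinguish a cooperative model from two independent task trees: the patient's exercise can only start after the robot's demonstration, and the robot's feedback waits on the patient's performance.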