2011
DOI: 10.1017/s0890060411000060

Using speech to identify gesture pen strokes in collaborative, multimodal device descriptions

Abstract: One challenge in building collaborative design tools that use speech and sketch input is distinguishing gesture pen strokes from those representing device structure, that is, object strokes. In previous work, we developed a gesture/object classifier that uses features computed from the pen strokes and the speech aligned with them. Experiments indicated that the speech features were the most important for distinguishing gestures, thus indicating the critical importance of the speech–sketch alignment. Consequent…

Cited by 9 publications (3 citation statements). References 44 publications.
“…The system proposed the use of commands such as “Create a blue square there,” allowing users to employ vague language and use gestures for disambiguation. Speech has been used alongside gesture pen strokes, as demonstrated in Herold and Stahovich's study (2011) in AIEDAM's special issue on the Role of Gesture in Designing. Recent studies using multi-modal input for CAD modeling include those of Menegotto (2015), who integrated speech with AutoCAD, and Nanjundaswamy et al.…”
Section: Introduction
confidence: 99%
“…Recent multimodal interfaces recognize up to two modes of interaction across different areas of interest, whether in combinations of pen and speech [9], [10], speech and gestures [11], [12], or multitouch and tangible objects [13], [14].…”
Section: Introduction (unclassified)
“…Tang (1989, 1991; Tang & Leifer, 1988), Bly (1988), Minneman (1991), and Neilson and Lee (1994) have observed that designers use speech, sketches, and gestures in combination, using each mode to explain and disambiguate the others. To fully understand a communication act, the interaction of the different media needs to be analyzed (see, e.g., Herold & Stahovich, 2011). Studies of solitary engineering sketching (Pache, 2001) have observed a wide variety of sketching behaviors and abilities, with evidence for the reinterpretation of ambiguous notation in only a small number of cases.…”
Section: Introduction
confidence: 99%