2007
DOI: 10.1177/0278364907082612

A Dual Mode Human-Robot Teleoperation Interface Based on Airflow in the Aural Cavity

Abstract: Robot teleoperation systems have been limited in their utility by the need for operator motion, a lack of portability, and restriction to a single input modality. This article presents the design and construction of a dual-mode human-machine interface system for robot teleoperation that addresses all of these issues. The interface can direct robotic devices in response to tongue movement and/or speech without inserting any device in or near the oral cavity. The interface is centered …
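
To make the "dual mode" idea concrete, the sketch below routes an aural-airflow window to either a tongue-movement branch or a speech branch using a crude transient-duration heuristic. The heuristic, the energy threshold, and both stub recognizers are illustrative assumptions, not the interface described in the paper.

```python
import numpy as np

# Illustrative dual-mode routing: decide whether an aural-airflow window
# looks like a brief tongue-movement transient or a longer speech
# utterance, then hand it to the matching recognizer. The duration
# heuristic, the 0.1 energy threshold, and both stub recognizers are
# assumptions for illustration, not the paper's method.

def short_transient(window, fs, max_ms=150.0):
    """True if the high-energy region of the window is brief (tongue-like)."""
    energy = window ** 2
    active = energy > 0.1 * energy.max()       # crude activity mask
    return (active.sum() / fs) * 1000.0 <= max_ms

def recognize_tongue(window):                  # placeholder recognizer
    return "TONGUE_EVENT"

def recognize_speech(window):                  # placeholder recognizer
    return "SPOKEN_WORD"

def route(window, fs):
    """Send the window down the tongue-movement or the speech branch."""
    if short_transient(window, fs):
        return "tongue", recognize_tongue(window)
    return "speech", recognize_speech(window)

# Toy usage: a ~20 ms Gaussian burst is routed to the tongue branch.
fs = 8000
t = np.arange(fs) / fs
burst = np.exp(-((t - 0.1) / 0.01) ** 2)
print(route(burst, fs))                        # ('tongue', 'TONGUE_EVENT')
```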

Cited by 12 publications (17 citation statements); citing works published 2008–2019.
References 22 publications.

Citation statements (ordered by relevance):
“…Consequently, tongue movements can be mapped, via ear pressure signals, to generate control signals for HMIs without inserting any device in the oral cavity [4]. We have also demonstrated that spoken words can be recognized from the ear pressure signals [3]. Therefore, speech control signals can also be mapped via ear pressure signals into HMI control signals.…”
Section: Single Channel Classification Example
confidence: 86%
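
The mapping sketched in this statement, from a classified ear-pressure event to an HMI control signal, can be pictured as a thin dispatch layer over a classifier. The sketch below is a minimal illustration under assumed ingredients: a nearest-centroid classifier on spectral magnitudes and a hypothetical label-to-command table. It is not the implementation of [3] or [4].

```python
import numpy as np

# Hypothetical sketch of the "ear pressure -> HMI command" mapping.
# The labels, the spectral features, and the nearest-centroid rule are
# illustrative assumptions, not the cited papers' design.

COMMAND_MAP = {"tongue_left": "TURN_LEFT", "tongue_right": "TURN_RIGHT"}

def classify(window, centroids):
    """Nearest-centroid label for one window of ear-pressure samples."""
    feats = np.abs(np.fft.rfft(window))        # crude spectral features
    return min(centroids,
               key=lambda c: np.linalg.norm(feats - centroids[c]))

def dispatch(window, centroids):
    """Map a classified ear-pressure event to an HMI command."""
    return COMMAND_MAP.get(classify(window, centroids))

# Toy usage: random 'templates' stand in for centroids learned offline.
rng = np.random.default_rng(0)
centroids = {c: np.abs(np.fft.rfft(rng.standard_normal(256)))
             for c in COMMAND_MAP}
print(dispatch(rng.standard_normal(256), centroids))
```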
“…It is not unusual for the resulting dimension to be high especially for signals that are voluntarily generated by humans for human-machine-interface (HMI) control and communication applications. Examples of such signals include speech [1]- [3], ear pressure signals [3], [4], electromyographic signals [5], [6], electroencephalogram (EEG) and event-related potential (ERP) brainwaveforms [6]- [9], and gestures [10], [11]. Due to practical issues related to the data acquisition methods, lack of concentration, discomfort, and fatigue, it may not always be possible to collect enough reliable signals to exceed the dimension of the signal space.…”
Section: Introduction
confidence: 99%
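
The difficulty flagged here, fewer reliable samples than signal dimensions, is commonly met by projecting onto a low-dimensional subspace before classification. The sketch below uses plain PCA via an SVD as a generic stand-in for such a step; the cited work's own dimension-reduction strategy may well differ.

```python
import numpy as np

# Generic illustration of the n < d problem raised above: with fewer
# training samples (n) than signal dimensions (d), project onto the top
# principal components before classifying. PCA here is a stand-in, not
# necessarily the strategy of the cited work.

def pca_reduce(X, k):
    """Project rows of X (n_samples x dim) onto the top-k principal axes."""
    Xc = X - X.mean(axis=0)                    # center each dimension
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Toy data: 20 samples of a 1000-dimensional signal (n << d).
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 1000))
Z = pca_reduce(X, k=5)
print(Z.shape)                                 # (20, 5)
```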
“…Physically impaired individuals clearly may not have the luxury of such freedom of movement. Although several alternatives are currently under development (a brief survey may be found in [1]), no device available today definitively addresses all the needs of the patient community at large.…”
Section: Introduction
confidence: 99%
“…Specifically, we have introduced a non-intrusive tongue-movement HMI concept [1][2][3][4][5][6], and shown that tongue movements within the oral cavity create unique pressure signals in the ear (dubbed tongue-movement-ear-pressure (TMEP) signals). We have further developed and implemented new pattern classification strategies that have recognized TMEP signals with over 97% accuracy across a range of users [3], hence providing an unobtrusive, completely noninvasive method of controlling peripheral or assist mechanisms through tongue movement.…”
Section: Introduction
confidence: 99%
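
To picture the kind of TMEP pattern classification referred to here, the sketch below scores an incoming ear-pressure window against one stored template per movement class by normalized cross-correlation. The template-matching baseline, class names, and signal lengths are assumptions for illustration, not the strategy that achieved the reported 97% accuracy.

```python
import numpy as np

# Generic template-matching baseline for TMEP-style windows: score an
# incoming ear-pressure window against one stored template per class by
# normalized cross-correlation. Shapes and classes are illustrative;
# the cited work's actual classifiers differ.

def ncc(a, b):
    """Normalized cross-correlation peak between two 1-D signals."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return np.max(np.correlate(a, b, mode="full")) / len(a)

def classify_window(window, templates):
    """Return the class whose template best matches the window."""
    return max(templates, key=lambda c: ncc(window, templates[c]))

# Toy usage with synthetic templates for four tongue movements.
rng = np.random.default_rng(2)
templates = {c: rng.standard_normal(200)
             for c in ("left", "right", "up", "flick")}
probe = templates["up"] + 0.3 * rng.standard_normal(200)
print(classify_window(probe, templates))       # expected: 'up'
```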