1995
DOI: 10.1109/93.482296
Teaching communication skills to hearing-impaired children

Cited by 21 publications (7 citation statements)
References 8 publications
“…One of the first literatures that discussed the feasibility of VR-assisted learning for hearing impaired children was the article by Alonso et al in 1995 [14]. It discussed several multimedia technologies and frameworks that could improve the communication skills of hearing impaired children.…”
Section: Related Work
Confidence: 98%
“…Mehida (Alonso et al 1995) is an intelligent multimedia system for deaf or hearing-impaired children designed to assist them in acquiring and developing communication skills. It covers the following types of communication: finger spelling (representing the letters of the alphabet using the fingers), gestures or sign languages, lip reading (understanding spoken language through observing lip motion), and voice recognition.…”
Section: Related Systems For Hearing-impaired People
Confidence: 99%
“…As speech processing technology developed in the 1980s, researchers attempted to integrate speech processing technology into the AAC system in order to enable deaf individuals' control over the technology. Multimedia technology has recently also been applied to the AAC system, making it increasingly user-friendly [3]. Solina [4] also developed a dynamic sign-language synthesis system by concatenating the sign-language video clips at two cut points according to a DIFF function, which considers the relations of palm locations between two concatenated video clips.…”
Section: Introduction
Confidence: 99%