2001
DOI: 10.3758/bf03195384

SignStream: A tool for linguistic and computer vision research on visual-gestural language data

Cited by 63 publications (36 citation statements)
References 23 publications
“…Manual transcription of such video data is time-consuming, and machine vision assisted annotation would greatly improve efficiency. Head tracking and handshape recognition algorithms [99], and sign word boundary detection algorithms [83] have been applied for this purpose.…”

Section: Introduction (mentioning)
confidence: 99%
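As a rough illustration of the machine-assisted annotation this excerpt describes, the following sketch runs a stock face detector over video frames to propose per-frame head positions. It assumes OpenCV with its bundled Haar cascade; the function detect_head_per_frame and the overall pipeline are hypothetical, not taken from the cited head-tracking work [99].

# Hypothetical sketch: per-frame head localization for sign language video,
# in the spirit of the head-tracking algorithms the excerpt cites.
# Nothing here is drawn from SignStream or the cited references.
import cv2

def detect_head_per_frame(video_path):
    """Yield (frame_index, (x, y, w, h)) for the largest face found in each frame."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    capture = cv2.VideoCapture(video_path)
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            # Treat the largest detection as the signer's head.
            x, y, w, h = max(faces, key=lambda box: box[2] * box[3])
            yield frame_index, (int(x), int(y), int(w), int(h))
        frame_index += 1
    capture.release()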
“…This publicly available corpus, including 15 short narratives plus hundreds of additional elicited utterances, includes multiple synchronized views of the signing (generally 2 stereoscopic front views plus a side view and a close-up of the face), which have been linguistically annotated using SignStream™ [15,17] software, which enables identification of the start and end points of the manual and non-manual components of the signing. From this corpus we selected training and testing sets of 32 and 13 video clips, respectively, of isolated utterances, extracting the segments containing non-manual markers of the classes of interest.…”

Section: Results (mentioning)
confidence: 99%
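The start- and end-point annotations described above map naturally onto simple segment extraction. Below is a minimal sketch, assuming annotations are available as (label, start_frame, end_frame) spans; the Span type and extract_segments helper are illustrative, not part of SignStream.

# Hypothetical sketch: pulling out the frame ranges of annotated non-manual
# markers, given SignStream-style start/end points. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Span:
    label: str        # marker class, e.g. "wh-question" or "negation"
    start_frame: int  # first frame of the marker
    end_frame: int    # last frame of the marker (inclusive)

def extract_segments(frames, spans, classes_of_interest):
    """Group the frame sequences of each requested marker class by label."""
    segments = {}
    for span in spans:
        if span.label in classes_of_interest:
            clip = frames[span.start_frame : span.end_frame + 1]
            segments.setdefault(span.label, []).append(clip)
    return segments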
“…The subjects were native ASL signers. The annotations were carried out using SignStream™, a database program to facilitate the analysis of visual language data [17]. Of particular relevance here were the annotations of positions and movements of the head and eyebrows, as well as the English translations provided for each sentence.…”

Section: Methods (mentioning)
confidence: 99%
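A minimal data model for the annotations this excerpt relies on might pair the head and eyebrow events of each sentence with its English translation. The field names below are assumptions for illustration only and do not reflect SignStream's actual file format.

# Hypothetical sketch: one annotated utterance with non-manual event tiers.
# Events are (description, start_frame, end_frame) triples.
from dataclasses import dataclass, field

@dataclass
class Utterance:
    translation: str  # English translation of the ASL sentence
    head_events: list = field(default_factory=list)
    eyebrow_events: list = field(default_factory=list)

sentence = Utterance(
    translation="Who bought the book?",
    head_events=[("head tilt forward", 12, 40)],
    eyebrow_events=[("brows lowered", 10, 45)],
)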