2013
DOI: 10.1017/s0952675713000122
Visual intonation in two sign languages

Abstract: In a detailed comparison of the intonational systems of two unrelated languages, Israeli Sign Language and American Sign Language, we show certain similarities as well as differences in the distribution of several articulations of different parts of the face and motions of the head. Differences between the two languages are explained on the basis of pragmatic notions related to information structure, such as accessibility and contingency, providing novel evidence that the system is inherently intonational, and…

Cited by 49 publications (46 citation statements) · References 63 publications
“…They also found that phrase-final signs in ISL were often enlarged and longer in duration than non-phrase-final signs. These findings were later confirmed in a comparative study by Dachkovsky et al. (2013), who observed holds, consistent changes in facial expression, and shifts in head position at topic and comment IP boundaries. In the same study, however, it was also found that ASL signers showed no indication of timing breaks between topic and comment constituents, or of any correlations between the constituents regarding their rate of production or their length and complexity.…”
Section: Phrase Position (supporting)
confidence: 54%
“…This result runs counter to most sign and spoken language findings, which indicate that phrase-final signs are longest (Klatt 1975; Liddell 1978; Sandler 1986, 1989; Perlmutter 1992; Wightman et al. 1992; Turk & Shattuck-Hufnagel 2007). This finding, however, may explain why Dachkovsky et al. (2013) did not observe any timing or lengthening discrepancies between topic and comment constituents in ASL. It could be argued that since both constituents make up independent phrases, and since phrase-initial and phrase-final signs are shown here to have statistically near-identical lengthening, the effect on the topic-final sign would be no different from that on the following comment-initial sign.…”
Section: Discussion (mentioning)
confidence: 78%
“…However, topics are normally accompanied by non-manual markers, such as raised eyebrows or head tilt (e.g. Dachkovsky 2013). Yet we have not yet witnessed non-manual elements accompanying, or pauses following, the first constituent.…”
Section: Emerging Syntax (mentioning)
confidence: 99%
“…That study also aims to identify the visual information signers rely on to determine utterance boundaries online, on the basis of linguistic annotations of the visible cues in this additional data set. On a par with previous work on spoken languages, we hypothesize that in addition to lexical content and syntax, phonetic and prosodic markers such as signing speed or height (Wilbur 2009; Russell et al. 2011), as well as visual intonation on the face (Reilly et al. 1990; Nespor and Sandler 1999; Fenlon et al. 2007; Dachkovsky and Sandler 2009; Dachkovsky et al. 2013), may play a role in the online prediction of stroke-to-stroke turn boundaries. This paper has centered on question-answer sequences within a relatively limited data set.…”
Section: Discussion (mentioning)
confidence: 57%