2010
DOI: 10.1016/j.jml.2009.12.001
Recognition of signed and spoken language: Different sensory inputs, the same segmentation procedure

Abstract: Signed languages are articulated through simultaneous upper-body movements and are seen; spoken languages are articulated through sequential vocal-tract movements and are heard. But word recognition in both language modalities entails segmentation of a continuous input into discrete lexical units. According to the Possible Word Constraint (PWC), listeners segment speech so as to avoid impossible words in the input. We argue here that the PWC is a modality-general principle. Deaf signers of Briti…

Cited by 57 publications (50 citation statements)
References 42 publications
“…They also extend our initial findings on sign segmentation, which suggested that language comprehension is guided by modality-general principles (Orfanidou et al, 2010). This is not to say that modality-specific mechanisms do not play a role in sign segmentation.…”
Section: Results (supporting)
confidence: 81%
“…This was indeed the initial intuition of the third and fourth authors (native and fluent BSL signers respectively). Furthermore, the fact that lexical viability constraints appear to be used in BSL segmentation in a modality-general way (Orfanidou et al, 2010) does not entail that transitions will be treated similarly.…”
Section: Predictions (mentioning)
confidence: 99%
“…About 100 non-signs were generated by deaf native BSL signers. Most of these non-signs had previously been used in behavioural studies (Orfanidou, Adam, McQueen & Morgan, 2009; Orfanidou, Adam, Morgan & McQueen, 2010), but additional non-signs were created specifically for the current study. The non-signs were constructed so as to violate phonological rules in BSL, and therefore were not phonologically well-formed (i.e.…”
Section: Methods, Participants (mentioning)
confidence: 99%
“…Nonsigns were created by deaf native signers using a range of handshapes, locations, and movement patterns. Most of these nonsigns had previously been used in behavioral studies (Orfanidou, Adam, Morgan, & McQueen, 2010; Orfanidou et al., 2009); an additional set was created specifically for the current study. All nonsigns violated phonotactic rules of BSL and SSL or were made of nonoccurring combinations of parameters, including (a) two active hands performing symmetrical movements but with different handshapes; (b) compound-type nonsigns having two locations on the body but with movement from the lower location to the higher location (instead of going from the higher to the lower location¹); (c) nonoccurring or unusual points of contact on the signer's body (e.g., occluding the signer's eye or the inner side of the upper arm); (d) nonoccurring or unusual points of contact between the signer's hand and the location (e.g., handshape with the index and middle finger extended, but contact only between the middle finger and the body); and (e) nonoccurring handshapes.…”
Section: Stimuli (mentioning)
confidence: 99%