2015
DOI: 10.1075/lab.5.3.03han

Morphological variation in the speech of Frisian-Dutch bilinguals

Abstract: In standard Dutch, the plural suffix -en is homographic and homophonic with the linking suffix -en (boek+en “books”, boek+en+kast “bookcase”), both being pronounced as schwa. In Frisian, there is neither homography nor homophony (boek+en “books”, pronounced with a syllabic nasal; boek+e+kast “bookcase”, pronounced with a linking schwa). Seeing that many areas of Frisian grammar are subject to interference from Dutch, we investigated whether Frisian-Dutch bilinguals exhibit interference from Dutch with respect to the lin…

Cited by 1 publication (2 citation statements)
References 29 publications
“…Using a small number of trained, independent annotators is considered superior to a larger number of untrained annotators (Bhardwaj & Ide, 2010), in line with other fine-grained linguistic annotations and transcriptions (e.g., Crossley et al., 2015; Hanssen et al., 2015). To assess inter-annotator reliability, simple agreement rates (e.g.…”
Section: Methods
confidence: 99%
“…Annotator 3 acted as an adjudicator for the items on which the original annotators disagreed. Using a small number of trained, independent annotators is considered superior to a larger number of untrained annotators (Bhardwaj & Ide, 2010), in line with other fine-grained linguistic annotations and transcriptions (e.g., Crossley et al., 2015; Hanssen et al., 2015). To assess inter-annotator reliability, simple agreement rates (e.g., Hovy et al., 2006) were calculated rather than the commonly used Kappa.…”
Section: Annotations
confidence: 99%