2011
DOI: 10.17562/pb-43-9

External Sandhi and its Relevance to Syntactic Treebanking

Abstract: External sandhi is a linguistic phenomenon which refers to a set of sound changes that occur at word boundaries. These changes are similar to phonological processes such as assimilation and fusion when they apply at the level of prosody, for instance in connected speech. External sandhi formation can be orthographically reflected in some languages. In such languages, external sandhi causes the occurrence of forms that are morphologically unanalyzable, thus posing a problem for all kinds of NLP applications…
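The problem described in the abstract can be made concrete with a small, purely illustrative preprocessing step: before morphological analysis or treebank annotation, fused surface tokens have to be split back into their component words. The sketch below is not from the paper; it assumes a hypothetical rule table (using the well-known Sanskrit vowel sandhi rules a + a → ā, a + i → e, a + u → o as ASCII stand-ins) and simply enumerates candidate splits.

```python
# Illustrative only: a toy rule-based splitter for orthographically fused
# (externally sandhied) tokens. The rule table is a hypothetical stand-in
# based on Sanskrit vowel sandhi (a + a -> aa, a + i -> e, a + u -> o);
# it is NOT the method used in the paper.

SANDHI_RULES = {
    # fused surface string -> possible (left-word ending, right-word beginning)
    "aa": [("a", "a")],
    "e":  [("a", "i")],
    "o":  [("a", "u")],
}

def candidate_splits(token):
    """Enumerate (left, right) word pairs that could have fused into `token`."""
    splits = []
    for i in range(1, len(token)):
        for length in (1, 2):          # rules cover 1- or 2-character surfaces
            if i + length > len(token):
                break
            surface = token[i:i + length]
            for left_end, right_start in SANDHI_RULES.get(surface, []):
                left = token[:i] + left_end
                right = right_start + token[i + length:]
                splits.append((left, right))
    return splits

if __name__ == "__main__":
    # "ca" + "api" fuse to "caapi" (cāpi) under the a + a -> aa rule
    print(candidate_splits("caapi"))   # [('ca', 'api')]
```

In practice such candidate generation would have to be constrained by a lexicon or a statistical model, since many splits are spurious; the citing work discussed below instead relies on manual annotation of external sandhi in the Telugu dependency treebank.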

Cited by 2 publications (2 citation statements)
References 19 publications
“…But as long as the linguistic integrity of the analysis is maintained, this will not be a disadvantage; after all, most treebanks are overwhelmingly used to develop high-performance automatic parsers. For the Telugu dependency treebank, Kolachina et al. (2011) manually annotated the phenomenon of external 'sandhi'. A transition-based dependency parser trained on this modified treebank performed better than the one trained on the old version.…”
Section: Discussion (mentioning)
confidence: 99%
“…We understand that doing this is not always trivial; nevertheless, we have tried to focus only on those errors that we thought are due to a lack of robust features or to difficult-to-learn structures. Our work is certainly not without precedent; research in the dependency parsing literature related to feature optimization (Ambati et al., 2010a), (Seeker and Kuhn, 2011), lexicalization (Eryigit et al., 2008), (Kolachina et al., 2011), use of semantics (Bharati et al., 2008), (Ambati et al., 2009), etc. has tried out different types of language-specific characteristics and explored ways in which they should be used to influence parser performance.…”
Section: Introduction (mentioning)
confidence: 98%