Proceedings of the Ninth Conference of the European Chapter of the Association for Computational Linguistics (EACL), 1999
DOI: 10.3115/977035.977062
Japanese dependency structure analysis based on maximum entropy models

Abstract: This paper describes a dependency structure analysis of Japanese sentences based on maximum entropy models. Our model is created by learning the weights of some features from a training corpus to predict the dependency between bunsetsus, or phrasal units. The dependency accuracy of our system is 87.2% on the Kyoto University corpus. We discuss the contribution of each feature set and the relationship between the amount of training data and the accuracy.
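Since the abstract summarizes the approach only at a high level, the sketch below illustrates the underlying idea under stated assumptions: a maximum entropy model over binary indicator features trained to predict whether one bunsetsu depends on another. The feature names, toy data, and the use of scikit-learn's logistic regression (equivalent to a binary MaxEnt model) are illustrative assumptions, not the authors' implementation or feature set.

```python
# A minimal sketch (not the authors' implementation) of the core idea:
# a maximum entropy model over binary indicator features predicting
# whether the anterior bunsetsu depends on the posterior one.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def pair_features(anterior, posterior, distance):
    """Indicator features for one candidate (modifier, head) bunsetsu pair."""
    return {
        "ant_head_pos=" + anterior["head_pos"]: 1,
        "post_head_pos=" + posterior["head_pos"]: 1,
        "ant_particle=" + anterior.get("particle", "NONE"): 1,
        "distance=" + ("1" if distance == 1 else "2-5" if distance <= 5 else "6+"): 1,
    }

# Toy training pairs: (modifier bunsetsu, head candidate, distance, label).
train = [
    ({"head_pos": "NOUN", "particle": "ga"}, {"head_pos": "VERB"}, 1, 1),
    ({"head_pos": "NOUN", "particle": "no"}, {"head_pos": "VERB"}, 3, 0),
]
vec = DictVectorizer()
X = vec.fit_transform([pair_features(a, b, d) for a, b, d, _ in train])
y = [label for _, _, _, label in train]
# A binary MaxEnt model is equivalent to regularized logistic regression.
model = LogisticRegression(max_iter=1000).fit(X, y)
```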

Cited by 36 publications (62 citation statements). References 5 publications.
“…We also employ flags for the beginning/ending bunsetsu of a sentence, the distance between the two bunsetsus, the pause symbol, and their combination features. These conform to the features used in previous reports [18], [19]. <c> and <b> were handled as surface forms in SDA.…”
Section: Methods (mentioning)
confidence: 78%
“…Some dependency models calculate the dependency between two bunsetsus directly [18], and some have modeled a dependency link that takes context structures into consideration [20]. Kudo et al. have proposed two types of modeling methods [19].…”
Section: Dependency Modeling (mentioning)
confidence: 99%
“…In the evaluation of dependency parsing, we obtained the dependency accuracy (the percentage of correctly analyzed dependencies out of all dependencies) and the sentence accuracy (the percentage of sentences in which all the dependencies are analyzed correctly), which were defined by Uchimoto et al. (1999). We compared our method to Yoshida's method (Yoshida et al., 2014) and two conventional sequential methods.…”
Section: Outline Of Experiments (mentioning)
confidence: 99%
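For concreteness, the two metrics quoted above can be computed as in the sketch below; the input format (per-sentence lists of gold and predicted head indices, with -1 marking the root bunsetsu) is an assumption rather than anything specified in the cited papers.

```python
# Dependency accuracy (correct dependencies / all dependencies) and
# sentence accuracy (sentences with every dependency correct / all sentences).
# The root bunsetsu (head index -1) is excluded from the dependency count.
def evaluate(gold_heads, pred_heads):
    correct_deps = total_deps = correct_sents = 0
    for gold, pred in zip(gold_heads, pred_heads):
        pairs = [(g, p) for g, p in zip(gold, pred) if g != -1]
        hits = sum(g == p for g, p in pairs)
        correct_deps += hits
        total_deps += len(pairs)
        correct_sents += int(hits == len(pairs))
    return correct_deps / total_deps, correct_sents / len(gold_heads)

# Example: two toy sentences with heads given as bunsetsu indices.
dep_acc, sent_acc = evaluate([[2, 2, -1], [1, -1]], [[2, 0, -1], [1, -1]])
```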
“…This score has been used in many studies and can be computed in various ways, for example, by using machine learning methods such as Maximum Entropy Models [22], Support Vector Machines [23], and MIRA [24], or conditional probabilistic models [26], [33], to estimate the score from various kinds of linguistic features related to the two words.…”
Section: The Scores Of The Edges (mentioning)
confidence: 99%
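As a sketch of this edge-scoring step, the function below fills a score matrix using a trained classifier; the model, vectorizer, and feature function are the hypothetical ones from the earlier sketches, and taking the log-probability of the "dependent" class as the score is only one of the possible choices listed in the quote.

```python
# Score every candidate (modifier, head) edge so a graph-based parser can
# search for the highest-scoring dependency tree. Purely illustrative.
import numpy as np

def score_edges(sentence, model, vectorizer, feature_fn):
    n = len(sentence)
    scores = np.full((n, n), -np.inf)  # scores[i, j] = score of edge i -> j
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            x = vectorizer.transform([feature_fn(sentence, i, j)])
            scores[i, j] = model.predict_log_proba(x)[0, 1]  # log P(dependent)
    return scores
```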
“…Of course, a rich set of training data and accurate knowledge are crucial for this method. Various methods have been proposed for the learning part of this approach: learning actions of a deterministic parser [18], [19], learning similarity of tree structures [20], [21], and learning the scores of dependencies [22]- [24].…”
Section: Lines (mentioning)
confidence: 99%
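The "learning actions of a deterministic parser" line of work mentioned above can be pictured with the schematic below. It is purely illustrative: a simplified stack-based procedure in which every bunsetsu depends on a later one (as in Japanese) and a learned classifier decides each attachment; it is not a reconstruction of any of the cited systems.

```python
# Schematic deterministic parsing driven by a learned attachment decision.
def parse_deterministic(sentence, decide_attach):
    """decide_attach(sentence, i, j) -> True if bunsetsu i should depend on j."""
    heads = [-1] * len(sentence)
    stack = []
    for j in range(len(sentence)):
        # Pop earlier bunsetsus that the classifier attaches to the current one.
        while stack and decide_attach(sentence, stack[-1], j):
            heads[stack.pop()] = j
        stack.append(j)
    # Any bunsetsu left unattached depends on the final bunsetsu (the root).
    for i in stack[:-1]:
        heads[i] = len(sentence) - 1
    return heads
```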