2011
DOI: 10.4236/jilsa.2011.33015
Insertion of Ontological Knowledge to Improve Automatic Summarization Extraction Methods

Abstract: The vast availability of information sources has created a need for research on automatic summarization. Current methods perform either by extraction or abstraction. Extraction methods are of particular interest because they are robust and independent of the language used. An extractive summary is obtained by selecting sentences of the original source based on their information content. This selection can be automated using a classification function induced by a machine learning algorithm. This function classifies sentences…
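As a rough illustration of the sentence-classification view of extraction sketched in the abstract, the snippet below frames summary building as binary classification over sentences. It is not the authors' pipeline: the features (mean TF-IDF weight, position, length), the toy training data, and the Naive Bayes classifier are assumptions made only to show the general shape of the approach.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import GaussianNB
import numpy as np

def sentence_features(sentences):
    # Small numeric description of each sentence: mean TF-IDF weight,
    # relative position in the document, and length in words.
    tfidf = TfidfVectorizer().fit_transform(sentences)
    mean_weight = np.asarray(tfidf.mean(axis=1)).ravel()
    position = np.arange(len(sentences)) / max(len(sentences) - 1, 1)
    length = np.array([len(s.split()) for s in sentences], dtype=float)
    return np.column_stack([mean_weight, position, length])

# Toy training sentences, labelled 1 if they belong in a summary.
train_sents = [
    "Automatic summarization selects the most informative sentences.",
    "The weather was pleasant that day.",
    "Extraction methods are robust and independent of the language.",
    "He ordered a coffee before the meeting.",
]
train_labels = [1, 0, 1, 0]
clf = GaussianNB().fit(sentence_features(train_sents), train_labels)

# New document: keep only the sentences classified as summary-worthy.
doc_sents = [
    "Ontological knowledge can enrich the sentence representation.",
    "Lunch was served at noon.",
]
predicted = clf.predict(sentence_features(doc_sents))
print([s for s, keep in zip(doc_sents, predicted) if keep == 1])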

Cited by 3 publications (3 citation statements)
References 15 publications
“…Some other summarisation methods employ ontologies to calculate similarity. According to the proposed method in Motta et al [16], a matrix is formed by the words of the sentences and is improved by adding a set of subtrees of Hypernymy and Hyponymy for each word. Rinaldi [17] introduced Semantic Relatedness Grade (SRG) for each pair of words in the document.…”
Section: Related Work
confidence: 99%
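The enrichment described in this statement, adding hypernym and hyponym subtrees for each word of the sentence-word matrix, can be sketched roughly with WordNet standing in as the ontology. WordNet, NLTK, and a one-level expansion are assumptions made for illustration; the cited method's actual ontology and expansion depth may differ.

from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet') once

def expand_word(word):
    # The word itself plus one level of WordNet hypernyms and hyponyms.
    related = {word}
    for synset in wn.synsets(word):
        for neighbour in synset.hypernyms() + synset.hyponyms():
            related.update(lemma.name() for lemma in neighbour.lemmas())
    return related

def enrich_sentence(sentence):
    # Union of the expansions of every token; these extra terms become
    # additional columns of the word-sentence matrix.
    enriched = set()
    for token in sentence.lower().split():
        enriched |= expand_word(token)
    return enriched

print(enrich_sentence("dogs chase cats"))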
“…This approach has recently been applied to the automatic summarization process (Motta et al, 2011). In the present paper, we propose to enrich these first results by adding an experiment conducted with attribute-space optimization methods, which are well known but still underused in this field.…”
Section: Introduction
confidence: 99%
“…In fact, this attribute space is a matrix, where the rows represent the sentences of the documents and the columns are the words of these sentences. Each item of the matrix corresponds to the frequency of the word in the sentence in a vector space model (Motta et al, 2011). By applying one of the five methods on this matrix, we obtained an optimized attribute space composed of sentences with important information.…”
Section: Introduction
confidence: 99%
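The attribute space described in this statement, a matrix whose rows are sentences and whose columns are words, with each cell holding the word's frequency in that sentence, can be sketched as follows. scikit-learn's CountVectorizer and its default tokenization are assumptions made for brevity, not details taken from the cited papers.

from sklearn.feature_extraction.text import CountVectorizer

sentences = [
    "ontological knowledge improves extraction",
    "extraction selects sentences by information content",
    "domain knowledge guides sentence selection",
]

vectorizer = CountVectorizer()
matrix = vectorizer.fit_transform(sentences)   # rows = sentences, columns = words

print(vectorizer.get_feature_names_out())      # the word columns
print(matrix.toarray())                        # word frequency in each sentence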