2010
DOI: 10.1007/978-3-642-12116-6_27
Towards Automatic Detection and Tracking of Topic Change

Abstract: We present an approach for automatic detection of topic change. Our approach is based on the analysis of statistical features of topics in time-sliced corpora and their dynamics over time. Processing large amounts of time-annotated news text, we identify new facets regarding a stream of topics consisting of the latest news of public interest. Adaptable as an addition to the well-known task of topic detection and tracking, we aim to boil down a daily news stream to its novelty. For that we examine the cont…
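The abstract sketches the core idea: track how a topic's statistical context shifts between time slices of a corpus. Below is a minimal illustrative sketch of one way such a change signal could be computed; it is not the authors' implementation, and all names (`context`, `context_change`, the Jaccard-based distance, the window size) are hypothetical choices for illustration only.

```python
from collections import Counter

def context(term, docs, window=5):
    """Co-occurrence counts for `term` within a +/- `window` token span."""
    counts = Counter()
    for doc in docs:
        tokens = doc.split()
        for i, tok in enumerate(tokens):
            if tok == term:
                counts.update(t for t in tokens[max(0, i - window):i + window + 1]
                              if t != term)
    return counts

def context_change(term, slice_a, slice_b, top_k=50):
    """Jaccard distance between the term's top-k context words in two slices."""
    a = {w for w, _ in context(term, slice_a).most_common(top_k)}
    b = {w for w, _ in context(term, slice_b).most_common(top_k)}
    return 1.0 - len(a & b) / len(a | b) if (a or b) else 0.0

# A term whose context shifts between two "days" of news text
day1 = ["the bank raised interest rates", "the central bank policy decision"]
day2 = ["flood water reached the river bank", "the river bank collapsed"]
print(context_change("bank", day1, day2))  # close to 1.0: context changed
```

Comparing the sets of most frequent context words per slice is only one plausible proxy for the "dynamics over time" the abstract mentions; the paper's actual measure may weight co-occurrences differently.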

Cited by 16 publications (17 citation statements: 0 supporting, 16 mentioning, 0 contrasting, 1 unclassified)
References 8 publications
“…The basic metrics of the complex network are introduced in [1]. In [5], keywords and their co-occurrences are considered together, but the complex network is not introduced. Yang [15] tries to model the evolution relationships between events in an incident.…”
Section: Related Work (mentioning)
confidence: 99%
“…A number of researchers have applied natural language processing (NLP) techniques to detect features in small chunks of text (Nadeau and Sekine 2007; Holz and Teresniak 2010; Missen et al. 2012). Hatzivassiloglou and McKeown (1997) use textual conjunctions to separate words with similar or opposite sentiment.…”
Section: Syntax and Sentiment Aggregation (mentioning)
confidence: 99%
“…To determine the global contexts required for the volatility computation, all statistically significant co-occurrences were computed for each of the roughly 7,500 time slices, yielding nearly 30 billion weighted word pairs. The details of this computation are not the focus of this article and can be found in [8]. No co-occurrence data were computed for stop words or rare words.…”
Section: Data Source Used (unclassified)
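The statement above describes computing statistically significant co-occurrences per time slice, with the exact weighting deferred to [8]. As a hedged sketch only, the following uses a simplified log-likelihood-style weight (a common choice for co-occurrence significance, not necessarily the one in [8]) and filters stop words and rare words as described; all identifiers are hypothetical.

```python
import math
from collections import Counter
from itertools import combinations

def weighted_pairs(docs, stopwords, min_freq=2):
    """Significance-weighted co-occurrence pairs for one time slice.

    Documents are token lists; co-occurrence is counted at document level.
    Stop words and rare words are skipped, mirroring the filtering above.
    """
    freq = Counter(tok for doc in docs for tok in doc)
    vocab = {w for w, c in freq.items() if c >= min_freq and w not in stopwords}
    pair_freq = Counter()
    for doc in docs:
        pair_freq.update(combinations(sorted({t for t in doc if t in vocab}), 2))
    n = sum(freq[w] for w in vocab)
    scores = {}
    for (a, b), k in pair_freq.items():
        expected = freq[a] * freq[b] / n
        # simplified log-likelihood-style weight: observed vs. expected count
        scores[(a, b)] = 2.0 * k * math.log(k / expected) if k > expected else 0.0
    return scores

# One tiny "time slice"; a real run would repeat this per slice (~7,500 times)
slice_docs = [["central", "bank", "rates"], ["bank", "rates", "policy"]]
print(weighted_pairs(slice_docs, stopwords={"the"}))  # {('bank', 'rates'): ~2.77}
```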