2013
DOI: 10.1016/j.engappai.2013.05.005

I-struve: Automatic linguistic descriptions of visual double stars

Cited by 24 publications (12 citation statements) | References 20 publications
“…Finally, there are some recent works like [22], where the authors introduced a new approach to linguistic summarization of time series based on the use of a fuzzy hierarchical partition of the time dimension and the evaluation of quantified sentences. In previous works, we have generated assessment reports in truck driving simulators [23], reports about traffic evolution on roads [24], about the relevant features of Mars' surface [25] and linguistic descriptions of visual double stars [26]. In addition, we have worked with accelerometer data by automatically generating linguistic reports about human gait quality [27], gesture recognition [28] and activity recognition [29,30].…”
Section: Introduction (mentioning)
Confidence: 99%
“…Moreover, it has been shown that information displayed as text to the user is interpreted more swiftly than graphs [6]. Finally, a linguistic summary can be read out by a text-to-speech synthesis system when visual attention must not be disturbed, for instance while executing a complex task [7], or when it is impaired.…”
Section: Introduction (mentioning)
Confidence: 99%
“…For example: describing big data (Conde-Clemente et al., 2017b); advising how to save energy at home (Conde-Clemente et al., 2016); describing physical activity (Sanchez-Valdes et al., 2016); describing drivers' behavior in driving simulations (Eciolaza et al., 2013); or describing double stars in astronomy (Arguelles and Trivino, 2013). Figure 1 depicts the LDCP architecture for Natural Language Generation in Data-to-Text applications (NLG/D2T).…”
Section: Introduction (mentioning)
Confidence: 99%