2010
DOI: 10.1007/s11042-010-0560-9
A comprehensive study of visual event computing

Cited by 18 publications (10 citation statements)
References: 93 publications
“…As future work, we would like to develop a reasoning and decision-making unit that exploits global knowledge about objects to detect events [16]. If we can model the different situations to be detected, this module could trigger alarms and select which information is offered to the surveillance staff, so that they are not overwhelmed by the vast number of events that can occur at the same time.…”
Section: Discussion (citation type: mentioning)
confidence: 99%
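The statement above describes an architecture rather than an implementation: modeled situations are matched against incoming object-level events, and only a few high-priority matches are surfaced as alarms. The following is a minimal Python sketch of that idea; the class and field names (ObjectEvent, Situation, select_alarms) are illustrative assumptions, not the cited system.

```python
# Minimal sketch of a reasoning/decision-making unit: modeled situations are
# matched against object-level events, and only the top matches are escalated
# as alarms so operators are not flooded with everything at once.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class ObjectEvent:
    object_id: str
    label: str          # e.g. "person", "vehicle"
    zone: str           # where the event was observed
    timestamp: float

@dataclass
class Situation:
    name: str
    priority: int
    matches: Callable[[ObjectEvent], bool]  # modeled condition to detect

def select_alarms(events: List[ObjectEvent],
                  situations: List[Situation],
                  max_shown: int = 5) -> List[Tuple[int, str, ObjectEvent]]:
    """Keep only events that match a modeled situation and return the few
    highest-priority ones to show to the surveillance staff."""
    alarms = [(s.priority, s.name, e)
              for e in events for s in situations if s.matches(e)]
    alarms.sort(key=lambda a: a[0], reverse=True)
    return alarms[:max_shown]
```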
“…An event is defined as a semantic unit that bridges the gap between the semantic world and cyberspace [19]. An event has basic components such as who (object), when (time stamp), where (site), what (description), and why (reasoning). As a fundamental structure, discrete events can be stored in computers as logs for analysis and archiving.…”
Section: Fig. 2, An Example of a Perceptual Human Face in Dots (citation type: mentioning)
confidence: 99%
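Since the quote enumerates the who/when/where/what/why components and mentions storing discrete events as logs, a small data-structure sketch can make this concrete. This is a minimal Python illustration under assumed field names and an assumed JSON-lines log format, not the representation used in the cited work.

```python
# Minimal sketch of the who/when/where/what/why event structure, stored as
# append-only JSON-line log records for later analysis and archiving.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Event:
    who: str     # object involved, e.g. "person_17"
    when: float  # time stamp (seconds since epoch)
    where: str   # site, e.g. "entrance_camera_2"
    what: str    # short description of the event
    why: str     # reasoning / cause attributed to the event

def append_to_log(event: Event, path: str = "events.log") -> None:
    """Append one discrete event as a JSON line to a log file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")

# Example usage
append_to_log(Event(who="person_17", when=time.time(),
                    where="entrance_camera_2",
                    what="entered restricted area",
                    why="crossed virtual tripwire"))
```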
“…Although we cannot exactly confirm the colors of the photo, we could still map the colors of today's scene onto the grayscale image using color-transfer techniques based on texture synthesis [1][3]. Video analogy was derived from image analogy [19][4]. Assuming we have two similar videos at hand, we can create a relationship that bridges the gap between the two videos.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
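To illustrate the color-transfer idea mentioned above, here is a minimal sketch of global per-channel statistics matching (in the spirit of classical color-transfer baselines). It is deliberately simpler than the texture-synthesis-based methods and the image/video analogy framework the quote refers to; the function name and array conventions are assumptions for illustration only.

```python
# Minimal sketch: map the color statistics of a reference image onto a target
# image by matching per-channel mean and standard deviation. This stands in
# for (and is much simpler than) texture-synthesis-based color transfer.
import numpy as np

def transfer_color_stats(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Shift and scale each channel of `target` so its mean and std match
    `source`. Both inputs are float arrays of shape (H, W, 3) in [0, 1]."""
    result = np.empty_like(target)
    for c in range(3):
        s_mean, s_std = source[..., c].mean(), source[..., c].std() + 1e-8
        t_mean, t_std = target[..., c].mean(), target[..., c].std() + 1e-8
        result[..., c] = (target[..., c] - t_mean) * (s_std / t_std) + s_mean
    return np.clip(result, 0.0, 1.0)
```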
“…They can be classified into two groups: event clustering approaches [5][6][7][8][9][10][11] and event hybrid approaches [12,14,17,21,27,28]. Extracting events from multimedia such as photographs or images is much more difficult than extracting them from text, for essentially two reasons: i) event detection from images requires aggregation of heterogeneous metadata [29]; ii) linking multimedia data to event model aspects is far more challenging than for textual data [30]. In fact, many aspects of an event should be taken into consideration, as described in the multimedia event model presented in [13], such as time, space, actors, granularities, sub-events, etc.…”
Section: B. Event Detection from Multimedia (citation type: mentioning)
confidence: 99%
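The aspects listed in this last statement (time, space, actors, granularities, sub-events) and the need to aggregate heterogeneous metadata suggest a simple structural sketch. The Python code below is an assumed, minimal rendering of such an event model and an aggregation helper; it is not the model of [13], and the field names, the `event_from_photos` helper, and the metadata keys ('timestamp', 'gps', 'people') are hypothetical.

```python
# Minimal sketch of a multimedia event model with time, space, actors,
# granularity, and sub-events, plus a helper that aggregates heterogeneous
# photo metadata (EXIF-like timestamps, GPS, tagged people) into one event.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class MediaEvent:
    name: str
    time_span: Tuple[float, float]            # start / end timestamps
    location: Optional[Tuple[float, float]]   # (latitude, longitude) or None
    actors: List[str] = field(default_factory=list)
    granularity: str = "atomic"               # e.g. "atomic", "composite"
    sub_events: List["MediaEvent"] = field(default_factory=list)

def event_from_photos(name: str, photos: List[dict]) -> MediaEvent:
    """Aggregate photo metadata dicts (optional keys: 'timestamp', 'gps',
    'people') into a single MediaEvent."""
    times = [p["timestamp"] for p in photos if "timestamp" in p]
    gps = next((p["gps"] for p in photos if "gps" in p), None)
    actors = sorted({person for p in photos for person in p.get("people", [])})
    return MediaEvent(name=name,
                      time_span=(min(times), max(times)) if times else (0.0, 0.0),
                      location=gps,
                      actors=actors)
```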