2015
DOI: 10.1016/j.procs.2015.05.359
Dynamic Data-driven Application System (DDDAS) for Video Surveillance User Support

Abstract: Mixed-initiative human-machine interaction requires pragmatic coordination between different systems. Context understanding is established from content, analysis, and guidance via query-based coordination between users and machines. Inspired by Level 5 Information Fusion 'user refinement', a live-video computing (LVC) structure is presented for user-based query access to database-managed information. Information access includes multimedia fusion of query-based text, images, and exploited tracks …

Cited by 24 publications
(3 citation statements)
References 58 publications
“…
• Sensing: Observing the state of the agent's environment and retrieving relevant information that may be disseminated by other agents
• Information Sharing: Communicating the agent's current state and observations with other agents
• Data Fusion and Analytics: Integration and processing of observed and retrieved information
• Self-Configuration: Configuration of the agent's functional parameters according to processed information
Figure 1 illustrates the anatomy of a DDDAS cycle. Since the inception of DDDAS, this framework has spawned numerous applications such as environment analysis (e.g., weather [25]); robotic systems (e.g., coordination and swarming of unmanned aerial vehicles (UAVs) [26] and unmanned ground vehicles (UGVs) [27]); image processing (e.g., target tracking [28]); and embedded computing (e.g., hardware/software designs [29]). Furthermore, recent literature illustrates the application of this framework to the analysis of generic complex systems [30].…”
Section: B. DDDAS Model
Confidence: 99%
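The four phases quoted above (sensing, information sharing, data fusion and analytics, self-configuration) can be sketched as a single feedback loop. This is an illustrative sketch only; all names (`sense`, `share`, `fuse`, `reconfigure`, the shared pool, the blending rule) are assumptions for demonstration, not an API from the cited work.

```python
# Illustrative sketch of one DDDAS cycle; names and update rule are hypothetical.

def sense(environment, shared_pool):
    """Sensing: observe the local state and retrieve info shared by peer agents."""
    return {"local": environment["state"], "peer": list(shared_pool)}

def share(observation, shared_pool):
    """Information sharing: publish the current observation for other agents."""
    shared_pool.append(observation["local"])

def fuse(observation):
    """Data fusion and analytics: integrate local and peer readings (mean here)."""
    readings = [observation["local"]] + observation["peer"]
    return sum(readings) / len(readings)

def reconfigure(agent, fused_estimate):
    """Self-configuration: adapt a functional parameter to the fused estimate."""
    agent["threshold"] = 0.5 * agent["threshold"] + 0.5 * fused_estimate
    return agent

def dddas_cycle(agent, environment, shared_pool):
    obs = sense(environment, shared_pool)
    share(obs, shared_pool)
    return reconfigure(agent, fuse(obs))

agent = {"threshold": 0.0}
pool = [1.0, 3.0]              # readings already shared by two peer agents
env = {"state": 2.0}
agent = dddas_cycle(agent, env, pool)
print(agent["threshold"])      # 0.5 * 0.0 + 0.5 * mean(2.0, 1.0, 3.0) = 1.0
```

The loop closes when the reconfigured parameter steers the next round of sensing, which is what makes the application "dynamic data-driven" rather than a one-shot pipeline.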
“…It is very challenging to immediately analyze the objects of interest or zoom in on suspicious actions from thousands of video frames. Making the big data indexable is critical to tackle the object analytics problem [1], [6]. It is ideal to generate pattern indexes in a real-time, on-site manner on the video streaming instead of depending on the batch processing at the cloud centers.…”
Section: Introduction
Confidence: 99%
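The statement above argues for generating pattern indexes on the video stream in real time, on site, rather than batch-processing frames at a cloud center. A minimal sketch of that idea, assuming a hypothetical inverted index keyed by detected labels (the class and method names are illustrative, not from the cited work):

```python
# Hypothetical sketch of on-site streaming pattern indexing: each frame's
# detections are indexed as they arrive, so queries over "thousands of video
# frames" never wait on a batch job.
from collections import defaultdict

class StreamingIndex:
    def __init__(self):
        self.index = defaultdict(list)   # detected label -> frame numbers

    def ingest(self, frame_no, labels):
        """Index one frame's detections immediately as it leaves the stream."""
        for label in labels:
            self.index[label].append(frame_no)

    def query(self, label):
        """Return the frame numbers containing the queried pattern."""
        return list(self.index.get(label, []))

idx = StreamingIndex()
idx.ingest(0, ["car"])
idx.ingest(1, ["car", "person"])
idx.ingest(2, ["person"])
print(idx.query("person"))   # [1, 2]
```

A real system would replace the label lists with detector output and shard the index across edge nodes, but the indexable-as-you-ingest structure is the point being made.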
“…Content-based image fusion [3,4] has been the subject of much research and is often used in content-based video retrieval (CBIR) algorithms. Other techniques such as machine vision and learning are considered in conjunction with image fusion during object classification and user interaction [5].…”
Section: Introduction
Confidence: 99%
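For concreteness, a common baseline behind the image fusion the statement above refers to is a pixel-wise weighted blend of co-registered images. The sketch below is an assumption for illustration only; retrieval systems in the cited literature fuse richer content features, not raw pixels.

```python
# Illustrative pixel-wise weighted fusion of two equal-sized grayscale images,
# a simple baseline for content-based image fusion (hypothetical helper).
def fuse_images(img_a, img_b, alpha=0.5):
    """Blend two flat grayscale images: alpha * a + (1 - alpha) * b per pixel."""
    assert len(img_a) == len(img_b), "images must be co-registered / same size"
    return [alpha * a + (1 - alpha) * b for a, b in zip(img_a, img_b)]

print(fuse_images([0, 100, 200], [200, 100, 0]))  # [100.0, 100.0, 100.0]
```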