2018
DOI: 10.1177/1748301818791507
A joint manifold learning-based framework for heterogeneous upstream data fusion

Abstract: A joint manifold learning fusion (JMLF) approach is proposed for nonlinear or mixed sensor modalities with large streams of data. The multimodal sensor data are stacked to form joint manifolds, from which the embedded low intrinsic dimensionalities are discovered for moving targets. The intrinsic low dimensionalities are mapped to resolve the target locations. The JMLF framework is tested on digital imaging and remote sensing image generation (DIRSIG) scenes with mid-wave infrared (MWIR) data augmented with distributed…
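The JMLF idea in the abstract — stack multimodal sensor features, discover the embedded low intrinsic dimensionality, then map into that low-dimensional space — can be sketched minimally. The paper uses nonlinear manifold learning; the sketch below substitutes a linear SVD/PCA embedding as a simple stand-in, and all feature names and data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                # frames observing a moving target
t = np.linspace(0.0, 2.0 * np.pi, n)  # hidden 1-D trajectory parameter

# Hypothetical per-modality features derived from the same trajectory
eo = np.column_stack([np.cos(t), np.sin(t), 0.01 * rng.standard_normal(n)])
rf = np.column_stack([np.sin(t), 0.01 * rng.standard_normal(n)])

# Step 1: stack modalities so each row is one sample on the joint manifold
joint = np.hstack([eo, rf])            # shape (n, 5)
joint -= joint.mean(axis=0)

# Step 2: estimate the embedded intrinsic dimensionality from the spectrum
_, s, vt = np.linalg.svd(joint, full_matrices=False)
explained = s**2 / np.sum(s**2)
intrinsic_dim = int(np.sum(explained > 0.05))  # components with >5% variance

# Step 3: project onto the low-dimensional embedding used to resolve targets
embedding = joint @ vt[:intrinsic_dim].T
print(intrinsic_dim, embedding.shape)  # → 2 (200, 2)
```

Stacking before embedding is what makes the manifold "joint": correlated structure shared across modalities (here, the sinusoidal trajectory) dominates the spectrum, while per-modality noise is discarded.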

Cited by 37 publications (9 citation statements) | References 41 publications
“…The use of an autoencoder-based dynamic deep directional unit network [24] was capable of learning compact and abstract feature representations from high-dimensional spatiotemporal data of full motion video and I/Q data for the purposes of event behavior characterization. Other research into achieving EO/RF fusion for vehicle tracking and detection using full motion video and P-RF includes joint manifold learning [25], a sheaf-based approach [26], and an SVM classifier [23]. In [25, 26], simulation data were used as the primary method of training and testing, while in [23] real data were used.…”
Section: Literature Review
confidence: 99%
“…Other research into achieving EO/RF fusion for vehicle tracking and detection using full motion video and P-RF includes joint manifold learning [25], a sheaf-based approach [26], and an SVM classifier [23]. In [25, 26], simulation data were used as the primary method of training and testing, while in [23] real data were used. In [25], a joint manifold learning fusion approach was used for mixed simulation data using DIRSIG-generated data.…”
Section: Literature Review
confidence: 99%
“…G_zn. Any number of distance-graph-based upstream fusion techniques (e.g., similarity network fusion [25], [23] or joint manifold learning [7], [21]) could then be used to produce a fused weighted graph G_f. The final step of LESS could then be applied to produce the fused event sequence.…”
Section: Towards LESS As a Fusion Technique
confidence: 99%
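The pipeline quoted above — one distance graph per modality, fused into a single weighted graph — can be sketched with simple graph averaging. Averaging is only a stand-in for the similarity-network-fusion or joint-manifold techniques the statement cites, and the event features here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6  # the same six events observed by two sensors

# Hypothetical per-modality feature vectors for the events
feats_a = rng.standard_normal((n, 3))  # e.g., EO-derived features
feats_b = rng.standard_normal((n, 4))  # e.g., RF-derived features

def distance_graph(x):
    """Pairwise Euclidean distances, scaled to [0, 1] so modalities are comparable."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return d / d.max()

# One normalized distance graph per modality ...
g_a, g_b = distance_graph(feats_a), distance_graph(feats_b)

# ... fused here by simple averaging into a single weighted graph
g_fused = 0.5 * (g_a + g_b)
```

The fused graph stays symmetric with zero diagonal, so any downstream graph-based step (such as the final stage of LESS mentioned in the quote) can consume it like a single-modality distance graph.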
“…7 Combining EO with radar signatures allows for machine processing of multiresolution data with sparsity and complexity. 8 These data fusion methods afford interpretability of data for task success. Examples include interpretability over compressed imagery data, 9 3D volumetric lidar data, 10 and classifier assessment.…
Section: Introduction
confidence: 99%