2019
DOI: 10.1109/lsens.2018.2886544

Multimodal Data Fusion-Moving From Domain-Specific Algorithms to Transdomain Understanding for Accelerated Solution Development

Abstract: The exploding availability of new data streams now enables insights to be garnered through the integration (fusion) of multiple data sources (modalities); however, currently, it remains difficult to predict a priori which multimodal data fusion (MMDF) methods and architectures will be best suited for a novel application, leading to trial-and-error approaches that are inefficient in both time and cost. Although MMDF strategies are being applied ad hoc in many different fields (e.g., healthcare, autonomous navig…

Cited by 5 publications (3 citation statements)
References 20 publications
“…Advances in the last decade contribute to the ongoing shift toward integration of multimodal data analysis, a strategy that mimics the way in which humans learn by integrating multiple data types. 15 This review is organized into 2 parts. Part 1 describes AI built on specific data modalities, highlighting their insights in LC.…”
Section: Discussion
Confidence: 99%
“…Before 2010, most ML models were built only to process a single data modality. Advances in the last decade contribute to the ongoing shift toward integration of multimodal data analysis, a strategy that mimics the way in which humans learn by integrating multiple data types 15 …”
Section: Discussion
Confidence: 99%
“…dependency at the lowest level of features (or raw input unprocessed data), (ii) intermediate-fusion assumes a dependency at a more abstract, semantic level; and (iii) decision-based fusion assumes no dependency at all in the input, but only later at the level of decisions. The above described assumption has the following implications, as argued in [47]: (i) there are no established, standard methods to identify feature dependencies in multiple sensors and modalities; (ii) the technology exists, but there are no standard methods to extract unbiased feature from raw data, and therefore deep learning methods are preferred; (iii) there are basic techniques to handle modality fusion when dealing with missing information; (iv) it is unclear what are the relevant features to be learned, in the sense that a trial-and-error process of feature engineering is employed for the shallow ML algorithms, i.e. Decision Tree, SVM, kNN; and (v) multimodal data fusion best practices i.e.…”
Section: Co-learning
Confidence: 91%
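The excerpt above distinguishes three fusion levels by where the cross-modal dependency is assumed: at the raw features (early fusion), at an abstract semantic level (intermediate fusion), or only at the decisions (decision-level fusion). A minimal toy sketch of the three architectures, using hypothetical stand-in encoders and classifiers for two modalities (not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
mod_a = rng.normal(size=(8, 4))  # modality A: 8 samples, 4 raw features
mod_b = rng.normal(size=(8, 6))  # modality B: 8 samples, 6 raw features

def embed(x, dim=3):
    """Stand-in for a learned per-modality encoder (abstract features)."""
    w = np.ones((x.shape[1], dim)) / x.shape[1]  # fixed averaging weights
    return x @ w

def score(x):
    """Stand-in for a per-modality classifier returning a 0/1 decision."""
    return (x.mean(axis=1) > 0).astype(float)

# (i) Early fusion: concatenate raw features so one model can exploit
# feature-level dependencies across modalities.
early = np.concatenate([mod_a, mod_b], axis=1)  # shape (8, 10)

# (ii) Intermediate fusion: encode each modality first, then concatenate
# the abstract representations.
intermediate = np.concatenate([embed(mod_a), embed(mod_b)], axis=1)  # (8, 6)

# (iii) Decision-level fusion: each modality is classified independently;
# only the decisions are combined (here, by averaging).
decision = (score(mod_a) + score(mod_b)) / 2  # shape (8,)

print(early.shape, intermediate.shape, decision.shape)
```

The trade-off mirrors the excerpt's assumptions: early fusion exposes the most cross-modal structure but is most sensitive to feature-extraction bias, while decision-level fusion is the most robust to missing modalities, since each branch can still vote alone.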