2024
DOI: 10.1016/j.inffus.2023.102040
A survey of multimodal information fusion for smart healthcare: Mapping the journey from data to wisdom

Thanveer Shaik,
Xiaohui Tao,
Lin Li
et al.
Cited by 32 publications (4 citation statements)
References 146 publications
“…Privacy and ethical considerations are essential in MMLA, and recent works emphasize the importance of responsible data practices. The literature also addresses challenges related to data integration, interoperability, and the development of effective visualization techniques to make sense of multimodal data [21]. Thus, MMLA is at the forefront of educational research, offering a promising avenue for advancing LA.…”
Section: Multi-modal Learning Analytics (MMLA) (mentioning)
confidence: 99%
“…The integration of data from different modalities is on the rise in Artificial Intelligence (AI) fields, such as Machine Learning (ML) and Deep Learning (DL), known as multimodal data fusion. This research area, with applications integrating areas from natural language processing to computer vision and beyond, has driven applications in healthcare [4,5], integrating EHRs with medical images [6][7][8][9] or signals from wearable devices [10,11]. Multimodal fusion has also led to applications such as autonomous driving [12,13], environmental sciences applications combining different sensors and satellite data [14,15], as well as many other Internet of Things (IoT) applications and system improvements [16][17][18].…”
Section: Introduction (mentioning)
confidence: 99%
“…The increase of multimodal data, which integrates disparate data formats such as text, image, or audio, requires developing sophisticated computational techniques to process and integrate these heterogeneous data types [1][2][3]. This integration, known as multimodal data fusion, leverages mainly deep learning techniques [4,5] and is critical for building systems that can interpret complex data in a manner akin to human cognition, thereby enhancing decision-making processes in clinical applications [6][7][8][9][10][11][12][13]: with fundus photos [14], Chest X-rays [15], or even public health applications using remote sensing techniques [16][17][18][19][20][21][22][23].…”
Section: Introduction (mentioning)
confidence: 99%
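The excerpts above describe multimodal data fusion in general terms: combining heterogeneous inputs (e.g., clinical notes and medical images) either at the feature level or at the decision level. A minimal sketch of those two fusion strategies is shown below; the feature vectors, dimensions, and scores are illustrative placeholders, not values from the survey or the citing papers.

```python
import numpy as np

# Hypothetical per-modality feature vectors for one patient record:
# an embedding from clinical notes and an embedding from a scan.
text_feat = np.array([0.2, 0.7, 0.1])   # placeholder note embedding
image_feat = np.array([0.9, 0.3])       # placeholder image embedding

# Early (feature-level) fusion: concatenate modality features
# into one vector before a single downstream model sees them.
early_fused = np.concatenate([text_feat, image_feat])
assert early_fused.shape == (5,)

# Late (decision-level) fusion: each modality produces its own
# prediction; the scores are then combined, here by averaging.
p_text, p_image = 0.8, 0.6              # placeholder per-modality scores
late_fused = (p_text + p_image) / 2
```

In practice the surveyed deep-learning approaches replace the toy vectors with learned encoders per modality, but the two combination points, before versus after the per-modality models, are the same.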