2013
DOI: 10.1007/978-3-642-40811-3_33
Robust Multimodal Dictionary Learning

Abstract: We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by lack of correspondence between image modalities in training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about image representation. In this paper, we propose a probabilistic model that accounts for image areas that are poorly corresponding between the image modal…
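The joint dictionary learning setup the abstract describes — one dictionary coding patches from both modalities, with unreliable (non-corresponding) training patches down-weighted — can be sketched as below. This is an illustrative simplification, not the paper's actual probabilistic model: the `weights` argument stands in for the per-patch correspondence estimates the paper infers, and simple ISTA sparse coding replaces whatever solver the authors use. All function and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def joint_dictionary_learning(X1, X2, n_atoms=8, weights=None,
                              n_iter=30, lam=0.1, step=0.05):
    """Toy joint dictionary learning on two stacked modalities.

    X1, X2 : (d, n) patch matrices from the two modalities (column = patch).
    weights: per-patch reliability in [0, 1]; low weight down-weights patches
             suspected of poor cross-modal correspondence (hypothetical stand-in
             for the paper's probabilistic correspondence model).
    """
    X = np.vstack([X1, X2])                      # stack modalities per patch
    d, n = X.shape
    if weights is None:
        weights = np.ones(n)
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
    A = np.zeros((n_atoms, n))
    for _ in range(n_iter):
        # Sparse coding: a few ISTA steps on 0.5||X - DA||^2 + lam||A||_1.
        for _ in range(10):
            grad = D.T @ (D @ A - X)
            A = soft_threshold(A - step * grad, step * lam)
        # Weighted least-squares dictionary update on reliable patches.
        W = np.diag(weights)
        G = A @ W @ A.T + 1e-8 * np.eye(n_atoms)
        D = X @ W @ A.T @ np.linalg.inv(G)
        # Renormalize atoms, rescaling codes so D @ A is unchanged.
        norms = np.maximum(np.linalg.norm(D, axis=0), 1e-8)
        D /= norms
        A *= norms[:, None]
    return D, A
```

Because both modalities share one code per patch, an atom learned this way captures paired appearance across modalities; the weights keep non-corresponding training areas from corrupting those atoms.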

Cited by 13 publications (14 citation statements)
References 10 publications (16 reference statements)
“…Some other approaches have used atlas templates for single-label [43] or for multi-label brain tumor segmentation [10,18,19]. All these above methods need a multi-channel input (multi-modality MRI data).…”
Section: Figure
confidence: 99%
“…All these above methods need a multi-channel input (multi-modality MRI data). In recent years, some approaches used image patch dictionary learning for single-label tumor segmentation [39,44] or multi-modal coupled dictionary learning for microscopical image registration [10]. These methods used one dictionary for each class and the residual error to discriminate the tumor/non-tumor classes.…”
Section: Figure
confidence: 99%
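The citing work above describes a common segmentation scheme: learn one dictionary per class and assign each patch to the class whose dictionary reconstructs it with the smallest residual error. A minimal sketch of that decision rule follows; it uses plain least-squares coding for brevity, whereas the cited methods use sparse coding, and the labels and function name are hypothetical.

```python
import numpy as np

def classify_by_residual(x, dictionaries):
    """Assign patch x to the class whose dictionary reconstructs it best.

    dictionaries: dict mapping class label -> (d, k) dictionary matrix.
    Least-squares coding is an illustrative simplification of the sparse
    coding used in the cited segmentation methods.
    """
    best_label, best_err = None, np.inf
    for label, D in dictionaries.items():
        code, *_ = np.linalg.lstsq(D, x, rcond=None)
        err = np.linalg.norm(x - D @ code)   # residual under this class
        if err < best_err:
            best_label, best_err = label, err
    return best_label
```

With one well-fit dictionary per tissue class, a tumor patch yields a small residual under the tumor dictionary and a large one under the non-tumor dictionary, which is exactly the discriminative signal the quoted passage refers to.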
“…Generally, most multi-view learning methods belong to the category of graph-based methods. Among them, one representative group of multi-view methods [15,31,32] aims to fuse multiple features into a single representation by exploiting the common latent space shared by all views. For example, multi-view sparse coding [31,32] combines the shared latent representation for the multi-view information through a series of linear maps acting as dictionaries.…”
Section: A Multi-view Learning
confidence: 99%
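The shared-latent-space idea in the quote above — each view modeled as a per-view linear map (dictionary) applied to one common code — admits a compact sketch. This is a hypothetical illustration: it recovers the shared code with ridge-regularized least squares, while the cited multi-view sparse coding methods impose sparsity instead.

```python
import numpy as np

def infer_shared_code(views, dicts, lam=1e-3):
    """Infer one latent code shared by all views.

    Each view v is modeled as x_v ~= D_v @ a for a common code a.
    views: list of observation vectors, one per view.
    dicts: list of (d_v, k) per-view dictionaries (the linear maps).
    Solves the stacked ridge-regularized least-squares problem.
    """
    D = np.vstack(dicts)            # stack the per-view linear maps
    x = np.concatenate(views)       # stack the per-view observations
    k = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ x)
```

Stacking the views makes the shared code a joint fit to all modalities at once, which is what lets these methods fuse multiple features into a single representation.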
“…For multi-label brain tumor segmentation, several researchers used either random forests based on feature extraction [8,9,11] or Markov Random Fields (MRF) [10,13]. Some other approaches used atlas templates for single-label [14] or multi-label brain tumor segmentation [4,6,7]. All the above methods require multi-channel input (multi-modality MRI data).…”
Section: Introduction
confidence: 99%
“…learning for microscopical image registration [7]. These methods used the residual error to discriminate the tumor/non-tumor classes and required one dictionary for each class.…”
Section: Introduction
confidence: 99%