2015
DOI: 10.1007/s11263-015-0831-z

Generalized Dictionaries for Multiple Instance Learning

Abstract: We present a multi-class multiple instance learning (MIL) algorithm using the dictionary learning framework where the data is given in the form of bags. Each bag contains multiple samples, called instances, out of which at least one belongs to the class of the bag. We propose a noisy-OR model and a generalized mean-based optimization framework for learning the dictionaries in the feature space. The proposed method can be viewed as a generalized dictionary learning algorithm since it reduces to a novel discrimi…
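The noisy-OR model mentioned in the abstract encodes the MIL assumption that a bag is positive if at least one of its instances is positive. A minimal sketch of that bag-level probability is below; the function name and the toy probabilities are illustrative, not taken from the paper.

```python
import numpy as np

def noisy_or_bag_probability(instance_probs):
    """Noisy-OR bag probability (illustrative sketch).

    Under the MIL assumption, a bag is positive if at least one
    instance is positive, so:
        P(bag positive) = 1 - prod_i (1 - p_i),
    where p_i is the probability that instance i is positive.
    """
    p = np.asarray(instance_probs, dtype=float)
    return 1.0 - np.prod(1.0 - p)

# A single confident positive instance drives the bag probability high.
print(noisy_or_bag_probability([0.05, 0.9, 0.1]))
# A bag of only weak instances stays low.
print(noisy_or_bag_probability([0.05, 0.1, 0.08]))
```

Note that the product makes the bag probability monotone in each instance probability, which is what lets a dictionary learning objective built on this model push all instances of negative bags toward poor representation by the target dictionary.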

Cited by 29 publications (22 citation statements)
References 44 publications
“…Wei et al proposed scalable MIL solutions for large datasets using two new mappings for the representation of bags: one based on locally aggregated descriptors, called miVLAD, and the other using a Fisher vector representation, called miFV [21]. Other popular solutions include Multiple-Instance Learning via Embedded Instance Selection (MILES) [22], deterministic annealing for MIL [23], semi-supervised SVMs for MIL (MissSVM) [24], generalized dictionaries for MIL [25], MIL with manifold bags [26], and MIL with randomized trees [27]. Apart from these, many neural-network-based solutions have also been proposed for multiple instance learning [28]–[30].…”
Section: Figure 1 - Illustration of Concept of Bags; a Bag Is Labeled P…
confidence: 99%
“…The parameter settings of MI-HE for this experiment are T = 1, M = 9, ρ = 0.8, b = 5, β = 5 and λ = 1 × 10⁻³. MI-HE was compared to the state-of-the-art MIL algorithms eFUMI [25], [17], MI-SMF and MI-ACE [19], DMIL [53], [54], EM-DD [24] and mi-SVM [21]. The mi-SVM algorithm was added to these experiments to include a comparison MIL approach that does not rely on estimating a target signature.…”
Section: A. Simulated Data
confidence: 99%
“…Since the proposed model aims to emphasize the most likely true positive instance from each positive bag and relies on the "soft maximum" behavior of this generalized mean model, it is expected that the model will work well with b greater than 1. The setting of this b value was discussed in [35], [53], where b was set to 1.5 [35] and 10 [53] and observed to work well. Fig.…”
Section: Tree Species Classification From NEON Data
confidence: 99%
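The "soft maximum" behavior of the generalized mean referenced in the excerpt above can be seen in a small sketch. The function below is a standard power mean, not code from any of the cited papers; the sample scores are illustrative.

```python
import numpy as np

def generalized_mean(x, b):
    """Generalized (power) mean: ((1/n) * sum(x_i^b))^(1/b).

    b = 1 gives the arithmetic mean; as b grows, the result approaches
    max(x). This is why b > 1 acts as a "soft maximum" over the
    per-instance scores within a positive bag.
    """
    x = np.asarray(x, dtype=float)
    return np.mean(x ** b) ** (1.0 / b)

scores = [0.2, 0.3, 0.9]
print(generalized_mean(scores, 1))   # arithmetic mean of the scores
print(generalized_mean(scores, 10))  # pulled toward max(scores)
```

Intermediate values of b (such as the 1.5 and 10 used in the cited works) interpolate between averaging all instances and keying entirely on the single strongest instance, while remaining differentiable for optimization.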
“…However, among supervised dictionary learning methods, only a few approaches address the problem given inaccurate MIL labels. These include MMDL [12], which trains many linear SVM classifiers and views the estimated parameters as dictionary atoms, and DMIL [13], [14], which learns class-specific dictionaries by maximizing the noisy-OR model in such a way that all negative instances are poorly represented by the estimated target dictionary. As outlined in Sec. I, DL-FUMI is distinct from these existing methods through its use of a shared background dictionary.…”
Section: B. Supervised Dictionary Learning
confidence: 99%
“…The USPS data set contains 9298 images of handwritten digits from 0 to 9. Each image [14]. Specifically, for each class c, 50 positive training bags were generated.…”
Section: B. USPS Digit Classification
confidence: 99%