2012
DOI: 10.1016/j.ipl.2012.08.017
On multiple-instance learning of halfspaces

Cited by 3 publications (4 citation statements); references 9 publications.
“…Theorem 1 is in line with previous complexity results for MI classification via hyperplanes (Kundakcioglu et al. 2010; Diochnos et al. 2012). For clarity, we include the theorem below, expressed in terms of our formalism:…”
Section: Can All Three Properties Be Satisfied? (supporting)
confidence: 75%
“…The proof of Theorem 2 (Diochnos et al. 2012) reduces a 3-SAT instance to an instance of MI-CONSIS such that there is a strongly consistent hyperplane if the 3-SAT formula is satisfiable and no consistent (in the usual sense) hyperplane if the formula is not satisfiable. Thus, the proof works for either notion of consistency, though the distinction is not made in the original work.…”
Section: Can All Three Properties Be Satisfied? (mentioning)
confidence: 99%
“…From the perspective of computational complexity, Auer et al. (1998) reduce learning DNF formulae to learning APRs from MI data, showing that efficiently PAC-learning these MI instance concepts is impossible (unless NP = RP) when arbitrary distributions over r-tuples are allowed. Similarly, finding classifying hyperplanes for MI data has been shown to be NP-complete (Diochnos, Sloan, and Turán 2012; Kundakcioglu, Seref, and Pardalos 2010). However, all of these reductions rely on generating bags such that certain negative instances only appear in positive bags.…”
Section: Relation To Previous Work (mentioning)
confidence: 99%
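The hardness results quoted above concern halfspaces under the standard multiple-instance labeling rule: a bag is labeled positive iff at least one of its instances falls on the positive side of the hyperplane, and a hyperplane is consistent if it reproduces every bag label. A minimal sketch of that rule (the helper names and sample data here are illustrative, not taken from the cited papers):

```python
# Standard MIL labeling rule for a halfspace (w, b):
# an instance x is positive iff <w, x> + b > 0, and a bag is
# positive iff it contains at least one positive instance.

def instance_label(x, w, b):
    """Sign test <w, x> + b > 0 for a single instance."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

def bag_label(bag, w, b):
    """A bag is positive iff some instance in it is positive."""
    return any(instance_label(x, w, b) for x in bag)

def is_consistent(bags, labels, w, b):
    """Does the halfspace (w, b) reproduce every bag label?"""
    return all(bag_label(bag, w, b) == y for bag, y in zip(bags, labels))

# Two 2-D bags: the first contains an instance with positive first
# coordinate, the second does not.
bags = [[(1.0, 0.0), (-2.0, 1.0)],
        [(-1.0, -1.0), (-3.0, 2.0)]]
labels = [True, False]
w, b = (1.0, 0.0), 0.0  # hyperplane x1 = 0

print(is_consistent(bags, labels, w, b))  # prints True
```

Checking consistency of a *given* hyperplane is easy, as above; the NP-completeness results cited here concern *finding* such a hyperplane when the dimension is part of the input.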
“…It follows from recent results of Sabato and Tishby [20] that an algorithm for "Minimum One-sided Disagreement" can be transformed (without much loss of efficiency) into an algorithm that is successful in the hard version of the MIL model. The learning problem for the hard version of the MIL model is known to be NP-hard for axis-parallel hyper-rectangles of variable dimension (according to a result in [5]) and for Euclidean halfspaces of variable dimension (according to results in [10]). On the positive side, our algorithms for the classes "Unions of k Intervals", "Axis-parallel Rectangles", T 2,k and TREE(2, n, 2, k) can be used as subroutines to solve the hard version of the corresponding MIL problem.…”
Section: Learning From Multiple-instance Examples (MIL) (mentioning)
confidence: 99%