2014
DOI: 10.1007/978-3-319-12571-8_24

Widened KRIMP: Better Performance through Diverse Parallelism

Abstract: We demonstrate that the previously introduced Widening framework is applicable to state-of-the-art Machine Learning algorithms. Using Krimp, an itemset mining algorithm, we show that parallelizing the search finds better solutions in nearly the same time as the original, sequential/greedy algorithm. We also introduce Reverse Standard Candidate Order (RSCO) as a candidate ordering heuristic for Krimp. 1 Introduction Research into parallelism in Machine Learning has primarily focused on reducing the execution time…

Cited by 8 publications (12 citation statements)
References: 19 publications
“…The widening framework terms this Top-k-widening, i.e., $M_{i+1} = s_{\mathrm{Top}\text{-}k}(r(M_i)) : |M_{i+1}| = k$. WIDENING begins to widen the search paths beyond a simple greedy mechanism when diversity is brought into play. The notion of diversity can be implemented in either the refining step as in [24,25] or in the selection step as in [11,12]. Given a diverse refinement operator, $r_\Delta(\cdot)$, as in [24,25], where a diversity function, $\Delta$, is imposed on the output, DIVERSE TOP-K WIDENING is described by…”
Section: Widening
confidence: 99%
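The loop quoted above can be illustrated with a short sketch of Top-k widening with diversity applied in the selection step (one of the two options the statement mentions). This is a minimal, assumption-laden Python rendering, not the implementation from the cited papers; refine, score, distance, and min_dist are hypothetical placeholders for the model-specific refinement operator r(·), the scoring function, and the diversity criterion Δ.

```python
# Minimal sketch of Diverse Top-k Widening, assuming user-supplied callables.
# `refine`, `score`, and `distance` are illustrative placeholders, not the
# interface of any cited implementation.

def diverse_top_k_widening(models, refine, score, distance, k, steps, min_dist):
    """Run k parallel search paths: refine every current model, then keep k
    candidates that score well while staying mutually diverse."""
    for _ in range(steps):
        # r(M_i): all refinements of the current set of models
        candidates = [c for m in models for c in refine(m)]
        # Diverse selection: take the best-scoring candidates that lie at
        # least min_dist away from everything already selected.
        selected = []
        for cand in sorted(candidates, key=score, reverse=True):
            if all(distance(cand, s) >= min_dist for s in selected):
                selected.append(cand)
            if len(selected) == k:
                break
        models = selected if selected else candidates[:k]
    return max(models, key=score)
```

With min_dist = 0 the loop degenerates to plain Top-k widening, M_{i+1} = s_{Top-k}(r(M_i)); raising the threshold is what forces the k search paths apart.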
“…Not Faster." Although the demonstrated examples, such as WIDENED KRIMP [24], WIDENED HIERARCHICAL CLUS-TERING [11], WIDENED BAYESIAN NETWORKS [25] and BUCKET SELECTION [12] have been able to find superior solutions, i.e., "better," they have been unable to demonstrate this ability in a run-time that is comparable to the standard versions of the greedy algorithms. "Not faster" is not intended to mean "slower.…”
Section: Introductionmentioning
confidence: 99%
“…p-dispersion-sum has the property of pushing the resultant subset to the margins of the original set, whereas the subset derived using p-dispersion-min-sum is more representative of the dataset as a whole [24]. Because of this property, and based on the results in [29], we favor p-dispersion-min-sum as the diverse subset selection method. …”
Section: Diversity
confidence: 99%
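To make the contrast between the two objectives concrete, the following brute-force Python sketch enumerates all subsets of size p and scores them under each criterion. It is feasible only for small candidate sets, the formal objectives are my reading of the two terms rather than code from the cited work, and items, dist, and p are hypothetical placeholders; a practical implementation would use a heuristic rather than exhaustive search.

```python
from itertools import combinations

def p_dispersion_sum(items, dist, p):
    """Maximize the total pairwise distance of the selected subset
    (tends to push the selection to the margins of the original set)."""
    return max(combinations(items, p),
               key=lambda S: sum(dist(a, b) for a, b in combinations(S, 2)))

def p_dispersion_min_sum(items, dist, p):
    """Maximize the smallest per-element sum of distances to the rest of the
    selection (tends to yield a subset more representative of the whole set)."""
    return max(combinations(items, p),
               key=lambda S: min(sum(dist(a, b) for b in S if b is not a)
                                 for a in S))
```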
“…This enables the system as a whole to avoid local optima and potentially find better models than the greedy learning algorithm would otherwise find. Previous work [13,29] has demonstrated its viability on real-world algorithms. This work builds on that with an application to the superexponentially-sized [28] hypothesis space of learning Bayesian Networks.…”
Section: Introduction
confidence: 99%
“…Another approach that focuses on leveraging parallel computing resources to improve models generated by a data mining algorithm, rather than speeding up the computation, has been proposed in [1]. The technique has already been shown to work well for the set covering problem and KRIMP [17].…”
Section: Introduction
confidence: 99%