2016
DOI: 10.1007/978-3-319-45550-1_2
OpenMP Parallelization and Optimization of Graph-Based Machine Learning Algorithms

Abstract: We investigate the OpenMP parallelization and optimization of two novel data classification algorithms. The new algorithms are based on graph and PDE solution techniques and provide significant accuracy and performance advantages over traditional data classification algorithms in serial mode. The methods leverage the Nyström extension to calculate eigenvalues/eigenvectors of the graph Laplacian, and this is a self-contained module that can be used in conjunction with other graph-Laplacian-based methods …
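The abstract describes a computational pattern (building graph-affinity blocks for a Nyström approximation of the graph Laplacian and parallelizing that work with OpenMP) that can be illustrated with a short, hypothetical sketch. The following C++/OpenMP example is a minimal sketch, assuming a Gaussian similarity kernel and dense row-major feature arrays; the function name `affinity_block` and all parameter names are illustrative assumptions, not identifiers from the paper.

```cpp
// Minimal illustrative sketch (not the authors' released code): an OpenMP-
// parallelized computation of the Gaussian affinity block between a small
// set of m sampled points X and the remaining n points Y. Forming this
// rectangular block is the dominant cost of a Nystrom-style approximation
// of the graph-Laplacian eigenvectors, and every entry is independent, so
// the outer loop parallelizes cleanly across threads.
#include <cmath>
#include <cstddef>
#include <vector>
#include <omp.h>

// X: m x d and Y: n x d feature matrices, stored row-major.
// sigma: Gaussian kernel width. Names are illustrative assumptions.
std::vector<double> affinity_block(const std::vector<double>& X, std::size_t m,
                                   const std::vector<double>& Y, std::size_t n,
                                   std::size_t d, double sigma) {
    std::vector<double> W(m * n);
    const double inv_two_sigma2 = 1.0 / (2.0 * sigma * sigma);

    #pragma omp parallel for schedule(static)
    for (long i = 0; i < static_cast<long>(m); ++i) {
        for (std::size_t j = 0; j < n; ++j) {
            double dist2 = 0.0;
            for (std::size_t k = 0; k < d; ++k) {
                const double diff = X[i * d + k] - Y[j * d + k];
                dist2 += diff * diff;
            }
            // W(i, j) = exp(-||x_i - y_j||^2 / (2 sigma^2))
            W[i * n + j] = std::exp(-dist2 * inv_two_sigma2);
        }
    }
    return W;
}
```

Once this m-by-n block and the small m-by-m block among the sampled points are available, a Nyström-style method typically only needs eigendecompositions of m-by-m matrices, which is what keeps the approach tractable for large graphs; the embarrassingly parallel block construction above is where OpenMP usually pays off.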

Cited by 11 publications (9 citation statements)
References 13 publications (9 reference statements)
“…• These methods have been introduced in earlier conference papers [61,42] as serial implementations and as a parallel implementation [56] on a supercomputer, without online code. Here we develop new parallel versions for real time implementations on the IPOL server for both hyperspectral imagery and the non local means functional for segmentation of RGB images.…”
Section: Main Contributions (supporting)
confidence: 38%
“…This motivated us to develop parallel implementations and optimizations of these two new algorithms [56]. In particular, for computations, we use an optimized implementation of the Nyström extension eigensolver on high performance computing systems.…”
Section: Introduction (contrasting)
confidence: 39%
“…explored in our future work. For high performance computing applications, the Nyström loop can be optimized for specific architectures as in [57].…”
Section: Discussion (mentioning)
confidence: 99%
“…Chapter 4 presents the work of one publication [107] which contains the full development of the hyperspectral image classification using two graph methods along with the parallelized codes and online demo. Chapter 5 discusses the publication [106] of detailed development of the parallelized algorithms using different high performance computing techniques.…”
Section: Introduction (mentioning)
confidence: 99%
“…Many successful machine learning methods have not been accelerated by high performance computing. This is a big opportunity and motivates us to develop parallel implementations and optimizations of the two classification algorithms [106] described in Chapter 3. We describe parallel implementations and optimizations of the new algorithms in Chapter 5.…”
Section: Introduction (mentioning)
confidence: 99%