2021
DOI: 10.1016/j.neunet.2021.03.002

Parallel orthogonal deep neural network

Cited by 11 publications (6 citation statements)
References 16 publications
“…Commonly, PSO seeks the best outcome for the population with the help of each particle's individual best value, fitness value, and global best value. Parallel computing is used to achieve the desired accuracy and to accelerate neural network training for large training data sets [17,18,19,20]. In [17], a parallelization strategy for convolutional neural network (CNN) training is proposed based on two major techniques to maximize the overlap.…”
Section: Introduction (mentioning)
confidence: 99%
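The excerpt above describes the PSO update only in words (each particle's individual best, its fitness, and the swarm's global best). The following is a minimal, generic PSO loop for illustration only; it is not the PPSO variant or any specific implementation from the works cited as [17]-[23], and the objective function, bounds, and hyperparameters are placeholder assumptions.

```python
import numpy as np

def pso(objective, dim, n_particles=30, n_iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimization: each particle tracks its own best
    position (individual value) while the swarm tracks a global best."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    vel = np.zeros((n_particles, dim))                    # particle velocities
    pbest = pos.copy()                                    # per-particle best positions
    pbest_val = np.array([objective(p) for p in pos])     # per-particle best fitness
    g = pbest[np.argmin(pbest_val)].copy()                # global best position

    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # velocity update: inertia + pull toward personal best + pull toward global best
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Example: minimize the sphere function in 10 dimensions.
best_x, best_f = pso(lambda x: float(np.sum(x ** 2)), dim=10)
```

Because the fitness evaluations of different particles are independent, they can be distributed across workers, which is the basis of the parallel PSO variants discussed in the excerpt.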
“…In [19], a parallel algorithm called Split and Conquer for solving Verification of Neural Network (VNN) formulas is proposed, using the Reluplex procedure and an iterative-deepening strategy. In [20], a parallel deep neural network architecture with an embedded organization mechanism is proposed, which enforces diversity among the deep neural networks used as base models. The Parallel Particle Swarm Optimization (PPSO) algorithm is an example of a parallel computing process used to reduce the processing time of computationally intensive tasks [21,22,23].…”
Section: Introduction (mentioning)
confidence: 99%
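The excerpt mentions a parallel architecture that enforces diversity among base models. The sketch below shows one generic way such diversity can be encouraged, by penalizing pairwise cosine similarity between the base models' weights; it is not the exact embedded organization mechanism of [20], and the class name, the choice of penalizing only the first-layer weights, and the loss weighting lambda_orth are assumptions.

```python
import torch
import torch.nn as nn

class ParallelBaseModels(nn.Module):
    """Several small MLPs used side by side as base models of an ensemble."""

    def __init__(self, in_dim, hidden_dim, out_dim, n_models=4):
        super().__init__()
        self.bases = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, out_dim))
            for _ in range(n_models)
        ])

    def forward(self, x):
        outs = [m(x) for m in self.bases]               # one prediction per base model
        return torch.stack(outs, dim=0).mean(0), outs   # averaged prediction + individual outputs

def orthogonality_penalty(bases):
    """Sum of squared cosine similarities between the first-layer weights of
    every pair of base models; driving it to zero pushes the models toward
    mutually orthogonal (non-redundant) feature detectors."""
    flat = [m[0].weight.flatten() for m in bases]       # first Linear layer of each base
    penalty = torch.zeros(())
    for i in range(len(flat)):
        for j in range(i + 1, len(flat)):
            cos = torch.nn.functional.cosine_similarity(flat[i], flat[j], dim=0)
            penalty = penalty + cos ** 2
    return penalty

# Assumed training objective: task loss on the averaged prediction plus the diversity term.
# loss = task_loss(mean_pred, target) + lambda_orth * orthogonality_penalty(model.bases)
```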
“…Benchmark datasets. The benchmark data sets used in this study include the Davis [43], Metz [44], KIBA [45], and PDBbind [46] data sets. The binding affinities of the four data sets were measured in dissociation constant (K_d), inhibition constant (K_i), KIBA metric, and dissociation constant (K_d).…”
Section: Methods (mentioning)
confidence: 99%
“…It was noted that the calculation of the KIBA metric is shown in Equation 1 [45]. Therefore, when calculating the generalization, the results of the Pearson correlation coefficient (R_p) and the Spearman correlation coefficient (R_s) tested between the KIBA data set and the other data sets were taken with the opposite sign. The formula for the conversion of K_d and binding free energy ($\Delta G_{\mathrm{bind}}$) is shown in.…”
Section: Methods (mentioning)
confidence: 99%
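The excerpt truncates before reproducing the cited conversion between K_d and binding free energy. The standard thermodynamic relation is given below as background; it is an assumption here, not necessarily the exact equation used in the cited work.

```latex
% Binding free energy from the dissociation constant (standard relation, stated as background).
% R is the gas constant, T the absolute temperature, and K_d is expressed in molar units;
% K_d < 1 M gives a negative \Delta G_{\mathrm{bind}}, i.e. favorable binding.
\Delta G_{\mathrm{bind}} = RT \ln K_d
```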
“…$w^{(r)}_i$ by varying α. In (Larrazabal et al. 2021; Ayinde, Inanc, and Zurada 2019; Mashhadi, Nowaczyk, and Pashami 2021), inducing orthogonality (i.e., forcing cosine similarity to zero) between parameters prevents a model from learning redundant features given its learning capacity. In this context, we expect the local model of each client $w^{(r)}_{l,i}$ to learn client-specific knowledge that cannot be captured by training of the global model $w^{(r)}_{g,i}$ in PHASE I.…”
(mentioning)
confidence: 99%
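This excerpt invokes orthogonality between local and global parameters by forcing their cosine similarity to zero. Below is a rough sketch of such a regularizer under stated assumptions: the function name, the decision to flatten and concatenate all parameters, the detaching of the global weights, and the weighting lambda_orth are illustrative choices, not the cited method.

```python
import torch

def cosine_orthogonality_reg(local_params, global_params):
    """Squared cosine similarity between the flattened local and global parameter
    vectors; minimizing it encourages the local model to capture client-specific
    directions not already covered by the global model."""
    l = torch.cat([p.flatten() for p in local_params])
    g = torch.cat([p.detach().flatten() for p in global_params])  # global weights treated as fixed
    cos = torch.nn.functional.cosine_similarity(l, g, dim=0)
    return cos ** 2

# Assumed usage inside a client's local update:
# loss = task_loss + lambda_orth * cosine_orthogonality_reg(local_model.parameters(),
#                                                           global_model.parameters())
```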