2018
DOI: 10.1063/1.5041638
Linear kernel Hopfield neural network approach in horn clause programming

Cited by 3 publications (6 citation statements)
References 13 publications
“…Similarity indices, such as Jaccard's Index [49], the Sokal-Sneath 2 Index [50], and the Variation Index [50], can be employed to assess the similarity between the final states obtained by the model. In addition, we adopt the Symmetric Mean Absolute Percentage Error (SMAPE) [51], the Median Absolute Percentage Error [48], the fitness energy landscape [52], computation time [53], and specificity analysis [54].…”
Section: Results (mentioning)
confidence: 99%
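The similarity and error measures named in the statement above can be computed directly from a pair of final neuron states. A minimal sketch of two of them, Jaccard's Index and SMAPE; the function names and test vectors are illustrative, not taken from the cited papers:

```python
import numpy as np

def jaccard_index(a, b):
    """Jaccard similarity between two binary final-state vectors."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def smape(actual, predicted):
    """Symmetric Mean Absolute Percentage Error, in percent."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    denom = (np.abs(actual) + np.abs(predicted)) / 2.0
    mask = denom != 0  # skip terms where both values are zero
    return 100.0 * np.mean(np.abs(actual[mask] - predicted[mask]) / denom[mask])

# Illustrative states and values only.
print(jaccard_index([1, 1, 0, 1, 0, 0], [1, 0, 0, 1, 0, 1]))  # 2 shared / 4 in union -> 0.5
print(smape([1, 2, 3], [1, 2, 4]))
```

Both measures compare a retrieved final state against a benchmark state; Jaccard rewards overlapping active neurons, while SMAPE penalizes relative deviation symmetrically in both directions.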
“…A different approach can be employed for optimizing the retrieval phase of HNN-RAN2SATEA. Different types of Hopfield Neural Networks, such as the Mutation Hopfield Neural Network [30], the Mean Field Theory Hopfield Network [46], the Boltzmann Hopfield Network [47], and the Kernel Hopfield Network [48], drive the local minimum solution to the global minimum solution in different ways. More performance metrics can be investigated to authenticate our results.…”
Section: Discussion (mentioning)
confidence: 99%
“…The neuron perturbation reduces the effect of the suboptimal synaptic weight during the learning phase. Other HNN models, such as the BHNN [35] and MFTHNN [40], reduced the number of local minima but failed to achieve optimal global minimum energy as the number of clauses increased. According to Figures 4-6, the conventional HNN has the highest error at NN = 60 because the network retrieved the suboptimal synaptic weight.…”
Section: Results (mentioning)
confidence: 99%
“…Therefore, the value of T was selected according to [35] for the BHNN and [39] for the MFTHNN. According to Table 4, the linear kernel is applied due to the good agreement with the logic programming problem as outlined in the work of [40]. Based on Table 6, the hyperbolic tangent (HTAF) was selected due to the differentiable nature of the function and the ability to establish the non-linear relationship among the neuron connections [42].…”
Section: Simulation (mentioning)
confidence: 99%
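The statement above pairs a linear kernel with the hyperbolic tangent activation function (HTAF). With a linear kernel, the synaptic weights reduce to the classical Hebbian outer-product rule, and HTAF drives smooth asynchronous neuron updates. A hedged sketch under those assumptions; the stored patterns and noisy probe are illustrative, not the paper's actual simulation setup:

```python
import numpy as np

# Two illustrative bipolar (+/-1) patterns to store.
patterns = np.array([[1, -1, 1, -1],
                     [-1, -1, 1, 1]], dtype=float)

# Linear-kernel weights reduce to the Hebbian outer-product rule;
# the diagonal is zeroed so no neuron excites itself.
W = patterns.T @ patterns
np.fill_diagonal(W, 0.0)

def htaf_recall(state, beta=2.0, sweeps=5):
    """Asynchronously update each neuron with the hyperbolic tangent
    activation, then snap the settled state back to bipolar values."""
    s = np.asarray(state, dtype=float).copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = np.tanh(beta * (W[i] @ s))
    return np.sign(s)

# Probe: the second stored pattern with one neuron flipped.
noisy = np.array([-1, 1, 1, 1], dtype=float)
print(htaf_recall(noisy))  # recovers the stored pattern [-1, -1, 1, 1]
```

Asynchronous (neuron-by-neuron) updates are used here because synchronous updates of all neurons at once can fall into two-cycles instead of settling into a stored pattern; the differentiability of tanh is what the quoted statement cites as the reason for choosing HTAF.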