2020
DOI: 10.1109/tnnls.2019.2910555
Nonsynaptic Error Backpropagation in Long-Term Cognitive Networks

Cited by 18 publications (11 citation statements)
References 20 publications
“…In this model, each neural concept C_i denotes a domain variable, while weights w_ji ∈ ℝ represent the rate of change in the conditional mean of C_i with respect to C_j, assuming that the activation values of the remaining neurons impacting C_i are fixed [4]. Hidden neurons are not allowed, as they cannot be interpreted naturally.…”
Section: Network Construction
confidence: 99%
“…i, whereas the weights w_ij and bounds L_i and U_i remain unaltered [4]. Overall, the learning task consists of adjusting the shape of the transfer function associated with the i-th neuron in each iteration.…”
Section: Nonsynaptic Learning
confidence: 99%
“…The LTCN model was introduced in [14] to overcome these issues. In short, LTCNs are neither causal nor fuzzy, and their weights can take values in the real domain.…”
Section: Long-term Cognitive Network
confidence: 99%
“…To tackle the last two issues, Nápoles et al. [14] introduced the Long-term Cognitive Networks (LTCNs). In this FCM-like model, the weights are not constrained to any specific interval, and the learnable parameters are computed using a non-synaptic backpropagation algorithm.…”
Section: Introduction
confidence: 99%
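The statements above highlight the key departure from classic fuzzy cognitive maps: LTCN weights may take any real value rather than being restricted to [-1, 1]. A minimal sketch of FCM-like recurrent reasoning under that relaxation, assuming a plain sigmoid transfer (the update rule and names are illustrative, not the paper's exact formulation):

```python
import numpy as np

def reason(a0, W, steps=10):
    # iterate the FCM-like rule a(t+1) = f(W^T a(t)) for a fixed number
    # of steps; W is unconstrained (entries may lie outside [-1, 1]),
    # while the sigmoid keeps activations bounded in (0, 1)
    a = np.asarray(a0, dtype=float)
    for _ in range(steps):
        a = 1.0 / (1.0 + np.exp(-(W.T @ a)))
    return a
```

The sigmoid guarantees bounded activations regardless of weight magnitude, which is why dropping the interval constraint on the weights does not destabilize the recurrent inference.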