Handbook of Graphs and Networks 2002
DOI: 10.1002/3527602755.ch9
Theory of interacting neural networks

Cited by 9 publications (8 citation statements). References 22 publications.
“…Neural cryptography (Kinzel 2002, Kinzel 2002) is based on the effect that two neural networks are able to synchronize by mutual learning (Ruttor et al 2006). In each step of this online procedure they receive a common input pattern and calculate their output.…”
Section: Neural Cryptography
Citation type: mentioning, confidence: 99%
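The synchronization effect described in this statement can be sketched in a few lines. The snippet below is a minimal illustration, not code from the chapter: two perceptrons with clipped integer weights receive the same random input pattern, and each applies a Hebbian step only when their outputs agree; the bound on the weights is what eventually drives both networks into an identical state. All names and constants (N, L, the number of steps, the specific learning rule) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100   # input dimension (illustrative)
L = 3     # weights are kept as integers in [-L, L] (illustrative bound)

# Two networks start from independent random weights.
w_a = rng.integers(-L, L + 1, size=N)
w_b = rng.integers(-L, L + 1, size=N)

def sync_step(w_a, w_b):
    """One step of mutual learning: both networks see the same input."""
    x = rng.choice([-1, 1], size=N)      # common input pattern
    out_a = np.sign(w_a @ x) or 1        # map a zero sum to +1
    out_b = np.sign(w_b @ x) or 1
    if out_a == out_b:                   # learn only when the outputs agree
        w_a = np.clip(w_a + out_a * x, -L, L)
        w_b = np.clip(w_b + out_b * x, -L, L)
    return w_a, w_b

for _ in range(5000):                    # generous bound; sync typically happens much earlier
    w_a, w_b = sync_step(w_a, w_b)
    if np.array_equal(w_a, w_b):         # full synchronization reached
        break
```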
“…Two partners A and B want to exchange a secret message over a public channel. In order to protect the content against an attacker T, who is listening to the communication, A encrypts the message, but B needs A's secret key to decrypt it; the key itself, however, has to be agreed upon over the public channel (Kinzel 2002). This can be achieved by synchronizing two TPMs (Tree Parity Machines), one for A and one for B, respectively.…”
Section: Neural Cryptography
Citation type: mentioning, confidence: 99%
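The key-exchange variant referred to here can be illustrated with a hedged sketch of the commonly described tree parity machine protocol: each machine has K hidden units with N inputs and integer weights bounded by L, the output is the product of the hidden signs, and a Hebbian step is applied only to hidden units that agree with the total output and only when both machines produce the same output. The constants and the final key-derivation step are assumptions for illustration, not details taken from the cited work.

```python
import numpy as np

rng = np.random.default_rng(1)

K, N, L = 3, 10, 3   # hidden units, inputs per unit, weight bound (illustrative)

def new_tpm():
    """Random initial weights of a tree parity machine, shape (K, N)."""
    return rng.integers(-L, L + 1, size=(K, N))

def tpm_output(w, x):
    """Hidden signs sigma_k = sign(w_k . x_k); output tau = product of the sigmas."""
    sigma = np.sign(np.einsum('kn,kn->k', w, x))
    sigma[sigma == 0] = 1
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau):
    """Update only the hidden units whose sign matches the total output, then clip."""
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + tau * x[k], -L, L)
    return w

w_a, w_b = new_tpm(), new_tpm()
for step in range(100_000):                  # generous bound; sync usually needs far fewer steps
    if np.array_equal(w_a, w_b):
        break
    x = rng.choice([-1, 1], size=(K, N))     # common, publicly known random input
    sig_a, tau_a = tpm_output(w_a, x)
    sig_b, tau_b = tpm_output(w_b, x)
    if tau_a == tau_b:                       # learn only when both outputs coincide
        w_a = hebbian_update(w_a, x, sig_a, tau_a)
        w_b = hebbian_update(w_b, x, sig_b, tau_b)

shared_key = w_a.tobytes()                   # both parties can now derive the same secret key
```

Once the two weight matrices are identical, A and B hold the same secret state without ever having transmitted the weights themselves; only the inputs and the single-bit outputs crossed the public channel.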
“…Two types of neural networks have been studied in the context of the minority game (MG) [17,18,19]. Beyond the mere academic question of how well or badly they can perform, it is worth noting that these papers were interested for the first time in interacting neural networks.…”
Section: Neural Network
Citation type: mentioning, confidence: 99%
“…Refs [17,18] introduced simple perceptrons playing the minority game. Each perceptron i = 1, …, N is made up of M weights w_i = (w_i^1, …, w_i^M) which are drawn at random before the game begins.…”
Section: Neural Network
Citation type: mentioning, confidence: 99%
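A compact simulation of this setup might look like the sketch below. It assumes the simplest variant: N players with fixed random weight vectors of length M, a shared input given by the window of the last M minority decisions, and each player's action taken as the sign of its weighted sum. The volatility measure and all constants are illustrative choices, not specifics from Refs [17,18].

```python
import numpy as np

rng = np.random.default_rng(2)

N = 101     # odd number of players so a strict minority always exists
M = 8       # length of the common history window (illustrative)
T = 10000   # number of rounds

# Each player's perceptron: M random weights, drawn once before the game begins.
w = rng.normal(size=(N, M))

history = rng.choice([-1, 1], size=M)           # shared input: last M minority decisions
volatility_sq = 0.0

for _ in range(T):
    actions = np.sign(w @ history)              # each perceptron's +1 / -1 decision
    actions[actions == 0] = 1
    A = actions.sum()                           # aggregate bid of all players
    minority = -int(np.sign(A))                 # the side chosen by fewer players wins
    volatility_sq += A * A / T                  # sigma^2 measures global inefficiency
    history = np.append(history[1:], minority)  # slide the common history window

print("sigma^2 / N =", volatility_sq / N)       # ~1 corresponds to random guessing
```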