2021
DOI: 10.48550/arxiv.2106.02749
Preprint

Predify: Augmenting deep neural networks with brain-inspired predictive coding dynamics

Bhavin Choksi,
Milad Mozafari,
Callum Biggs O'May
et al.

Abstract: Deep neural networks excel at image classification, but their performance is far less robust to input perturbations than human perception. In this work we explore whether this shortcoming may be partly addressed by incorporating brain-inspired recurrent dynamics in deep convolutional networks. We take inspiration from a popular framework in neuroscience: "predictive coding". At each layer of the hierarchical model, generative feedback "predicts" (i.e., reconstructs) the pattern of activity in the previous layer…
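The recurrent dynamic sketched in the abstract can be illustrated with a toy update rule: each layer blends its feedforward drive, the generative feedback prediction from the layer above, and its own previous state. This is a minimal scalar sketch; the mixing coefficients and the linear maps `ff` and `fb` are illustrative assumptions, not the paper's trained convolutional networks.

```python
# Toy sketch of the layer update described in the abstract: generative
# feedback "predicts" (reconstructs) activity from the layer above, and a
# layer's new state mixes feedforward input, feedback prediction, and a
# memory term. Coefficients here are assumed for illustration.

def predictive_step(x_prev, x_cur, x_next, ff, fb, beta=0.5, lam=0.25):
    """One recurrent update for a middle layer.

    x_prev, x_cur, x_next : activities of layers n-1, n, n+1 (scalars here)
    ff : feedforward map from layer n-1 to layer n
    fb : generative feedback map from layer n+1 to layer n
    """
    memory = 1.0 - beta - lam                # inertia on the current state
    return beta * ff(x_prev) + lam * fb(x_next) + memory * x_cur

# Linear toy maps (assumed; the real model uses learned convolutions).
ff = lambda x: 2.0 * x
fb = lambda x: 0.5 * x

x = predictive_step(x_prev=1.0, x_cur=0.0, x_next=4.0, ff=ff, fb=fb)
print(x)  # 0.5*2.0 + 0.25*2.0 + 0.25*0.0 = 1.5
```

Iterating this update over several timesteps lets the feedback term gradually pull noisy activations toward the network's generative predictions, which is the intuition behind the robustness gains the abstract describes.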

Cited by 3 publications (5 citation statements)
References 30 publications
“…At run time, this reconstructed activity is then added to the true activity at the first layer. While this form of predictive processing differs from the more common implementation of predictive coding (Rao & Ballard, 1999) in that it does not propagate error signals, it has been shown to increase performance on classification of images with pixel noise (Choksi et al., 2020, 2021). For the lateral connections, we implement within-feature spatial surround suppression and nearby facilitation by applying a one-channel convolutional filter defined by a difference of Gaussians (Figure 1F), inspired by Hasani et al. (2019) (though note that in their study the full model is trained with this filter present, which is not the case here).…”
Section: Different Ways Of Adding Recurrence All Increase Performance…
confidence: 99%
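The difference-of-Gaussians (DoG) lateral filter quoted above can be sketched directly: a narrow excitatory Gaussian (nearby facilitation) minus a broader one (surround suppression). The sketch below is 1-D for brevity and the radius and sigma values are illustrative assumptions; the cited work applies a single-channel 2-D convolution.

```python
import math

# Difference-of-Gaussians kernel: positive center (facilitation from near
# neighbours), negative flanks (suppression from the spatial surround).

def dog_kernel(radius=4, sigma_c=1.0, sigma_s=2.5):
    """Center-minus-surround kernel of length 2*radius + 1 (sigmas assumed)."""
    g = lambda x, s: math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
    return [g(i, sigma_c) - g(i, sigma_s) for i in range(-radius, radius + 1)]

def lateral_pass(signal, kernel):
    """'Same'-size 1-D convolution implementing the lateral interaction."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - r
            if 0 <= j < len(signal):
                acc += w * signal[j]
        out.append(acc)
    return out

k = dog_kernel()
# Center weight is excitatory, distant weights inhibitory.
print(k[len(k) // 2] > 0, k[0] < 0)  # True True
```

Because the filter has one channel, the same center-surround interaction is applied within each feature map independently, matching the "within-feature" wording in the quoted passage.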
“…We try out different ways of adding both feedback and lateral connections. First, we use bio-inspired methods: we add lateral connections that implement surround suppression, which is found throughout the visual system (Angelucci et al., 2017; Hasani, Soleymani, & Aghajan, 2019); for feedback connections, we implement a modified predictive processing scheme (Choksi et al., 2020, 2021; Mumford, 1992; Rao & Ballard, 1999). To add connections inspired mainly by the function we know recurrence to play, we also add lateral and feedback connections trained with backpropagation to classify noisy images.…”
Section: Introduction
confidence: 99%
“…these specifically hardwired microcircuits are precisely present in the cortex. The second major limitation is that most models are non-spiking networks, which lack biological realism [14,15,16,10]. This has been mostly due to the lack of a straightforward way to transfer classical rate-based predictive coding to a spiking implementation, i.e.…”
Section: Introduction
confidence: 99%
“…The difficulties in training spiking neural networks have also hindered efforts in this direction [5]. Additional trade-offs occur between biological fidelity and scalability, which makes it difficult to study more complex phenomena in a biological network [10,12,16,15]. The few works implementing predictive coding in spiking neural networks, such as [12], leave a gap in the literature for more biologically realistic network models without specific architectural biases.…”
Section: Introduction
confidence: 99%
“…Some recent studies indicated that a renewed focus on inspiration from the human nervous system for neural networks could further aid the improvement and development of artificial intelligence. For example, the deep network PredNet (Lotter et al., 2016) and the neural-network robustness-enhancement package Predify (Choksi et al., 2021) are inspired by the concept of predictive coding observed in neuroscience. In addition, the brain-inspired replay method effectively solves the problem of catastrophic forgetting in artificial neural networks (van de Ven et al., 2020).…”
Section: Introduction
confidence: 99%