Hippocampal sharp wave ripples (SPW-R) have been identified as key biomarkers of important brain functions such as memory consolidation and decision making. SPW-R detection typically relies on hand-crafted feature extraction, and laborious manual curation is often required. In this multidisciplinary study, we propose a novel, self-improving artificial intelligence (AI) method in the form of deep Recurrent Neural Networks (RNN) with Long Short-Term Memory (LSTM) layers that can learn features of SPW-R events from raw, labeled input data. The algorithm is trained using supervised learning on hand-curated data sets with SPW-R events. The input to the algorithm is the local field potential (LFP), the low-frequency part of extracellularly recorded electric potentials from the CA1 region of the hippocampus. The output prediction can be interpreted as the time-varying probability of SPW-R events for the duration of the input. A simple thresholding applied to the output probabilities is found to identify times of events with high precision. The reference implementation of the algorithm, named 'RippleNet', is open source, freely available, and implemented using a common open-source framework for neural networks (tensorflow.keras), and can be easily incorporated into existing data analysis workflows for processing experimental data.

Keywords: Machine learning, deep learning, recurrent neural networks, neuroscience, sharp wave ripples (SPW-R), hippocampus CA1, local field potential (LFP).

Different existing and novel real-time algorithms for SPW-R detection were reviewed and tested on synthesized data by Sethi and Kemere (2014). Such methods are applied to band-pass filtered LFP data, commonly in the 150–250 Hz range, and may incorporate adaptive thresholding (Fritsch, Ibanez, and Parrilla 1999; Jadhav et al. 2012).
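As a point of reference, a conventional band-pass detector of this kind can be sketched in a few lines of Python. The function below, detect_ripples, is an illustrative assumption rather than code from any of the cited works: it filters the LFP in the ripple band, takes the Hilbert-transform amplitude envelope, and applies a fixed threshold of the envelope mean plus n_std standard deviations; the array lfp and sampling rate fs are placeholders for the user's data.

import numpy as np
from scipy import signal

def detect_ripples(lfp, fs, band=(150.0, 250.0), n_std=3.0):
    # zero-phase band-pass filter in the ripple band
    sos = signal.butter(4, band, btype='bandpass', fs=fs, output='sos')
    filtered = signal.sosfiltfilt(sos, lfp)
    # instantaneous amplitude envelope via the Hilbert transform
    envelope = np.abs(signal.hilbert(filtered))
    # fixed threshold: envelope mean + n_std standard deviations (illustrative choice)
    above = envelope > envelope.mean() + n_std * envelope.std()
    # onset/offset sample indices of supra-threshold segments
    crossings = np.diff(above.astype(int))
    onsets = np.flatnonzero(crossings == 1) + 1
    offsets = np.flatnonzero(crossings == -1) + 1
    if above[0]:   # segment already ongoing at start of recording
        offsets = offsets[1:]
    if above[-1]:  # segment still ongoing at end of recording
        onsets = onsets[:-1]
    # return (onset, offset) times in seconds
    return [(on / fs, off / fs) for on, off in zip(onsets, offsets)]

An adaptive variant would replace the fixed mean and standard deviation with running estimates. The same segment-extraction logic applies directly to the output of RippleNet, with the band-pass envelope replaced by the predicted per-sample event probability and the threshold chosen on the interval [0, 1].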
Deep learning: Recent years have seen a surge in different supervised and unsupervised learning algorithms, propelled by hardware acceleration, better training datasets and the advent of deep convolutional neural networks (CNN) in image classification and segmentation tasks (see e.g., LeCun, Bengio, and Hinton 2015; Rawat and Wang 2017). Deep CNNs are, however, not yet as commonplace for time series classification tasks. Unlike traditional neural networks (NNs) and CNNs, which typically employ a feed-forward hierarchical propagation of activation across layers, recurrent neural networks (RNN) have feedback connections and are suitable for sequential data such as speech and written text. One RNN architecture is the Long Short-Term Memory (LSTM) network (Hochreiter and Schmidhuber 1997), capable of classifying, processing and predicting events in time-series data, even in the presence of lags of unknown duration.
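To make the architecture concrete, the following is a minimal sketch of an LSTM-based sequence labeller in tensorflow.keras, in the spirit of, but not identical to, the RippleNet reference implementation; the layer count, the 64 hidden units per layer and the bidirectional wrapper are illustrative assumptions.

import tensorflow as tf

model = tf.keras.Sequential([
    # input: LFP segments of arbitrary length, one channel per time step
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(64, return_sequences=True),
        input_shape=(None, 1)),
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(64, return_sequences=True)),
    # one sigmoid unit per time step: time-varying SPW-R probability
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

Calling model.predict on input batches of shape (n_segments, n_timesteps, 1) then yields per-sample probabilities that can be thresholded as outlined above.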