Neuronal synapses transmit electrochemical signals between cells through the coordinated action of presynaptic vesicles, ion channels, scaffolding and adapter proteins, and membrane receptors. In situ structural characterization of numerous synaptic proteins simultaneously through multiplexed imaging facilitates a bottom-up approach to synapse classification and phenotypic description. Objective automation of efficient and reliable synapse detection within these datasets is essential for the high-throughput investigation of synaptic features. Convolutional neural networks can solve this generalized problem of synapse detection; however, these architectures require large numbers of training examples to optimize their thousands of parameters. We propose DoGNet, a neural network architecture that closes the gap between classical computer vision blob detectors, such as Difference of Gaussians (DoG) filters, and modern convolutional networks. DoGNet is optimized to analyze highly multiplexed microscopy data. Its small number of trainable parameters allows DoGNet to be trained with few examples, which facilitates its application to new datasets without overfitting. We evaluate the method on multiplexed fluorescence imaging data from both primary mouse neuronal cultures and mouse cortex tissue slices. We show that DoGNet outperforms convolutional networks when only a low-to-moderate number of training examples is available, and that it transfers efficiently between datasets collected by separate research groups. DoGNet synapse localizations can then be used to guide the segmentation of individual synaptic protein locations and spatial extents, revealing their spatial organization and relative abundances within individual synapses. The source code is publicly available:
https://github.com/kulikovv/dognet.
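To illustrate the core idea behind coupling DoG blob detection with learnable parameters, the sketch below shows a trainable Difference-of-Gaussians layer in which each input channel is filtered by the difference of two isotropic Gaussians whose standard deviations are learned. This is a minimal sketch, not the code from the linked repository; it assumes PyTorch, and the class and parameter names (DoGLayer, log_sigma1, log_sigma2) are illustrative only.

```python
# Minimal, illustrative sketch of a trainable Difference-of-Gaussians layer.
# NOT the implementation from https://github.com/kulikovv/dognet; assumes PyTorch.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class DoGLayer(nn.Module):
    """Filter each input channel with a difference of two isotropic Gaussians
    whose standard deviations are learned (two parameters per channel),
    instead of learning every kernel weight freely."""

    def __init__(self, in_channels, kernel_size=9):
        super().__init__()
        self.in_channels = in_channels
        self.kernel_size = kernel_size
        # Learnable log-sigmas keep the standard deviations positive.
        self.log_sigma1 = nn.Parameter(torch.full((in_channels,), math.log(1.0)))
        self.log_sigma2 = nn.Parameter(torch.full((in_channels,), math.log(2.0)))
        # Fixed squared-distance grid used to build the Gaussian kernels.
        r = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
        yy, xx = torch.meshgrid(r, r, indexing="ij")
        self.register_buffer("dist2", xx ** 2 + yy ** 2)

    def _gaussian(self, log_sigma):
        sigma2 = torch.exp(log_sigma).pow(2).view(-1, 1, 1)      # (C, 1, 1)
        g = torch.exp(-self.dist2 / (2.0 * sigma2))              # (C, k, k)
        return g / g.sum(dim=(-2, -1), keepdim=True)             # unit mass

    def forward(self, x):
        # Band-pass kernel = narrow Gaussian minus wide Gaussian.
        kernel = self._gaussian(self.log_sigma1) - self._gaussian(self.log_sigma2)
        kernel = kernel.unsqueeze(1)                              # (C, 1, k, k)
        # Depthwise convolution: one DoG filter per input channel.
        return F.conv2d(x, kernel, padding=self.kernel_size // 2,
                        groups=self.in_channels)


if __name__ == "__main__":
    layer = DoGLayer(in_channels=3)
    response = layer(torch.rand(1, 3, 64, 64))  # blob response maps, same spatial size
    print(response.shape)
```

In this sketch each channel contributes only two trainable parameters (the two sigmas) rather than k x k free kernel weights, which conveys why a DoG-based front end can be trained from far fewer examples than a standard convolutional layer; the full DoGNet architecture learns such band-pass responses end-to-end together with its downstream detection layers.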