Big data has transformed fields such as physics and genomics. Neuroscience is set to collect its own big data sets, but to exploit their full potential, there need to be ways to standardize, integrate and synthesize diverse types of data from different levels of analysis and across species. This will require a cultural shift toward sharing data across labs, as well as a central role for theorists in neuroscience research.

Big data, the buzz phrase of our time, has arrived on the neuroscientific scene, as it already has in physics, astronomy and genomics. It offers enlightenment and new depths of understanding, but it can also be a bane if it obscures, obstructs and overwhelms. The arrival of big data also marks a cultural transition in neuroscience, from many isolated 'vertical' efforts applying single techniques to single problems in single species toward more 'horizontal' efforts that integrate data collected across a wide range of techniques, problems and species. We face five main issues in making big data work for us.

First, data in neuroscience exist at an astonishing range of scales in both space and time. Neuroscientific data are obtained with a wide range of techniques, from patch clamping to optogenetics to fMRI (Fig. 1). Most of these techniques are used one at a time. One lab will record spikes from an array of neurons, but cannot determine which types of neurons they are or how they are connected to other neurons. Another lab will reconstruct the wiring diagram of the same circuit, but without the recording data needed to identify the properties of the reconstructed neurons. In some heroic cases, functional data have been laboriously combined with anatomical reconstructions (ref. 1), but rarely if ever in a broad behavioral context.