The brain is our softest and most vulnerable organ, and understanding its physics is a challenging but significant task. Massive efforts have been dedicated to testing the human brain, and various competing models have emerged to characterize its response to mechanical loading. However, selecting the best constitutive model remains a heuristic process that strongly depends on user experience and personal preference. Here we challenge the conventional wisdom of first selecting a constitutive model and then fitting its parameters to experimental data. Instead, we propose a new strategy that simultaneously discovers both the model and the parameters that best describe the data. Towards this goal, we integrate more than a century of knowledge in thermodynamics with state-of-the-art machine learning to build a family of Constitutive Artificial Neural Networks that enable automated model discovery for human brain tissue. Our overall design paradigm is to reverse engineer a Constitutive Artificial Neural Network from a set of functional building blocks that are, by design, a generalization of widely used and commonly accepted constitutive models, including the neo-Hookean, Blatz-Ko, Mooney-Rivlin, Demiray, Gent, and Holzapfel models. By constraining the input, output, activation functions, and architecture, our network a priori satisfies thermodynamic consistency, material objectivity, material symmetry, physical constraints, and polyconvexity. We demonstrate that our network autonomously discovers both the model and the parameters that best characterize the behavior of human gray and white matter under tension, compression, and shear. Importantly, our network weights translate naturally into physically meaningful material parameters, e.g., shear moduli of 1.82 kPa, 0.88 kPa, 0.94 kPa, and 0.54 kPa for the cortex, basal ganglia, corona radiata, and corpus callosum.
Our results suggest that Constitutive Artificial Neural Networks have the potential to induce a paradigm shift in soft tissue modeling, from user-defined model selection to automated model discovery. Our source code, data, and examples are available at https://github.com/LivingMatterLab/CANN.
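To make the building-block idea concrete, the following is a minimal conceptual sketch, not the authors' implementation (see the repository above for that): a one-term-per-model strain energy function assembled from fixed, thermodynamically motivated activations, here a neo-Hookean (linear) and a Demiray-type (exponential) term in the first invariant. The function and parameter names `strain_energy`, `uniaxial_I1`, `w1`, `w2`, and `b` are illustrative assumptions.

```python
import numpy as np

def strain_energy(I1, w1=0.5, w2=0.2, b=1.0):
    """Sketch of a two-term CANN-style strain energy (illustrative, not the
    published architecture):

        W(I1) = w1*(I1 - 3) + (w2/b)*(exp(b*(I1 - 3)) - 1)

    Non-negative weights w1, w2, b mirror the network constraints that
    enforce polyconvexity, and W(3) = 0 gives a stress-free reference state.
    """
    return w1 * (I1 - 3.0) + (w2 / b) * (np.exp(b * (I1 - 3.0)) - 1.0)

def uniaxial_I1(stretch):
    """First invariant I1 = tr(F F^T) for incompressible uniaxial stretch."""
    return stretch**2 + 2.0 / stretch

# In the undeformed configuration, stretch = 1 gives I1 = 3 and W = 0.
assert np.isclose(strain_energy(uniaxial_I1(1.0)), 0.0)
```

During model discovery, training would drive many such candidate weights toward zero, so the surviving non-zero terms identify the discovered model and its parameters, e.g., a non-zero `w1` alone recovers a neo-Hookean solid with shear-modulus-like stiffness `2*w1`.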