Neuronal networks are interesting physical systems in several respects: they operate outside thermodynamic equilibrium [1], a consequence of directed synaptic connections that prohibit detailed balance [2]; their dynamics are relaxational, so they do not conserve energy but constantly dissipate it; and they show collective behavior that self-organizes through exposure to structured, correlated inputs and through the interactions among their constituents. Their analysis, however, is complicated by three fundamental properties: neuronal activity is stochastic, the input-output transfer function of single neurons is non-linear, and networks are massively recurrent [3], which gives rise to strong interaction effects. Neuronal networks therefore resemble the systems studied in the field of (quantum) many-particle physics, where (quantum) fluctuations likewise need to be taken into account and the challenge is to understand collective phenomena that arise from the non-linear interactions of the constituents. Not surprisingly, similar methods can in principle be used to study these two a priori distinct classes of systems [4][5][6][7][8]. So far, however, the techniques employed within theoretical neuroscience are only beginning to harvest this potential.

Here we take essential steps towards this goal. Concretely, we adapt methods from statistical field theory and functional renormalization group techniques to the study of neuronal dynamics. A central motivation for this work is a coherent presentation of the technical machinery, well developed in other fields of physics [9], to study the statistics, and in particular the phase transitions, of stochastic neuronal systems, and to provide a bridge between the stochastic dynamics and effective descriptions of reduced complexity.

The large number of synaptic inputs to each neuron in a network allows the application of mean-field theory [10-12], which explains many dynamical phenomena, among them first order phase transitions. The transition from a quiescent to a highly active state in a bistable neuronal network is a prime example of a first order phase transition in neuronal networks [12]; the activation of attractors embedded in the connectivity of a Hopfield network is a second [13]. Combined with linear response theory, network fluctuations can be described quantitatively in binary [14][15][16] and in spiking networks [17][18][19][20][21][22]. Transitions into oscillatory states via Andronov-Hopf bifurcations are likewise within the reach of linear response theory around a mean-field solution [23][24][25][26].

Second order phase transitions in neuronal networks are more challenging, because the behavior of the system is dominated by fluctuations on all length scales, so that mean-field theory and its systematic correction by loopwise expansion break down [4]. Understanding these transitions is nonetheless highly interesting from a neuroscientific point of view, because networks near such transitions show a large susceptibility to signals. Moreover, signatures of critical states are found ubiquitously in experiments: Par...