Most basic models for the power (or, equivalently, the neutron population) in a nuclear core treat the power as a function of time (with an energetic and spatial distribution) and lead to a deterministic description of the reactor kinetics. While these models are in common use and are undoubtedly the main analytic tool for understanding reactor kinetics, the power in a reactor core is inherently stochastic and should be treated as a stochastic process in time. The stochastic fluctuations of the power around the mean field (which is given by the deterministic models) are referred to as "reactor noise", and understanding them is a basic topic in nuclear science and engineering.

Traditionally, most models for reactor noise consider a sub-critical core that reaches a steady state after exposure to an external source. The focus on a sub-critical setting is driven by two main factors. First, from a practical point of view, measuring the power fluctuations in a sub-critical core (in so-called "noise experiments") has proven to be a very efficient tool for estimating the static and kinetic parameters of the core. Second, in a critical setting the current models become statistically unstable: although the mean field equation has a stationary solution, the variance tends to infinity linearly in time. This instability of the stochastic models is a known problem, and it has been conjectured in the past that this (somewhat strange) growth of the variance, which is not observed in physical systems, can be restrained by power feedback. However, this conjecture was never proven. The purpose of the present study is to present a stochastic analysis of the point reactor kinetics model, proving that once the reactivity has a negative feedback, it not only forces a specific steady-state solution (in terms of the mean field equation) but also prevents the variance from "exploding": the variance remains bounded in time.
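To make the contrast between the two regimes concrete, the following is a minimal Monte Carlo sketch, not the model analyzed in this study, of a critical birth-death population, a standard simplified stand-in for the point reactor (no delayed neutrons, no external source). All parameters (LAMBDA, N0, DT, K_FB) are illustrative assumptions. Without feedback, the sample variance grows roughly linearly in time, approximately 2*LAMBDA*N0*t for this toy model; with a hypothetical negative reactivity feedback, it saturates.

```python
import numpy as np

rng = np.random.default_rng(0)

LAMBDA = 1.0   # per-neutron birth/death rate (illustrative units)
N0 = 1000      # equilibrium population (illustrative)
DT = 0.01      # tau-leap time step
STEPS = 5000   # total time T = STEPS * DT = 50
PATHS = 2000   # Monte Carlo realizations
K_FB = 1e-3    # hypothetical negative-feedback coefficient

def simulate(feedback: bool) -> np.ndarray:
    """Tau-leap simulation of a critical birth-death population.

    Without feedback, birth and death rates are equal (critical core),
    so the mean stays near N0 while the variance grows ~ 2*LAMBDA*N0*t.
    With negative feedback, the birth rate is reduced when N > N0 and
    increased when N < N0, and the variance saturates.
    """
    n = np.full(PATHS, N0, dtype=np.int64)
    var = np.empty(STEPS)
    for s in range(STEPS):
        # Reactivity: zero at criticality, or linear negative feedback.
        rho = -K_FB * (n - N0) if feedback else 0.0
        birth_rate = np.maximum(LAMBDA * (1.0 + rho), 0.0)
        births = rng.poisson(birth_rate * n * DT)
        deaths = rng.poisson(LAMBDA * n * DT)
        n = np.maximum(n + births - deaths, 0)
        var[s] = n.var()
    return var

t_end = STEPS * DT
v_free = simulate(feedback=False)
v_fb = simulate(feedback=True)
print(f"no feedback:   Var(T)/T = {v_free[-1] / t_end:.0f}  (toy theory ~ {2 * LAMBDA * N0:.0f})")
print(f"with feedback: Var(T)   = {v_fb[-1]:.0f}  (bounded)")
```

For these toy parameters, the no-feedback run should give Var(T)/T near 2*LAMBDA*N0, while a linearization of the feedback drift suggests the feedback run settles at a stationary variance of order 1/K_FB, i.e., it remains bounded in time, which is the qualitative behavior proven for the full point reactor kinetics model in this study.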