Fog Computing (FC) and Conditional Deep Neural Networks (CDNNs) with early exits are two emerging paradigms which, up to now, have been evolving in a stand-alone fashion. However, their integration is expected to be valuable in IoT applications in which resource-poor devices must mine large volumes of sensed data in real time. Motivated by this consideration, this paper focuses on the optimized design and performance validation of Learning-in-the-Fog (LiFo), a novel virtualized technological platform for the minimum-energy and delay-constrained execution of the inference phase of CDNNs with early exits atop multi-tier networked computing infrastructures composed of multiple hierarchically organized wireless Fog nodes. The main research contributions of this paper are threefold, namely: (i) we design the main building blocks and supporting services of the LiFo architecture by explicitly accounting for the multiple constraints on the per-exit maximum inference delays of the supported CDNNs; (ii) we develop an adaptive algorithm for the minimum-energy distributed joint allocation and reconfiguration of the available computing-plus-networking resources of the LiFo platform. Interestingly, the designed algorithm is capable of self-detecting (typically unpredictable) environmental changes and quickly reacting to them by properly reconfiguring the available computing and networking resources; and, (iii) we design the main building blocks and related virtualized functionalities of an Information-Centric networking architecture, which enables the LiFo platform to perform the aggregation of spatially distributed IoT sensed data. The energy-vs.-inference-delay performance of LiFo is numerically evaluated under a number of IoT scenarios and compared against that of several state-of-the-art benchmark solutions that do not rely on Fog support.
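To make the early-exit mechanism behind CDNN inference concrete, the following minimal Python sketch shows a conditional cascade in which each backbone stage is followed by a side classifier (an early exit): the first exit whose confidence clears its threshold produces the prediction, and the cumulative processing delay is checked against that exit's delay budget. All stage delays, confidence thresholds, and per-exit budgets are illustrative assumptions, not the paper's actual CDNN, LiFo parameters, or resource-allocation algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) cascade parameters -- not the paper's actual values.
NUM_STAGES = 3
FEAT_DIM = 16
NUM_CLASSES = 4
CONF_THRESHOLDS = [0.90, 0.80, 0.0]          # last exit always fires
MAX_DELAY_PER_EXIT = [0.010, 0.025, 0.060]   # assumed per-exit delay budgets (s)
STAGE_DELAY = [0.006, 0.012, 0.030]          # assumed per-stage processing delays (s)

# Random weights stand in for a trained backbone and its exit classifiers.
stage_weights = [rng.standard_normal((FEAT_DIM, FEAT_DIM)) / np.sqrt(FEAT_DIM)
                 for _ in range(NUM_STAGES)]
exit_weights = [rng.standard_normal((FEAT_DIM, NUM_CLASSES)) / np.sqrt(FEAT_DIM)
                for _ in range(NUM_STAGES)]

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def early_exit_inference(x):
    """Stop at the first exit whose confidence clears its threshold, and
    check the cumulative delay against that exit's delay budget."""
    h = x
    elapsed = 0.0
    for k in range(NUM_STAGES):
        h = np.tanh(h @ stage_weights[k])        # backbone stage k
        elapsed += STAGE_DELAY[k]                # accumulate (assumed) delay
        probs = softmax(h @ exit_weights[k])     # side classifier at exit k
        confidence = probs.max()
        if confidence >= CONF_THRESHOLDS[k]:
            within_budget = elapsed <= MAX_DELAY_PER_EXIT[k]
            return k, int(probs.argmax()), confidence, elapsed, within_budget
    raise RuntimeError("no exit fired")  # unreachable: last threshold is 0.0

x = rng.standard_normal(FEAT_DIM)
exit_k, label, conf, delay, ok = early_exit_inference(x)
print(f"exit={exit_k} label={label} conf={conf:.2f} "
      f"delay={delay * 1e3:.1f} ms within budget: {ok}")
```

In the LiFo setting, successive stages of such a cascade would be hosted on successive Fog tiers, so the per-exit delay budgets also bound the networking delay incurred before each exit; the sketch above abstracts that away into fixed per-stage delays.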