Prior to the advent of the cloud, storage and processing services were provided by specialized hardware; this approach, however, introduced a number of challenges in terms of scalability, energy efficiency, and cost. Cloud computing then emerged, addressing to some extent the issue of massive storage and computation through centralized data centers accessed via the core network. The cloud has remained with us thus far, but it has introduced further challenges, among which latency and energy efficiency are paramount. With the increase in the intelligence of embedded devices came the concept of the fog: the availability of massive numbers of storage and computational devices at the edge of the network, some owned and deployed by the end-users themselves, but most by service operators. Cloud services are thereby pushed out from the core towards the edge of the network, reducing latency. Fog nodes are massively distributed across the network; some benefit from wired connections, while others are connected via wireless links. The question of where to allocate services therefore remains an important task and requires extensive attention. This chapter introduces and evaluates cloud-fog architectures in 6G networks, paying special attention to latency, energy efficiency, scalability, and the trade-offs between distributed and centralized processing resources.
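To make the service-allocation question concrete, the following is a minimal sketch of one simple heuristic: greedily place each service on the lowest-latency node (fog first, cloud as fallback) that still has spare processing capacity. All node names, capacities, and latency figures here are illustrative assumptions, not measurements or the allocation method evaluated in this chapter.

```python
# Toy illustration of the fog/cloud service-allocation trade-off.
# Capacities (CPU units) and access latencies (ms) are invented for
# illustration; a real placement would use measured network and compute data.

nodes = {
    "fog-1": {"capacity": 4,   "latency_ms": 2},   # wired edge node
    "fog-2": {"capacity": 2,   "latency_ms": 3},   # wireless edge node
    "cloud": {"capacity": 100, "latency_ms": 40},  # central data center
}

# (service name, processing demand)
services = [("video-analytics", 3), ("ar-rendering", 2), ("backup", 5)]

def greedy_place(services, nodes):
    """Place each service on the lowest-latency node with spare capacity."""
    placement = {}
    free = {name: spec["capacity"] for name, spec in nodes.items()}
    for service, demand in services:
        # Try candidates in order of access latency: fog first, then cloud.
        for node in sorted(nodes, key=lambda n: nodes[n]["latency_ms"]):
            if free[node] >= demand:
                free[node] -= demand
                placement[service] = node
                break
    return placement

print(greedy_place(services, nodes))
# → {'video-analytics': 'fog-1', 'ar-rendering': 'fog-2', 'backup': 'cloud'}
```

The sketch captures the core tension the chapter examines: latency-sensitive services gravitate to scarce fog capacity at the edge, while bulk workloads fall back to the abundant but distant cloud.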