An emerging trend in next-generation communication systems is to equip network edges with additional capabilities, such as storage resources in the form of caches, to reduce file delivery latency. To investigate this aspect, we study the fundamental limits of a cache-aided broadcast-relay wireless network consisting of one central base station, M cache-equipped transceivers and K receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file request pattern at high signal-to-noise ratios (SNR), normalized with respect to a reference interference-free system with unlimited transceiver cache capabilities. The objective is to design schemes for cache placement and file delivery that minimize the NDT. To this end, we establish a novel converse (for arbitrary M and K) and two types of achievability schemes, both applicable to time-variant and time-invariant channels. The first is a general one-shot scheme for any M and K that synergistically exploits both multicasting (coded caching) and distributed zero-forcing opportunities. Apart from the obvious advantage of low signaling complexity, we show that the proposed one-shot scheme (i) attains gains attributed to both individual and collective transceiver caches and (ii) is NDT-optimal for various parameter settings, particularly at higher cache sizes. The second scheme, in contrast, designs beamformers that facilitate both subspace interference alignment and zero-forcing at lower cache sizes. Exploiting both schemes, we characterize the optimal tradeoff between cache storage and latency for various special cases of M and K satisfying K + M ≤ 4. The tradeoff illustrates that the NDT, rather than the commonly used sum degrees-of-freedom (DoF), is the preferred metric for capturing the latency of a system. In fact, our optimal tradeoff refutes the popular belief that increasing cache sizes translates into increasing the achievable sum DoF. As such, we identify and discuss cases where increasing cache sizes decreases both the delivery time and the achievable DoF.
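For concreteness, the NDT described above is commonly defined in this line of work along the following lines; the symbols used here (file size L in bits, transmit power P, fractional cache size \mu, demand vector \mathbf{d} and worst-case delivery time T) are notational assumptions for illustration, since the paper's exact definitions appear later:

\begin{equation*}
\delta(\mu) \;=\; \lim_{P\to\infty}\;\lim_{L\to\infty}\;\sup\;\frac{\max_{\mathbf{d}}\;\mathbb{E}\big[T(\mathbf{d},\mu)\big]}{L/\log P},
\end{equation*}

where L/\log P is the time a reference interference-free system with unlimited transceiver caches needs to deliver a single file of L bits at high SNR, so that \delta(\mu) \geq 1 and smaller values correspond to lower latency. Because the numerator measures the actual on-air delivery time rather than a rate prelog alone, \delta(\mu) can decrease with growing cache size even when the achievable sum DoF of the delivery phase does not increase, which is the distinction the abstract emphasizes.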
Index Terms: Caching, interference alignment, degrees-of-freedom, latency, delivery time.

... with an arbitrary number of edge nodes and users. With these bounds, the optimality of the schemes presented in [21] for certain regimes of cache sizes was shown under uncoded prefetching of the cached content. These concepts have recently been applied to Fog radio access networks (F-RANs), which consist of a centralized cloud server, cache-assisted edge nodes and mobile users. The NDT of F-RANs was first fully characterized for two edge nodes and two mobile users [23]. Later, for the setting of an arbitrary number of edge nodes and receivers, a characterization within a constant multiplicative factor of 2 was established in [24]. The effect of channel strength and fading on the delivery time of partially connected F-RANs has been investigated in [25], [26], [27] on the basis of the binary fading model [28] and the linear deterministic ...