We consider a basic caching system, in which a single server with a database of N files (e.g., movies) is connected to a set of K users through a shared bottleneck link. Each user has a local cache memory with a size of M files. The system operates in two phases: a placement phase, in which each cache memory is populated up to its size from the database, and a subsequent delivery phase, in which each user requests a file from the database and the server is responsible for delivering the requested contents. The objective is to design the two phases so as to minimize the load (peak or average) of the bottleneck link. We characterize the rate-memory tradeoff of the above caching system within a factor of 2.00884 for both the peak rate and the average rate (under uniform file popularity), whereas the best characterizations previously proved in the literature give factors of 4 and 4.7, respectively. Moreover, in the practically important case where the number of files (N) is large, we exactly characterize the tradeoff for systems with no more than 5 users, and characterize the tradeoff within a factor of 2 otherwise. We establish these results by developing novel information-theoretic outer bounds for the caching problem, which improve the state of the art and give tight characterizations in various cases.
I. INTRODUCTION

Caching is a common strategy to mitigate heavy peak-time communication loads in a distributed network by duplicating parts of the content in memories distributed across the network during off-peak times. In other words, caching allows us to trade distributed memory in the network for a reduction in communication load. Characterizing this fundamental rate-memory tradeoff is of great practical interest and has been a research subject for several decades. For single-cache networks, the rate-memory tradeoff was characterized for various scenarios in the 1980s [1]. However, those techniques were found insufficient to tackle the multiple-cache case. There has been a surge of recent results in information theory that aim at formalizing and characterizing this rate-memory tradeoff in cache networks [2]-[13]. In particular, the peak rate vs. memory tradeoff was formulated and characterized within a factor of 12 in a basic cache network with a shared bottleneck link [2]. This result has been extended to many scenarios, including decentralized caching [3]. Essentially, many of these extensions share similar ideas in terms of the achievability and converse bounds. Therefore, if we can improve the results for the basic bottleneck caching network, the same ideas can be used to improve the results in other cases as well.

In the literature, various approaches have been proposed for improving the bounds on the rate-memory tradeoff for the bottleneck network. Several caching schemes have been proposed in [14]-[21], and converse bounds have been introduced in [9], [22]-[26]. For the case where the prefetching is uncoded, the exact rate-memory tradeoff for both peak and average rate (under uniform file popularity) and for both centra...
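To make the tradeoff concrete, the following minimal sketch computes the corner points of the peak rate achieved by the coded caching scheme of [2] for the basic bottleneck network. It assumes the standard form of that scheme: at cache sizes M = tN/K (t = 0, 1, ..., K), the achieved peak rate is R = (K - t)/(t + 1), and intermediate memory sizes are handled by memory sharing. This illustrates the baseline being improved upon, not the tight characterization developed in this paper.

```python
def mn_corner_points(N, K):
    """Corner points (M, R) of the achievable peak rate-memory tradeoff
    of the coded caching scheme in [2] for N files and K users.

    At M = t*N/K the scheme achieves peak rate R = (K - t)/(t + 1);
    points in between are reached by memory sharing (convex combination).
    """
    return [(t * N / K, (K - t) / (t + 1)) for t in range(K + 1)]

# Example: N = 10 files, K = 5 users.
points = mn_corner_points(N=10, K=5)
# With no cache (M = 0) the server must serve all K requests (rate K);
# with full caches (M = N) no delivery is needed (rate 0).
```

Note that the rate decreases much faster than the "local caching gain" of K(1 - M/N) alone, due to the multicasting opportunities created by the coded placement and delivery.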