Abstract: Compute-and-Forward is an emerging technique for dealing with interference. It allows the receiver to decode a suitably chosen integer linear combination of the transmitted messages, where the integer coefficients should be adapted to the channel fading state. Optimizing these coefficients is a shortest lattice vector (SLV) problem. In general, the SLV problem is known to be prohibitively complex. In this paper, we show that the particular SLV instance arising from the Compute-and-Forward problem can be solved in low polynomial complexity, and we give an explicit deterministic algorithm that is guaranteed to find the optimal solution.
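To make the optimization concrete, here is a brute-force baseline in the standard Nazer-Gastpar formulation (the paper's contribution is precisely that this search can be replaced by a polynomial-time algorithm): the receiver picks a nonzero integer vector a minimizing f(a) = ||a||^2 - P(h^T a)^2 / (1 + P||h||^2), which maximizes the computation rate. A sketch only; the enumeration radius is an arbitrary assumption.

```python
import itertools
import math

def cf_objective(a, h, P):
    """f(a) = ||a||^2 - P (h.a)^2 / (1 + P ||h||^2); smaller f gives a higher rate."""
    ha = sum(ai * hi for ai, hi in zip(a, h))
    h2 = sum(hi * hi for hi in h)
    a2 = sum(ai * ai for ai in a)
    return a2 - P * ha * ha / (1.0 + P * h2)

def best_coefficients(h, P, radius=3):
    """Exhaustive search over nonzero integer vectors with entries in [-radius, radius].
    Illustration only: the SLV structure lets the paper avoid this brute force."""
    best, best_f = None, float("inf")
    for a in itertools.product(range(-radius, radius + 1), repeat=len(h)):
        if all(ai == 0 for ai in a):
            continue
        f = cf_objective(a, h, P)
        if f < best_f:
            best, best_f = a, f
    # Computation rate R = max(0, 1/2 log2(1/f)) at the optimal a.
    rate = max(0.0, 0.5 * math.log2(1.0 / best_f)) if best_f > 0 else float("inf")
    return best, rate

a, R = best_coefficients(h=[1.2, 0.9], P=10.0)
```

For h close to an integer direction the search returns that direction scaled down to coprime entries, matching the intuition that the decoded combination should align with the channel.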
Abstract: Caching is an approach to smoothing out the variability of traffic over time. It has recently been proved that the local memories at the users can be exploited to reduce the peak traffic far more efficiently than previously believed. In this work we improve upon the existing results and introduce a novel caching strategy that takes advantage of simultaneous coded placement and coded delivery in order to decrease the worst-case achievable rate with 2 files and K users. We show that for any cache size 1/K < M < 1 our scheme outperforms the state of the art.

Index Terms: Coded Caching, Content Delivery, Improved Achievable Rate

The performance of content delivery services depends heavily on the habits of the users and on how well the servers model these habits and adapt their content distribution strategies to them. A basic observation about these habits is the temporal variability of demand, which in its simplest form amounts to high congestion during a particular time interval and low traffic for the rest of the day. One popular mechanism the network can adopt to cope with this issue is caching: during the low-traffic interval, typically mornings, the servers store parts of the content in the local memories of the users, where it may be helpful in the evenings, thereby reducing the peak traffic load. A notable challenge with this strategy is that the servers typically do not know which contents the users will request at peak time. Therefore, the caching of contents in local memories must be performed so that, regardless of which requests the users make, the cached contents still help reduce the traffic as much as possible.

Perhaps the simplest solution to this problem is to partially store every file in the local caches of the users and transfer the rest of the data uncoded, according to the demands made at delivery time.
In their seminal works [1], [2], Maddah-Ali and Niesen proved that, by using network coding techniques, this simple strategy can be significantly outperformed if one allows coding across different files on the server and jointly optimizes the caching and delivery strategies. Despite its impressive potential, the caching strategy introduced in [1] is known to perform poorly when the cache size is small, and in particular when the number of users is much larger than the number of files, K ≫ N. This regime arises in many real-world scenarios. A good example is when the files on the server vary widely in popularity. It has been proved [3] that a nearly optimal caching strategy is to group files with similar popularities and ignore caching opportunities among files from different groups. The cache of each user is then divided into several segments, each dedicated to one such group. If the number of groups is large, then the cache size dedicated to each group, as well as the number of files within each group, will be small. Another case arises when there are a few popular television hits, say on ...
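The gain of the coded scheme over the uncoded baseline described above can be sketched numerically using the standard expressions from [1]: uncoded placement needs worst-case rate K(1 - M/N), while the scheme of [1] achieves K(1 - M/N)/(1 + KM/N) (exact at the cache points M = tN/K, t integer, with memory sharing in between). A minimal sketch, not the improved scheme of this paper:

```python
def uncoded_rate(N, K, M):
    """Each user caches an M/N fraction of every file; the rest is sent uncoded."""
    return K * (1.0 - M / N)

def mn_rate(N, K, M):
    """Maddah-Ali-Niesen worst-case rate K(1 - M/N) / (1 + KM/N) from [1]."""
    return K * (1.0 - M / N) / (1.0 + K * M / N)

# With N = 2 files, K = 4 users and M = 1 (half the library cached),
# coded delivery cuts the worst-case load threefold relative to uncoded.
N, K, M = 2, 4, 1
print(uncoded_rate(N, K, M), mn_rate(N, K, M))
```

The multiplicative gap 1 + KM/N grows with the aggregate cache KM, which is why the global caching gain dominates for large systems, while the small-M, K ≫ N regime discussed above is where [1] leaves room for improvement.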
In this paper, we propose the coded Merkle tree (CMT), a novel hash accumulator that offers constant-cost protection against data availability attacks in blockchains, even if the majority of the network nodes are malicious. A CMT is constructed using a family of sparse erasure codes on each layer and is recovered by iteratively applying a peeling-decoding technique that enables a compact proof of a data availability attack on any layer. Our algorithm enables any node to verify the full availability of any data block generated by the system by downloading only a Θ(1)-byte block hash commitment and randomly sampling Θ(log b) bytes, where b is the size of the data block. With the help of only one connected honest node in the system, our method also allows any node to detect any tampering of the coded Merkle tree by downloading just Θ(log b) bytes. We provide a modular library for CMT in Rust and Python and demonstrate its efficacy inside the Parity Bitcoin client.
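The sampling primitive that CMT builds on can be illustrated with a plain (uncoded) Merkle commitment: a light node holds only the root, samples a few leaves together with their authentication paths, and rejects if any path fails to verify. This sketch uses hypothetical helper names and omits the erasure coding, which is CMT's actual contribution (it guarantees that hiding even a small fraction of the data is caught with O(1) samples).

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_levels(leaves):
    """Build every level of a binary Merkle tree (leaf count assumed a power of two)."""
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def prove(levels, idx):
    """Authentication path for leaf idx: the sibling hash at every level."""
    path = []
    for level in levels[:-1]:
        path.append(level[idx ^ 1])
        idx //= 2
    return path

def verify(root, leaf, idx, path):
    """Recompute the root from a sampled leaf and its path."""
    node = h(leaf)
    for sibling in path:
        node = h(node + sibling) if idx % 2 == 0 else h(sibling + node)
        idx //= 2
    return node == root

data = [bytes([i]) * 32 for i in range(8)]
levels = merkle_levels(data)
root = levels[-1][0]
assert verify(root, data[5], 5, prove(levels, 5))
```

Each path has Θ(log b) hashes, matching the sampling cost quoted in the abstract; CMT additionally commits to coded layers so that a peeling decoder can reconstruct, and compactly disprove, any withheld portion.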