We study certain combinatorial aspects of list-decoding, motivated by the exponential gap between the known upper bound (of O(1/γ)) and lower bound (of Ω_p(log(1/γ))) for the list-size needed to list decode up to error fraction p with rate γ away from capacity, i.e., rate 1 − h(p) − γ (here p ∈ (0, 1/2) and γ > 0). Our main result is the following:

• We prove that in any binary code C ⊆ {0, 1}^n of rate 1 − h(p) − γ, there must exist a set L ⊂ C of Ω_p(1/√γ) codewords such that the average distance of the points in L from their centroid is at most pn. In other words, there must exist Ω_p(1/√γ) codewords with low "average radius." The standard notion of list-decoding corresponds to working with the maximum distance of a collection of codewords from a center instead of the average distance. The average-radius form is in itself quite natural; for instance, the classical Johnson bound in fact implies average-radius list-decodability.

The remaining results concern the standard notion of list-decoding, and help clarify the current state of affairs regarding combinatorial bounds for list-decoding:

• We give a short, simple proof, over all fixed alphabets, of the above-mentioned Ω_p(log(1/γ)) lower bound. Earlier, this bound followed from a complicated, more general result of Blinovsky.

• We show that one cannot improve the Ω_p(log(1/γ)) lower bound via techniques based on identifying the zero-rate regime for list-decoding of constant-weight codes (this is a typical approach for negative results in coding theory, including the Ω_p(log(1/γ)) list-size lower bound). On a positive note, our Ω_p(1/√γ) lower bound for average-radius list-decoding circumvents this barrier.

• We exhibit a "reverse connection" between the existence of constant-weight and general codes for list-decoding, showing that the best possible list-size, as a function of the gap γ of the rate to the capacity limit, is the same up to constant factors for both constant-weight codes (with weight bounded away from p) and general codes.

• We give simple second-moment-based proofs that w.h.p. a list-size of Ω_p(1/γ) is needed for list-decoding random codes from errors as well as erasures. For random linear codes, the corresponding list-size bounds are Ω_p(1/γ) for errors and exp(Ω_p(1/γ)) for erasures.
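To make the distinction between the two radius notions concrete, the following is a minimal sketch (not from the paper; the function names and the toy code are illustrative assumptions) contrasting the maximum-distance radius used in standard list-decoding with the average radius used in the first result above.

```python
def hamming(x, y):
    """Hamming distance between two equal-length binary tuples."""
    return sum(a != b for a, b in zip(x, y))

def max_radius(words, center):
    """Radius in the standard list-decoding sense: distance of the
    farthest codeword in the collection from the center."""
    return max(hamming(w, center) for w in words)

def avg_radius(words, center):
    """Average radius: mean distance of the codewords from the center,
    the quantity bounded by pn in the average-radius result."""
    return sum(hamming(w, center) for w in words) / len(words)

# Toy collection L of three codewords in {0,1}^4 and a candidate center.
L = [(0, 0, 0, 0), (1, 1, 0, 0), (0, 0, 1, 1)]
center = (0, 0, 0, 0)

print(max_radius(L, center))  # 2
print(avg_radius(L, center))  # (0 + 2 + 2) / 3 = 4/3
```

The average radius is never larger than the maximum radius, which is why average-radius list-decodability (as in the Johnson bound remark above) is the stronger property.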
We present a construction of subspace codes along with an efficient algorithm for list decoding from both insertions and deletions, handling an information-theoretically maximum fraction of these with polynomially small rate. Our construction is based on a variant of the folded Reed-Solomon codes in the world of linearized polynomials, and the algorithm is inspired by the recent linear-algebraic approach to list decoding [4]. Ours is the first list decoding algorithm for subspace codes that can handle deletions; even one deletion can totally distort the structure of the basis of a subspace and is thus challenging to handle. When there are only insertions, we also present results for list decoding subspace codes that are the linearized analog of Reed-Solomon codes (proposed in [15, 8], and closely related to the Gabidulin codes for the rank metric), obtaining some improvements over similar results in [10].
Let G be a (p, q) graph. Let f be a map from V(G) to {1, 2, ..., p}. For each edge uv, assign the label |f(u) − f(v)|. The map f is called a difference cordial labeling if f is one-to-one and |e_f(0) − e_f(1)| ≤ 1, where e_f(1) and e_f(0) denote the number of edges labeled with 1 and not labeled with 1, respectively. A graph which admits a difference cordial labeling is called a difference cordial graph. In this paper, we investigate the difference cordial labeling behavior of the triangular snake, quadrilateral snake, double triangular snake, double quadrilateral snake, and alternate snakes. 2010 AMS Mathematics Subject Classification: 05C78.
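The definition above is easy to check mechanically. The following is a small sketch (the function name, graph encoding, and example labelings are assumptions for illustration, not from the paper) that verifies whether a given map f is a difference cordial labeling.

```python
def is_difference_cordial(vertices, edges, f):
    """Check whether f is a difference cordial labeling.

    vertices: list of vertex names; edges: list of (u, v) pairs;
    f: dict mapping each vertex to a label in {1, ..., p}.
    """
    p = len(vertices)
    # f must be a one-to-one map onto {1, ..., p}.
    if sorted(f[v] for v in vertices) != list(range(1, p + 1)):
        return False
    labels = [abs(f[u] - f[v]) for u, v in edges]
    e1 = sum(1 for lab in labels if lab == 1)  # e_f(1): edges labeled 1
    e0 = len(labels) - e1                      # e_f(0): edges not labeled 1
    return abs(e0 - e1) <= 1

# Path on 4 vertices: a - b - c - d.
verts = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d")]

# Edge labels |1-3|, |3-2|, |2-4| = 2, 1, 2, so e_f(1) = 1, e_f(0) = 2.
print(is_difference_cordial(verts, edges, {"a": 1, "b": 3, "c": 2, "d": 4}))  # True

# Consecutive labels make every edge label 1: e_f(1) = 3, e_f(0) = 0.
print(is_difference_cordial(verts, edges, {"a": 1, "b": 2, "c": 3, "d": 4}))  # False
```

The second example shows that a one-to-one labeling alone is not enough: the counts of edges labeled 1 and not labeled 1 must also be balanced to within one.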