2008 42nd Asilomar Conference on Signals, Systems and Computers
DOI: 10.1109/acssc.2008.5074810

On the entropy region of discrete and continuous random variables and network information theory

Abstract: We show that a large class of network information theory problems can be cast as convex optimization over the convex space of entropy vectors. A vector in (2^n − 1)-dimensional space is called entropic if each of its entries can be regarded as the joint entropy of a particular subset of n random variables (note that any set of size n has 2^n − 1 nonempty subsets). While an explicit characterization of the space of entropy vectors is well-known for n = 2, 3 random variables, it is unknown for n > 3 (which…
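The entropy-vector construction described in the abstract can be sketched directly: for n jointly distributed discrete random variables, compute the joint entropy of every nonempty subset of them, yielding a vector with 2^n − 1 entries. The following is a minimal illustration of that definition (not code from the paper; the function name and pmf representation are chosen here for clarity):

```python
from itertools import combinations
from math import log2

def entropy_vector(joint_pmf, n):
    """Map a joint pmf over n-tuples of outcomes to the entropy vector:
    one joint-entropy entry (in bits) per nonempty subset of {0, ..., n-1}."""
    vec = {}
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            # Marginalize the joint pmf onto the coordinates in `subset`.
            marginal = {}
            for outcome, p in joint_pmf.items():
                key = tuple(outcome[i] for i in subset)
                marginal[key] = marginal.get(key, 0.0) + p
            # Joint entropy H(X_S) of this subset of variables.
            vec[subset] = -sum(p * log2(p) for p in marginal.values() if p > 0)
    return vec

# Example: two independent fair bits.
pmf = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}
v = entropy_vector(pmf, 2)
# 2^2 - 1 = 3 entries: H(X1) = 1, H(X2) = 1, H(X1, X2) = 2 bits.
```

For n = 2 the resulting three entries always satisfy the known characterization of entropic vectors (submodularity and monotonicity); the paper's point is that no such explicit characterization is known once n > 3.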


Cited by 1 publication (1 citation statement); references 22 publications (26 reference statements).
“…One of the most commonly used probability-distribution evaluation functions is entropy [6]. Information-theoretic entropy [10] is defined as [1] …”

Section: Optimised Set Reduction
confidence: 99%
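The definition cut off in the snippet above is, presumably, Shannon's entropy of a discrete random variable X with probability mass function p (an assumption, since the quoted statement itself is truncated):

```latex
H(X) = -\sum_{x} p(x) \log_2 p(x)
```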