2022 17th Canadian Workshop on Information Theory (CWIT)
DOI: 10.1109/cwit55308.2022.9817669

Rényi Fair Information Bottleneck for Image Classification

Cited by 4 publications (3 citation statements)
References 16 publications
“…for α > 0, α ≠ 1, and distributions P and Q with common support X. Using the Rényi divergence gives an extra degree of freedom and allows more control over the compression term I(X; Z). As the Rényi divergence is non-decreasing in α, a higher α more strongly forces the distribution P_{Z|X} closer to Q_Z, resulting in more compression.…”
Section: Variational Bounds
mentioning, confidence: 99%
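(The quotation above begins mid-sentence; the display it elides is the order-α Rényi divergence. For the discrete case the standard definition, given here only as a reminder and not as the citing paper's exact expression, is

D_α(P||Q) = (1/(α−1)) log Σ_{x∈X} P(x)^α Q(x)^(1−α),  for α > 0, α ≠ 1,

which recovers the Kullback–Leibler divergence in the limit α → 1 and is non-decreasing in α, the property invoked in the excerpt.)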
“…To compute the bounds in practice we use the reparameterization trick [56]. Modeling P_{Z|X} as a density, we let P_{Z|X} dZ = P_E dE, where E is a random variable and Z = f(X, E) is a deterministic function, allowing us to backpropagate gradients.…”
(Footnote 1 of the excerpt: If P and Q are probability density functions, then D_α(P||Q)…)
Section: Computing the Bounds
mentioning, confidence: 99%
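(As a concrete illustration of the reparameterization step described in this excerpt, here is a minimal PyTorch sketch under the common assumption of a Gaussian P_{Z|X}; the class name, layer sizes, and variable names are illustrative, not the paper's implementation.)

import torch
import torch.nn as nn

class GaussianEncoder(nn.Module):
    """Gaussian P_{Z|X}: sample Z = mu(X) + sigma(X) * E with E ~ N(0, I)."""
    def __init__(self, x_dim, z_dim):
        super().__init__()
        self.mu = nn.Linear(x_dim, z_dim)         # mean of P_{Z|X}
        self.log_sigma = nn.Linear(x_dim, z_dim)  # log standard deviation of P_{Z|X}

    def forward(self, x):
        mu = self.mu(x)
        sigma = self.log_sigma(x).exp()
        eps = torch.randn_like(mu)   # E: noise drawn independently of the learned parameters
        z = mu + sigma * eps         # Z = f(X, E) is deterministic given (X, E)
        return z, mu, sigma

# Example: gradients of any loss computed from z flow back into the encoder weights.
enc = GaussianEncoder(x_dim=784, z_dim=32)
z, mu, sigma = enc(torch.randn(8, 784))

Because E is sampled outside the learned parameters, gradients reach the encoder weights through mu and sigma rather than through the sampling operation itself, which is exactly what the trick is for.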
“…In supervised learning, the information bottleneck maximally compresses the redundant information in the input while preserving the mutual information about the desired output. Based on this concept, researchers have proposed a range of machine learning algorithms that have been extensively used in various fields, including image, speech, and natural language processing [2][3][4][5].…”
Section: Introduction
mentioning, confidence: 99%
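(For reference, the trade-off summarized in this excerpt is usually written as the information bottleneck Lagrangian; this is the standard formulation, not anything specific to the citing paper:

maximize over encoders P_{Z|X}:  I(Z; Y) − β I(X; Z),

where Z is the learned representation, Y the desired output, and β > 0 sets how strongly the compression term I(X; Z) is penalized relative to the prediction term I(Z; Y).)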