2006 International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'06)
DOI: 10.1109/cimca.2006.232

Web Document Clustering with Multi-view Information Bottleneck

Cited by 5 publications (4 citation statements)
References 10 publications
“…The general framework includes the IB and PF problems as special cases. We proposed solving the general problem with splitting methods capable of handling large-scale problems, which connects to recent trends in multi-view learning [46], [47] and multi-source privacy problems [47], [48], [49], [50], [51].…”
Section: Discussion (mentioning)
confidence: 99%
“…Following steps (44), (46), and (48) in Appendix G, we start from (49). For simplicity, define $L_c^k := L_c(p^k, q^k, \nu^k)$, the function value evaluated with the variables at step $k$:…”
Section: Appendix I, Proof of Lemma 10 (mentioning)
confidence: 99%
“…However, combining all the observations in one giant view will result in an exponential increase in complexity (the curse of dimensionality). A basic assumption in the multi-view learning literature is conditional independence [19,10], where the observations of all views $\{X^{(i)}\}_{i=1}^{V}$ are independent given the target variable $Y$. That is, $p(\{x\} \mid y) = \prod_{i=1}^{V} p(x^{(i)} \mid y)$.…”
Section: Multiview Information Bottleneck (mentioning)
confidence: 99%
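The conditional-independence factorization quoted above is the key modelling assumption. The sketch below is illustrative only, assuming two made-up discrete views and a small number of clusters (none of these names or sizes come from the cited papers); it shows how the joint view likelihood factors into a product of per-view conditionals.

```python
# Illustrative sketch (not from the cited papers): the multi-view conditional
# independence assumption p({x} | y) = prod_i p(x^(i) | y), with two discrete
# views and made-up conditional tables.
import numpy as np

rng = np.random.default_rng(0)

n_clusters = 3                 # values of the target variable Y (hypothetical)
view_cardinalities = [4, 5]    # alphabet sizes of X^(1), X^(2) (hypothetical)

# One conditional table p(x^(i) | y) per view, rows indexed by y.
view_conditionals = []
for card in view_cardinalities:
    table = rng.random((n_clusters, card))
    table /= table.sum(axis=1, keepdims=True)  # normalize each row to a distribution
    view_conditionals.append(table)

def joint_view_likelihood(xs, y):
    """p({x} | y) under conditional independence: product of per-view terms."""
    p = 1.0
    for table, x in zip(view_conditionals, xs):
        p *= table[y, x]
    return p

# Example: likelihood of observing x^(1)=2, x^(2)=0 given cluster y=1.
print(joint_view_likelihood((2, 0), y=1))
```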
“…This corresponds only to the compression part of the IB method; their goal was to maximize the margin of a linear classifier, and the approach was limited to binary tasks. In [10], each view was treated as a single-view IB problem, followed by a post-processing stage that maximizes the mutual information between the view-specific representations to encourage agreement among the clustering hypotheses. More recently, [11] proposed a bottom-up, heuristic multi-view IB objective which maximizes view-specific information, view-shared information, and inter-cluster correlation while simultaneously compressing the observations.…”
Section: Introduction (mentioning)
confidence: 99%
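The post-processing stage described for [10] encourages agreement by maximizing mutual information between view-specific representations. The following is a hedged sketch, not the authors' code: it only computes the mutual information between two hypothetical cluster labelings of the same documents, i.e., the agreement score such a stage would seek to increase; the label sequences are made up.

```python
# Hedged sketch (not the cited authors' method): mutual information between
# two view-specific clusterings, used here as a measure of their agreement.
import numpy as np

def mutual_information(labels_a, labels_b):
    """I(A; B) in nats from two discrete label sequences of equal length."""
    labels_a = np.asarray(labels_a)
    labels_b = np.asarray(labels_b)
    n = len(labels_a)
    # Empirical joint distribution over (cluster in view 1, cluster in view 2).
    joint = np.zeros((labels_a.max() + 1, labels_b.max() + 1))
    for a, b in zip(labels_a, labels_b):
        joint[a, b] += 1.0 / n
    pa = joint.sum(axis=1, keepdims=True)   # marginal over view-1 clusters
    pb = joint.sum(axis=0, keepdims=True)   # marginal over view-2 clusters
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pa @ pb)[nz])).sum())

# Two hypothetical view-specific cluster assignments of the same documents.
view1_clusters = [0, 0, 1, 1, 2, 2, 0, 1]
view2_clusters = [1, 1, 0, 0, 2, 2, 1, 0]
print(mutual_information(view1_clusters, view2_clusters))
```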