2018
DOI: 10.1109/tnnls.2017.2664100
A Confident Information First Principle for Parameter Reduction and Model Selection of Boltzmann Machines

Abstract: Typical dimensionality reduction (DR) methods are data-oriented, focusing on directly reducing the number of random variables (or features) while retaining the maximal variations in the high-dimensional data. Targeting unsupervised situations, this paper addresses the problem from a novel perspective and considers model-oriented dimensionality reduction in the parameter spaces of binary multivariate distributions. Specifically, we propose a general parameter reduction criterion, called the Confident-Information-First (CIF) principle…


Cited by 6 publications (7 citation statements) | References 39 publications
“…Two coordinate parameters ξ i and ξ j are called orthogonal if and only if their Fisher information vanishes, i.e., g ij = 0, meaning that their influences on the log likelihood function are uncorrelated. Based on G η and G θ , we can calculate the Fisher information matrix G ζ for the mixed coordinates [ζ] l [13]:…”
Section: The Fisher Information Matrix
confidence: 99%
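The orthogonality condition in the quote above (g_ij = 0 for uncorrelated coordinate parameters) can be checked numerically. The following is a minimal sketch, assuming a 2-unit fully visible Boltzmann machine with natural parameters θ = (θ1, θ2, θ12); for an exponential family, the Fisher information matrix is the covariance of the sufficient statistics, computed here by exact enumeration of the four binary states. The function name `fisher_information` is this sketch's own, not from the cited papers.

```python
import itertools
import numpy as np

def fisher_information(theta1, theta2, theta12):
    """Exact Fisher information matrix for the 2-unit Boltzmann machine
    p(x) ∝ exp(theta1*x1 + theta2*x2 + theta12*x1*x2), obtained by
    enumerating all four binary states. For this exponential family,
    g_ij = Cov(T_i, T_j), where T = (x1, x2, x1*x2)."""
    states = np.array(list(itertools.product([0, 1], repeat=2)))
    # Sufficient statistics: x1, x2, and the pairwise interaction x1*x2.
    T = np.column_stack([states[:, 0], states[:, 1],
                         states[:, 0] * states[:, 1]])
    logits = T @ np.array([theta1, theta2, theta12])
    p = np.exp(logits - logits.max())   # stable normalization
    p /= p.sum()
    mean = p @ T                        # E[T] under the model
    centered = T - mean
    return centered.T @ (p[:, None] * centered)

G = fisher_information(0.3, -0.5, 0.8)
```

In these natural (θ) coordinates the off-diagonal entries of G are generally nonzero; the mixed coordinates [ζ]_l discussed in the quoted passage are constructed precisely so that certain coordinate pairs become orthogonal.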
“…Specifically, we would need an efficient approach to recognize how much a unit or connection is useful for revealing the underlying structure of the current data. Recently, a general parametric reduction criterion, named the Confident-Information-First principle (CIF) [13], has been proposed, in the theoretical framework of Information Geometry (IG) [14]. From a model selection perspective, they proved that both the fully Visible Boltzmann Machine (VBM) and the Boltzmann Machine (BM) with hidden units can be derived from the general multivariate binary distribution using the CIF principle.…”
Section: Introduction
confidence: 99%
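The CIF idea sketched in the quote above — keeping the parameters that carry confident information and discarding the rest — can be illustrated with a toy pruning step. This is a hedged sketch, not the paper's actual information-geometric derivation: it simply scores each parameter by its diagonal Fisher information and zeroes out the least confident ones. The function name `cif_reduce` and the keep-top-k scheme are assumptions of this sketch.

```python
import numpy as np

def cif_reduce(theta, fisher_diag, keep):
    """Toy illustration of confident-information-first pruning: retain
    the `keep` parameters with the largest (diagonal) Fisher
    information -- the 'confident' ones -- and zero out the rest."""
    order = np.argsort(fisher_diag)[::-1]      # most confident first
    mask = np.zeros(theta.shape, dtype=bool)
    mask[order[:keep]] = True
    return np.where(mask, theta, 0.0), mask

theta = np.array([0.9, -0.1, 0.05, -1.2])
fisher_diag = np.array([2.0, 0.1, 0.05, 1.5])
reduced, mask = cif_reduce(theta, fisher_diag, keep=2)
# parameters 0 and 3 survive; 1 and 2 are pruned
```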
“…Thus, we can give strong term associations more influence. Formally, we define a weight when calculating the query likelihood for a pair of nodes (Eq. 6). With this query likelihood, we combine the original query terms and extended query terms with a proper, automatically learned weight. This extended query considers not only the single words, but also the associations between query terms.…”
Section: Query Expansion With Relevance BM
confidence: 99%
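The weighted combination described in the quote can be sketched as a query-likelihood scorer that down-weights expansion terms. All names here (`weighted_query_log_likelihood`, `weight`, `mu`) are illustrative assumptions; Eq. (6) of the cited paper is not reproduced, and Dirichlet smoothing is swapped in purely to make the sketch runnable.

```python
import math

def weighted_query_log_likelihood(doc_tf, doc_len, query_terms,
                                  expansion_terms, weight,
                                  mu=2000, collection_prob=1e-4):
    """Score a document by combining original query terms with
    expansion terms scaled by a learned weight in [0, 1], using
    Dirichlet-smoothed per-term probabilities."""
    def term_logprob(t):
        # Dirichlet smoothing with a flat collection probability.
        return math.log((doc_tf.get(t, 0) + mu * collection_prob)
                        / (doc_len + mu))
    score = sum(term_logprob(t) for t in query_terms)
    score += weight * sum(term_logprob(t) for t in expansion_terms)
    return score
```

With `weight=0` the scorer reduces to plain query likelihood over the original terms; raising the weight lets the expansion terms (e.g. strongly associated terms suggested by the relevance BM) influence the ranking.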
“…Recently, a Confident-Information-First (CIF) principle [6] was proposed as a parameter reduction criterion. The proposed CIF is fundamentally different from traditional feature reduction methods.…”
Section: Improve the Efficiency of RDBM in Query Expansion
confidence: 99%