1998
DOI: 10.1103/physreve.58.2302
Mean-field theory of Boltzmann machine learning

Abstract: I present a mean-field theory for Boltzmann machine learning, derived by employing the Thouless-Anderson-Palmer (TAP) free-energy formalism to its full extent. Using the Plefka expansion, an extended theory that takes higher-order corrections to the mean-field free energy into consideration is presented, from which mean-field approximations of general orders, along with the linear response correction, are derived by truncating the Plefka expansion up to the desired order. A theoretical foundation for an effective tric…
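The TAP equations the abstract refers to (the second-order truncation of the Plefka expansion, i.e., naive mean field plus the Onsager reaction term) can be sketched as below. This is a minimal illustrative solver, not the paper's own code; the damping factor, iteration count, and example couplings are assumptions made for the sketch.

```python
import math

def tap_magnetizations(J, h, n_iter=200, damping=0.5):
    """Solve the TAP (second-order Plefka) mean-field equations
        m_i = tanh( h_i + sum_j J_ij m_j - m_i sum_j J_ij^2 (1 - m_j^2) )
    by damped fixed-point iteration.  J is a symmetric coupling matrix
    with zero diagonal, h a field vector; plain nested lists are used
    to keep the sketch dependency-free."""
    n = len(h)
    m = [0.0] * n
    for _ in range(n_iter):
        for i in range(n):
            cavity = sum(J[i][j] * m[j] for j in range(n))
            # Onsager reaction term: the second-order Plefka correction
            onsager = m[i] * sum(J[i][j] ** 2 * (1.0 - m[j] ** 2)
                                 for j in range(n))
            new_mi = math.tanh(h[i] + cavity - onsager)
            m[i] = (1.0 - damping) * m[i] + damping * new_mi
    return m

# Illustrative two-spin example (couplings and fields are made up):
m = tap_magnetizations([[0.0, 0.5], [0.5, 0.0]], [0.1, -0.1])
```

Dropping the `onsager` term recovers the naive (first-order) mean-field equations; the paper's point is that higher truncation orders of the same expansion refine this systematically.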

Cited by 157 publications (208 citation statements). References 13 publications.
“…4 provides another way to score ligands. However, the classification accuracy does not significantly improve if the energy is estimated using the leading-order mean-field approximation (37). Although interpreting our algorithm in terms of a binding energy function requires experimental verification through binding energy measurements, we note that this interpretation offers several conceptual insights.…”
Section: Physical Model
confidence: 86%
“…As a general remark, note that the parameter estimation involves two kinds of approximations: one in the inverse problem formulas (5) and (6), that require m and χ, i.e.…”
Section: Inverse Problem for the Curie-Weiss Model
confidence: 99%
“…This amounts to studying how to infer the parameters of a model starting from the observation of real data. In particular, the application of the inverse Ising model, although known for a long time as Boltzmann machine learning [5,6], has aroused interest in recent years in many different fields (physics [1,2], neuroscience [7,8], biology [9,10], social and health sciences [11][12][13][14]), especially since the advent of the big-data age. In these applications, stemming from the assumption that the real-world system of interest is described by an Ising model with Hamiltonian H, the inverse problem amounts to fitting H to the system, i.e.…”
Section: Introduction
confidence: 99%
“…Statistical physicists have contributed significantly to building approximate learning methods for the equilibrium Ising model. Among them, the most studied are the naive mean-field approximation [6,7], the Thouless-Anderson-Palmer approximation (TAP, i.e., the first- and second-order Plefka expansion) [8], and a message-passing algorithm called belief propagation [9,10,11]. Nevertheless, the assumption of symmetric connectivity in biological networks is not realistic, and making it can lead to incorrect identification of interactions.…”
Section: Introduction
confidence: 99%
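The naive mean-field method mentioned in this excerpt, combined with the linear response relation, gives a closed-form reconstruction of Ising couplings from data: J_ij = -(C^{-1})_ij for i ≠ j, where C is the connected correlation matrix, with fields then read off the mean-field equation. A minimal sketch, restricted to two spins so the matrix inverse can be written out by hand (a real implementation would use numpy); the function name and sample format are illustrative assumptions.

```python
import math

def nmf_inverse_ising(samples):
    """Naive mean-field inverse Ising for n = 2 spins.
    `samples` is a list of +/-1 spin configurations.
    Couplings: J_ij = -(C^{-1})_ij for i != j, where
    C_ij = <s_i s_j> - m_i m_j is the connected correlation.
    Fields:    h_i = atanh(m_i) - sum_j J_ij m_j."""
    n = len(samples[0])
    assert n == 2, "sketch hand-codes the 2x2 matrix inverse"
    T = float(len(samples))
    m = [sum(s[i] for s in samples) / T for i in range(n)]
    C = [[sum(s[i] * s[j] for s in samples) / T - m[i] * m[j]
          for j in range(n)] for i in range(n)]
    # Hand-written 2x2 inverse of the correlation matrix
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    Cinv = [[C[1][1] / det, -C[0][1] / det],
            [-C[1][0] / det, C[0][0] / det]]
    J = [[0.0 if i == j else -Cinv[i][j] for j in range(n)]
         for i in range(n)]
    h = [math.atanh(m[i]) - sum(J[i][j] * m[j] for j in range(n))
         for i in range(n)]
    return J, h

# Illustrative made-up sample set:
J, h = nmf_inverse_ising([[1, 1], [1, 1], [-1, -1], [1, -1]])
```

The TAP and belief-propagation methods cited in the excerpt refine this same pipeline with higher-order corrections to the relation between C and J.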