2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS) 2017
DOI: 10.1109/focs.2017.39
Learning Graphical Models Using Multiplicative Weights

Abstract: We give a simple, multiplicative-weight update algorithm for learning undirected graphical models or Markov random fields (MRFs). The approach is new, and for the well-studied case of Ising models or Boltzmann machines, we obtain an algorithm that uses a nearly optimal number of samples and has running time Õ(n²) (where n is the dimension), subsuming and improving on all prior work. Additionally, we give the first efficient algorithm for learning Ising models over non-binary alphabets. Our main application is …
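The multiplicative-weight update at the heart of the abstract can be illustrated with a generic Hedge-style step: each coordinate (expert) is exponentially down-weighted by its incurred loss, then the weights are renormalized. This is a minimal sketch of the general technique only, not the paper's specific algorithm; the function name and the learning rate `eta` are illustrative choices.

```python
import math

def mw_update(weights, losses, eta=0.5):
    """One multiplicative-weights (Hedge) step: multiply each weight by
    exp(-eta * loss), then renormalize back to the probability simplex."""
    new_w = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    total = sum(new_w)
    return [w / total for w in new_w]

# Toy run: three experts; the first incurs unit loss every round,
# so its weight decays geometrically relative to the others.
w = [1.0 / 3, 1.0 / 3, 1.0 / 3]
for _ in range(20):
    w = mw_update(w, losses=[1.0, 0.0, 0.0])
```

After 20 rounds the weight mass concentrates on the two lossless experts, which is the behavior the regret analysis of multiplicative weights formalizes.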

Cited by 69 publications (124 citation statements). References 18 publications.
“…In the case of logistic regression, there has been a lot of work showing that under certain high-temperature conditions on the Ising model (which are similar to the assumptions we make in our paper), one can perform many statistical tasks such as learning, testing and sampling of Ising models efficiently [28,14,13,15,25,17].…”

Section: Modeling Dependence
confidence: 70%
“…In any case, what this means for our lower bound is that without ferromagneticity, even RBMs with a constant number of latent variables of constant degree inherit the hardness results of learning MRFs [7,24], which in turn follow from the popular assumption that learning sparse parities with noise is hard. For comparison, the technique used in [29] seems insufficient for this reduction: their method can only build certain noiseless functions.…”

Section: Our Results
confidence: 99%
“…In order to learn the two-hop structure of an RBM it will be necessary to have lower and upper bounds on the edge weights of the model, so we introduce the following notion of degeneracy. This is a standard assumption in the literature on learning Ising models [6,40,24]. In particular, a lower bound is needed because otherwise it would be impossible to distinguish a non-edge from an edge with an arbitrarily weak interaction.…”

Section: The Learning Problem for RBMs
confidence: 99%