The mallba project tackles the resolution of combinatorial optimization problems using algorithmic skeletons implemented in C++. mallba offers three families of generic optimization techniques: exact, heuristic, and hybrid. Moreover, for each technique, mallba provides three different implementations: sequential, parallel for local area networks, and parallel for wide area networks. This paper explains the architecture of the mallba library, presents some of the implemented skeletons, and offers several computational results to show the viability of the approach. In our conclusions we claim that the design used to develop the optimization techniques is both general and efficient, and that the resulting skeletons can outperform existing algorithms on a plethora of problems.
This paper is about Information Geometry, a relatively new subject within mathematical statistics that attempts to study the problem of inference by using tools from modern differential geometry. It provides an overview of some of the achievements of this subject and of its possible future applications to physics.
Abstract. The ongoing, unprecedented exponential explosion of available computing power has radically transformed the methods of statistical inference. What used to be the position of a small minority of statisticians, advocating the use of priors and strict adherence to Bayes' theorem, is now becoming the norm across disciplines. The evolutionary direction is now clear: the trend is towards more realistic, flexible, and complex likelihoods characterized by an ever increasing number of parameters. This gives the old question, "What should the prior be?", a new central importance in the modern Bayesian theory of inference. Entropic priors provide one answer to the problem of prior selection. The general definition of an entropic prior has existed since 1988 [1], but it was not until 1998 [2] that they were found to provide a new notion of complete ignorance. This paper re-introduces the family of entropic priors as minimizers of mutual information between the data and the parameters, as in [2], but with a small change and a correction. The general formalism is then applied to two large classes of models: discrete probabilistic networks and univariate finite mixtures of Gaussians. It is also shown how to perform inference by efficiently sampling the corresponding posterior distributions.