A Bayesian network is a compact, expressive representation of uncertain relationships among parameters in a domain. In this article, I introduce basic methods for computing with Bayesian networks, starting with the simple idea of summing the probabilities of events of interest. The article introduces the major current methods for exact computation, briefly surveys approximation methods, and closes with a brief discussion of open issues.

In rare cases, primarily involving terminological information or other artificially constructed domains, one has the opportunity to determine categorically the truth of a proposition based on prior knowledge and current observation. More often, truth is elusive, and categorical statements can only be made by judging the likelihood or some other ordinal attribute of competing propositions. Probability theory is the oldest and best-understood theory for representing and reasoning about such situations, but early AI experimental efforts at applying probability theory were disappointing and only confirmed a belief among AI researchers that those who worried about numbers were "missing the point."1 The point, so succinctly stated in Newell and Simon's physical symbol system hypothesis,2 was that structure was the key, not the numeric details.

The problem: the core around which a probabilistic approach revolves is the joint-probability distribution (JPD). Unfortunately, for domains described by a set of discrete parameters, both the size of this object and the complexity of reasoning with it directly can be exponential in the number of parameters.

A popular simplification was the naïve Bayes model. This model assumes that the probability distribution for each observable parameter (that is, the probability of each value in the domain of the parameter) depends only on the root cause and not on the other parameters.
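To make the two points above concrete, here is a minimal sketch of reasoning directly with a JPD by summing the probabilities of events of interest. The domain, the parameter names, and all of the numbers are hypothetical; the table illustrates why this representation blows up: a full table over n binary parameters needs 2^n entries.

```python
# A toy joint-probability distribution (JPD) over three binary
# parameters, indexed 0=Cold, 1=Cough, 2=Fever. All numbers are
# made up for illustration; a full table over n binary parameters
# needs 2^n entries like these.
T, F = True, False
jpd = {
    (T, T, T): 0.04, (T, T, F): 0.06, (T, F, T): 0.02, (T, F, F): 0.08,
    (F, T, T): 0.01, (F, T, F): 0.09, (F, F, T): 0.05, (F, F, F): 0.65,
}

def prob(condition):
    """Sum the probabilities of every joint event consistent with
    `condition`, a dict mapping parameter index -> required value."""
    return sum(p for event, p in jpd.items()
               if all(event[i] == v for i, v in condition.items()))

# P(Cold=True | Cough=True) = P(Cold, Cough) / P(Cough), with each
# term obtained by summing rows of the full table.
posterior = prob({0: T, 1: T}) / prob({1: T})
print(round(posterior, 6))  # 0.5
```

Every query is answerable this way, but each sum may touch an exponential number of table entries, which is exactly the intractability the article describes.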
Simplifying assumptions such as naïve Bayes permitted tractable reasoning but were too extreme: they again provided no mechanism for representing the qualitative structure of a domain.

About 10 years ago, probability, and especially decision theory, began to attract renewed interest within the AI community, the result of a felicitous combination of obstacle and opportunity: the issue of ordering possible beliefs, both for belief revision and for action selection, was seen as increasingly important and problematic, and at the same time, dramatic new developments in computational probability and decision theory directly addressed the perceived shortcomings. The key development was the discovery that a relationship could be established between a well-defined notion of conditional independence in probability theory and the absence of arcs in a directed acyclic graph (DAG). This relationship made it possible to express much of the structural information in a domain independently of the detailed numeric information, in a way that both simplifies knowledge acquisition and reduces the computational complexity of reasoning. The resulting graphic models have come to be known as Bayesian networks.
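The correspondence between missing arcs and conditional independence can be sketched with a hypothetical two-arc network, Cold → Cough and Cold → Fever (the structure and all conditional-probability values are assumptions for illustration). The absent arc between Cough and Fever asserts they are independent given Cold, so the chain rule recovers the full JPD from far fewer numbers:

```python
from itertools import product

# A two-arc DAG: Cold -> Cough, Cold -> Fever. The missing arc
# between Cough and Fever encodes their conditional independence
# given Cold, so the joint needs only 1 + 2 + 2 = 5 numbers
# instead of the 2^3 - 1 = 7 independent entries of a full table.
# All probability values here are hypothetical.
p_cold = 0.2
p_cough_given_cold = {True: 0.5, False: 0.125}   # P(Cough=T | Cold)
p_fever_given_cold = {True: 0.3, False: 0.075}   # P(Fever=T | Cold)

def joint(cold, cough, fever):
    """Recover any JPD entry via the factorization implied by the
    DAG: P(Cold) * P(Cough | Cold) * P(Fever | Cold)."""
    p = p_cold if cold else 1 - p_cold
    p *= p_cough_given_cold[cold] if cough else 1 - p_cough_given_cold[cold]
    p *= p_fever_given_cold[cold] if fever else 1 - p_fever_given_cold[cold]
    return p

# The factored form still defines a complete distribution:
total = sum(joint(*e) for e in product([True, False], repeat=3))
print(round(total, 6))  # 1.0
```

The saving is modest for three parameters, but in a sparse network over many parameters the factored form grows linearly in the number of local tables rather than exponentially in the number of parameters, which is the structural advantage the article describes.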