Computability and Logic has become a classic because of its accessibility to students without a mathematical background and because it covers not simply the staple topics of an intermediate logic course, such as Gödel's incompleteness theorems, but also a large number of optional topics, from Turing's theory of computability to Ramsey's theorem. This 2007 fifth edition has been thoroughly revised by John Burgess. Including a selection of exercises, adjusted for this edition, at the end of each chapter, it offers a simpler treatment of the representability of recursive functions, a traditional stumbling block for students on the way to the Gödel incompleteness theorems. This updated edition is also accompanied by a website as well as an instructor's manual.
Richard Jeffrey is beyond dispute one of the most distinguished and influential philosophers working in the field of decision theory and the theory of knowledge. His work is distinctive in showing the interplay of epistemological concerns with probability and utility theory. Not only has he made use of standard probabilistic and decision-theoretic tools to clarify concepts of evidential support and informed choice, he has also proposed significant modifications of the standard Bayesian position in order that it provide a better fit with actual human experience. Probability logic is viewed not as a source of judgment but as a framework for explaining the implications of probabilistic judgments and their mutual compatibility. This collection of essays spans a period of some 35 years and includes what have become some of the classic works in the literature. There is also one completely new piece, while in many instances Jeffrey includes afterthoughts on the older essays.
What's a Bayesian? Well, I'm one, for example. But not according to Clark Glymour (1980, pp. 68-69) and some other definers of Bayesianism and personalism, such as Ian Hacking (1967, p. 314) and Isaac Levi (1980, p. xiv). Thus it behooves me to give an explicit account of the species of Bayesianism I espouse (sections 1 and 2) before adding my bit (section 3, with lots of help from my friends) to Daniel Garber's treatment in this volume of the problem of new explanation of common knowledge: the so-called problem of old evidence. With Clark Glymour, I take there to be identifiable canons of good thinking that get used on a large scale in scientific inquiry at its best; but unlike him, I take Bayesianism (what I call "Bayesianism") to do a splendid job of validating the valid ones and appropriately restricting the invalid ones among the commonly cited methodological rules. With Daniel Garber, I think that bootstrapping does well, too, when applied with a tact of which Bayesianism can give an account. But my aim here is to elaborate and defend Bayesianism (of a certain sort), not to attack bootstrapping. Perhaps the main novelty is the further rounding-out in section 3 (by John Etchemendy, David Lewis, Calvin Normore, and me) of Daniel Garber's treatment of what I have always seen as the really troubling one of Clark Glymour's strictures against Bayesianism. After that there is a coda (section 4) in which I try to display and explain how probability logic does so much more than truth-value logic.

That's one way to use the term "Bayesian," but on that usage I'm no Bayesian. My sort of Bayesianism gets its name from another sense of the term "Bayes's rule," equally apt, but stemming from decision theory, not probability theory proper. Whereas Bayes's rule in Glymour's sense prescribes conditioning as the way to update personal probabilities, Bayes's rule in my sense prescribes what Wald (1950) called "Bayes solutions" to decision problems, i.e., solutions that maximize expected utility relative to some underlying probability assignment to the states of nature. (No Bayesian himself, Wald contributed to the credentials of decision-theoretic Bayesianism by proving that the Bayes solutions form a complete class.) The Reverend Thomas Bayes was both kinds of Bayesian. And of course, he was a third kind of Bayesian, too: a believer in a third sort of Bayes's rule, according to which the right probability function to start with is m* (as Carnap (1945) was to call it). Why am I not a Bayesian in Glymour's sense? This question is best answered by way of another: What is the "new evidence" on which we are to condition? (Remember: the senses are not telegraph lines on which the external world sends observation sentences for us to condition upon.) Not just any proposition that newly has probability one will do, for there may well be many of these, relative to which conditioning will yield various posterior probability distributions when applied to the prior. All right, then: what about t...
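The two senses of "Bayes's rule" that Jeffrey distinguishes here can be written out briefly. The following is a minimal sketch in standard notation, not taken from the excerpt itself; the symbols (P, E, H, U, a, s) are illustrative choices, not Jeffrey's own.

```latex
% Glymour's sense: updating personal probabilities by conditioning on
% new evidence E. The posterior for a hypothesis H is its prior
% probability conditional on E:
\[
  P_{\text{new}}(H) \;=\; P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}.
\]

% The decision-theoretic sense (Wald's "Bayes solutions"): among the
% available acts a, choose one that maximizes expected utility relative
% to some probability assignment P over the states of nature s:
\[
  a^{*} \;\in\; \arg\max_{a} \sum_{s} P(s)\, U(a, s).
\]
```

On this sketch, the first rule concerns how probabilities are to be revised, while the second concerns how a given probability assignment is to be used in choosing among acts, which is why Jeffrey can reject the former prescription while retaining the latter.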