Foundations of Probability Articles

Jon Williamson: How uncertain do we need to be? Erkenntnis 79(6):1249-1271, 2014. doi:10.1007/s10670-013-9516-6

Expert probability forecasts can be useful for decision making (§1). But levels of uncertainty escalate: however the forecaster expresses the uncertainty that attaches to a forecast, there are good reasons for her to express a further level of uncertainty, in the shape of either imprecision or higher order uncertainty (§2). Bayesian epistemology provides the means to halt this escalator, by tying expressions of uncertainty to the propositions expressible in an agent’s language (§3). But Bayesian epistemology comes in three main varieties. Strictly subjective Bayesianism and empirically-based subjective Bayesianism have difficulty in justifying the use of a forecaster’s probabilities for decision making (§4). On the other hand, objective Bayesianism can justify the use of these probabilities, at least when the probabilities are consistent with the agent’s evidence (§5). Hence objective Bayesianism offers the most promise overall for explaining how testimony of uncertainty can be useful for decision making.

Interestingly, the objective Bayesian analysis provided in §5 can also be used to justify a version of the Principle of Reflection (§6).


Jürgen Landes & Jon Williamson: Objective Bayesianism and the Maximum Entropy Principle, Entropy 15(9): 3528-3591, 2013. doi:10.3390/e15093528

Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities, they should be calibrated to our evidence of physical probabilities, and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy.
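As a rough formal gloss (notation mine, and using standard Shannon entropy where the paper works with a generalised notion):

```latex
% Minimal sketch: on a finite outcome space \Omega, with \mathbb{E} the set
% of probability functions calibrated to the evidence, the maximum entropy
% principle selects
P^{\dagger} = \operatorname*{arg\,max}_{P \in \mathbb{E}} H(P),
\qquad H(P) = -\sum_{\omega \in \Omega} P(\omega) \log P(\omega).
% Under logarithmic loss L(\omega, Q) = -\log Q(\omega), this choice also
% minimises worst-case expected loss:
P^{\dagger} = \operatorname*{arg\,min}_{Q} \, \max_{P \in \mathbb{E}} \, \sum_{\omega \in \Omega} P(\omega) \, L(\omega, Q).
```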

Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.


Jon Williamson: Why Frequentists and Bayesians Need Each Other, Erkenntnis 78:293-318, 2013. doi:10.1007/s10670-011-9317-8

The orthodox view in statistics has it that frequentism and Bayesianism are diametrically opposed – two totally incompatible takes on the problem of statistical inference. This paper argues to the contrary that the two approaches are complementary and need to mesh if probabilistic reasoning is to be carried out correctly.


Jon Williamson: Calibration and Convexity: Response to Gregory Wheeler, British Journal for the Philosophy of Science 63:851-857, 2012.

This note responds to some criticisms of my recent book In Defence of Objective Bayesianism that were provided by Gregory Wheeler in his ‘Objective Bayesian Calibration and the Problem of Non-convex Evidence’.


Jon Williamson: An objective Bayesian account of confirmation, in Dennis Dieks, Wenceslao J. Gonzalez, Stephan Hartmann, Thomas Uebel, Marcel Weber (eds), ‘Explanation, Prediction, and Confirmation. New Trends and Old Ones Reconsidered’, The Philosophy of Science in a European Perspective, Volume 2, Springer, 2011, pp. 53-81;

This paper revisits Carnap’s theory of degree of confirmation, identifies certain shortcomings, and argues that a new approach based on objective Bayesian epistemology can overcome these shortcomings.


Jon Williamson: Bruno de Finetti: Philosophical lectures on probability, Philosophia Mathematica 18(1): 130-135, 2010;


Jon Williamson: Epistemic complexity from an objective Bayesian perspective, in A. Carsetti (ed.) ‘Causality, meaningful complexity and embodied cognition’, Springer, pp. 231-246, 2010;

Evidence can be complex in various ways: e.g., it may exhibit structural complexity, containing information about causal, hierarchical or logical structure as well as empirical data, or it may exhibit combinatorial complexity, containing a complex combination of kinds of information. This paper examines evidential complexity from the point of view of Bayesian epistemology, asking: how should complex evidence impact on an agent’s degrees of belief? The paper presents a high-level overview of an objective Bayesian answer: it sets out the objective Bayesian norms concerning the relation between evidence and degrees of belief, and goes on to show how evidence of causal, hierarchical and logical structure leads to natural constraints on degrees of belief. The objective Bayesian network formalism is presented, and it is shown how this formalism can be used to handle both kinds of evidential complexity – structural complexity and combinatorial complexity.


Jon Williamson: Objective Bayesianism, Bayesian conditionalisation and voluntarism, Synthese, 178(1): 67-85, 2011;

Objective Bayesianism has been criticised on the grounds that objective Bayesian updating, which on a finite outcome space appeals to the maximum entropy principle, differs from Bayesian conditionalisation. The main task of this paper is to show that this objection backfires: the difference between the two forms of updating reflects negatively on Bayesian conditionalisation rather than on objective Bayesian updating. The paper also reviews some existing criticisms and justifications of conditionalisation, arguing in particular that the diachronic Dutch book justification fails because diachronic Dutch book arguments are subject to a reductio: in certain circumstances one can Dutch book an agent however she changes her degrees of belief.
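In outline (notation mine, not the paper’s), the two update rules being contrasted are:

```latex
% Bayesian conditionalisation: on learning evidence E with certainty,
P_{t+1}(\cdot) = P_t(\cdot \mid E).
% Objective Bayesian updating: re-apply the maximum entropy principle to
% the set \mathbb{E}_{t+1} of functions compatible with the new evidence,
P_{t+1} = \operatorname*{arg\,max}_{P \in \mathbb{E}_{t+1}} H(P).
% The two agree in many simple cases but can come apart, e.g. where the
% new evidence imposes constraints not of the form "E is certain".
```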

One may also criticise objective Bayesianism on the grounds that its norms are not compulsory but voluntary, the result of a stance. It is argued that this second objection also misses the mark, since objective Bayesian norms are tied up in the very notion of degrees of belief.


Jon Williamson: Objective Bayesianism with predicate languages, Synthese 163(3), pp. 341-356, 2008;

Objective Bayesian probability is normally defined over rather simple domains, e.g., finite event spaces or propositional languages. This paper investigates the extension of objective Bayesianism to first-order logical languages. It is argued that the objective Bayesian should choose a probability function, from all those that satisfy constraints imposed by background knowledge, that is closest to a particular frequency-induced probability function which generalises the λ = 0 function of Carnap’s continuum of inductive methods.
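For orientation, the continuum in question has the following standard form (my presentation, not quoted from the paper):

```latex
% Carnap's lambda-continuum: with t mutually exclusive types, n individuals
% observed so far, n_j of them of type j,
c_{\lambda}(\text{next individual is of type } j \mid e_n) = \frac{n_j + \lambda/t}{n + \lambda}.
% lambda = 0 yields the "straight rule" n_j/n, which tracks observed relative
% frequencies exactly; lambda -> infinity ignores the sample (always 1/t).
```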


Jon Williamson: Inductive influence, British Journal for the Philosophy of Science 58, pp. 689-708, 2007;

Objective Bayesianism has been criticised for not allowing learning from experience: it is claimed that an agent must give degree of belief 1/2 to the next raven being black, however many other black ravens have been observed. I argue that this objection can be overcome by appealing to *objective Bayesian nets*, a formalism for representing objective Bayesian degrees of belief. Under this account, previous observations exert an *inductive influence* on the next observation. I show how this approach can be used to capture the Johnson-Carnap continuum of inductive methods, as well as the Nix-Paris continuum, and show how inductive influence can be measured.


Jon Williamson: Objective Bayesian nets, in S. Artemov, H. Barringer, A. S. d’Avila Garcez, L. C. Lamb, and J. Woods (eds.): We Will Show Them: Essays in Honour of Dov Gabbay, Vol. 2, pp. 713-730, College Publications, 2005;

I present a formalism that combines two methodologies: *objective Bayesianism* and *Bayesian nets*. According to *objective Bayesianism*, an agent’s degrees of belief (i) ought to satisfy the axioms of probability, (ii) ought to satisfy constraints imposed by background knowledge, and (iii) should otherwise be as non-committal as possible (i.e. have maximum entropy). *Bayesian nets* offer an efficient way of representing and updating probability functions. An *objective Bayesian net* is a Bayesian net representation of the maximum entropy probability function.
I show how objective Bayesian nets can be constructed, updated and combined, and how they can deal with cases in which the agent’s background knowledge includes knowledge of qualitative *influence relationships*, e.g. causal influences. I then sketch a number of applications of the resulting formalism, showing how it can shed light on probability logic, causal modelling, logical reasoning, semantic reasoning, argumentation and recursive modelling.
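A toy computational sketch (my illustration, not the paper’s algorithm): find the maximum entropy distribution over two binary variables subject to a single hypothetical calibration constraint, then read off the Bayesian net parameters.

```python
# Sketch: maximum entropy over two binary variables A, B subject to the
# hypothetical constraint P(A=1) = 0.7, with joint states ordered
# (A,B) = (0,0), (0,1), (1,0), (1,1).
import numpy as np
from scipy.optimize import minimize

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)  # guard against log(0)
    return float(np.sum(p * np.log(p)))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},    # probabilities sum to 1
    {"type": "eq", "fun": lambda p: p[2] + p[3] - 0.7},  # P(A=1) = 0.7
]
res = minimize(neg_entropy, x0=np.full(4, 0.25),
               bounds=[(0.0, 1.0)] * 4, constraints=constraints)
p = res.x

# With only a marginal constraint on A, the maximum entropy function renders
# A and B probabilistically independent, so the net needs no arrow between
# them: its parameters are just P(A=1) and P(B=1).
print("P(A=1)       =", p[2] + p[3])            # ~0.7
print("P(B=1 | A=1) =", p[3] / (p[2] + p[3]))   # ~0.5
print("P(B=1 | A=0) =", p[1] / (p[0] + p[1]))   # ~0.5, confirming independence
```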


Jon Williamson: Motivating objective Bayesianism: from empirical constraints to objective probabilities, in William L. Harper and Gregory R. Wheeler (eds.): Probability and Inference: Essays in Honor of Henry E. Kyburg Jr. London: College Publications, 2007, pp. 155-183;

Kyburg goes half-way towards objective Bayesianism. He accepts that frequencies constrain rational belief to an interval but stops short of isolating an optimal degree of belief within this interval. I examine the case for going the whole hog.


Jon Williamson: Philosophies of probability, in Andrew Irvine (ed.): Handbook of the Philosophy of Mathematics, Volume 4 of the Handbook of the Philosophy of Science, North-Holland, 2009, pp. 493–533;

This chapter presents an overview of the major interpretations of probability followed by an outline of the objective Bayesian interpretation and a discussion of the key challenges it faces. I discuss the ramifications of interpretations of probability and objective Bayesianism for the philosophy of mathematics in general.


Jon Williamson: Maximising entropy efficiently, Electronic Transactions in Artificial Intelligence 6, 2002;

Determining a prior probability function via the maximum entropy principle can be a computationally intractable task. However, one can easily determine – in advance of entropy maximisation – a list of conditional independencies that the maximum entropy function will satisfy. These independencies can be used to reduce the complexity of the entropy maximisation task. In particular, one can use them to construct the directed acyclic graph of a Bayesian network, and then maximise entropy with respect to the numerical parameters of this network. This can result in an efficient representation of a prior probability function, one that may also allow efficient updating and marginalisation. The computational complexity of maximising entropy can be further reduced when knowledge of causal relationships is available. Moreover, the proposed simplification of the entropy maximisation task may be exploited to construct a proof theory for probabilistic logic.
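Schematically (my gloss on the factorisation that drives the complexity reduction):

```latex
% If the maximum entropy function P satisfies the Markov condition for a
% DAG over variables V_1, ..., V_n with parent sets Par_i, it factorises as
P(v_1, \ldots, v_n) = \prod_{i=1}^{n} P(v_i \mid par_i),
% and by the chain rule its entropy decomposes into local terms,
H(P) = \sum_{i=1}^{n} H(V_i \mid Par_i),
% so entropy can be maximised over the conditional probability tables of
% the network rather than over the exponentially larger joint distribution.
```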


Jon Williamson: Bayesianism and language change, Journal of Logic, Language and Information, 12(1), 2003, pp. 53-97.

Bayesian probability is normally defined over a fixed language or event space. But in practice language is susceptible to change, and the question naturally arises as to how Bayesian degrees of belief should change as language changes. I argue here that this question poses a serious challenge to Bayesianism. However, the Bayesian may be able to meet this challenge, and I outline a practical method for changing degrees of belief over changes in finite propositional languages.


Jon Williamson & David Corfield: Bayesianism into the 21st century, in David Corfield & Jon Williamson (eds): ‘Foundations of Bayesianism’, Kluwer Applied Logic Series, 2001, pp. 1-16.


Jon Williamson: Countable additivity and subjective probability, British Journal for the Philosophy of Science 50(3), 1999, pp. 401-416.

While there are several arguments on either side, it is far from clear whether countable additivity is an acceptable axiom of subjective probability. I focus here on de Finetti’s central argument against countable additivity and provide a new Dutch book proof of the principle, arguing that if we accept the Dutch book foundations of subjective probability, countable additivity is an unavoidable constraint.
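To convey the flavour of such a proof (a standard textbook-style sketch; the paper’s own construction may differ):

```latex
% Suppose the agent sets P({n}) = 0 for every natural number n yet
% P(N) = 1, violating countable additivity:
\sum_{n \in \mathbb{N}} P(\{n\}) = 0 \;\neq\; 1 = P\Big(\bigcup_{n \in \mathbb{N}} \{n\}\Big).
% She then deems fair: paying 1 for a bet returning 1 if X \in \mathbb{N},
% and selling, for each n, at price 0, a bet paying 1 if X = n. Whatever
% value m is realised, her net gain is
-1 \;+\; 1 \;-\; 1 \;=\; -1,
% so countably many individually fair bets jointly guarantee a sure loss.
```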


Jon Williamson: Foundations for Bayesian networks, in David Corfield & Jon Williamson (eds): ‘Foundations of Bayesianism’, Kluwer Applied Logic Series, 2001, pp. 75-115. Presented at Bayesianism 2000 (May 11-12, 2000).

Bayesian networks may either be treated purely formally or be given an interpretation. I argue that current foundations are problematic, and put forward new foundations which involve aspects of both the interpreted and the formal approaches.