Thursday 11th and Friday 12th May 2000

Council Room, King’s College, Strand, London, UK

Organised by philosophy.ai (http://www.kcl.ac.uk/philosophy.ai)

in association with King’s College London Centre for Philosophical Studies

*** Judea Pearl was going to speak, but he is unable to come due to a medical problem in the family. The other talks on Thursday have been moved forward. ***

### Timetable

**Thursday 11 May**

*Bayesianism and AI*

**09.30-11.00** Peter Williams (Sussex): “Probabilistic Learning Models”

The paper will review the use of Bayesian methods in supervised and unsupervised learning in artificial intelligence. A key issue is the need to find a balance between model complexity and information content of the data. Examples of the Bayesian approach will be discussed.
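The trade-off between model complexity and the information in the data is often illustrated (outside this talk) by Bayesian model comparison via marginal likelihoods, where a flexible model is automatically penalised for spreading its prior thinly. A minimal sketch, using a made-up coin-flip example rather than anything from the paper: compare a fixed fair-coin model against a model with a uniform prior over the bias.

```python
from math import comb

def marginal_fair(n, k):
    # Fair-coin model: probability of a specific sequence of n flips
    # with k heads under P(heads) = 0.5. No free parameters.
    return 0.5 ** n

def marginal_uniform(n, k):
    # Flexible model: P(heads) = p with p uniform on [0, 1].
    # The marginal likelihood integrates p^k (1-p)^(n-k) over p,
    # which equals the Beta function B(k+1, n-k+1) = 1 / ((n+1) * C(n, k)).
    return 1.0 / ((n + 1) * comb(n, k))

# Balanced data (5 heads in 10): the simple model has the higher
# marginal likelihood; strongly skewed data (9 heads in 10) favours
# the flexible model instead.
print(marginal_fair(10, 5), marginal_uniform(10, 5))
print(marginal_fair(10, 9), marginal_uniform(10, 9))
```

The flexible model only wins when the data carry enough information to justify its extra complexity, which is the balance the abstract refers to.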

**11.30-13.00** Jon Williamson (King’s College London): “Foundations for Bayesian networks”

Bayesian networks may either be treated purely formally or be given an interpretation. I argue that many interpretations are untenable, and put forward new foundations which involve aspects of both the interpreted and the formal approaches.

*History of Bayesianism*

**14.00-15.30** Maria Carla Galavotti (Bologna): “Subjectivism and Objectivity in Bruno De Finetti’s Bayesianism”

The paper will focus on Bruno de Finetti’s position, which combines Bayesianism with a strictly subjective interpretation of probability. For de Finetti probability is always subjective and expresses the degree of belief of the evaluating subject. His perspective does not accommodate a notion of ‘objective chance’ in the way other Bayesians like Harold Jeffreys or Frank Ramsey do. For de Finetti the only problem is that of the evaluation of probabilities in a Bayesian framework. His ideas on this point will be reviewed.

*Bayesianism in Mathematics and Science*

**16.00-17.30** David Corfield (King’s College London): “Bayesianism in Mathematics”

In this paper I shall discuss the idea of viewing plausible mathematical reasoning in Bayesian terms. I argue that a Bayesian account of scientific reasoning requires that it be extended to mathematics. However, the price to pay for such an extension is a weakening of the tenet which holds that logically equivalent sentences must be believed with equal confidence. I discuss George Polya’s Bayesian account of mathematical reasoning and consider the prospects for developing it into a useful representation of uncertainty in mathematics.

**Friday 12 May**

*Bayesianism in Mathematics and Science (cont.)*

**09.30-11.00** Colin Howson (London School of Economics): “The Logical Basis of Uncertainty”

Down to this century many workers in epistemic probability believed that they were doing logic. This is an idea that has been largely abandoned following the failure of Carnap’s Logical Foundations of Probability programme. A new way of displaying the laws of epistemic probability as logic was proposed by Ramsey, but it was never properly followed up. I believe that Ramsey’s programme not only can be completed, but that the sort of logical interpretation he proposed has great problem-solving power.

**11.30-13.00** Philip Dawid (University College London): “Subjective Beliefs and the Empirical World”

Bayesianism, like classical Logic, is a system for keeping one’s internal beliefs self-consistent. Neither theory is concerned with whether those beliefs are in any sense “true” beliefs about the real world. Other theories, such as Popper’s, do attempt to address the relationship between theory and the empirical world. I shall argue that there is a need for both approaches, and outline a “Popperian Bayesian” philosophy of science, based on the concept of probability calibration.
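Probability calibration, the concept the talk builds on, is a simple empirical test: among the occasions on which a forecaster announces probability p, the event should occur with relative frequency close to p. A minimal sketch of such a check (an illustration, not anything from the talk):

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes):
    """Compare each announced probability with the empirical frequency
    of the event among the occasions on which it was announced.
    A well-calibrated forecaster has frequency ~ p for every p."""
    bins = defaultdict(lambda: [0, 0])  # announced p -> [events, occasions]
    for p, happened in zip(forecasts, outcomes):
        bins[p][0] += happened
        bins[p][1] += 1
    return {p: events / total for p, (events, total) in bins.items()}

# A forecaster who says 0.8 five times should see the event about
# four times out of five.
print(calibration_table([0.8, 0.8, 0.8, 0.8, 0.8], [1, 1, 1, 0, 1]))
```

In practice forecasts would be grouped into intervals rather than matched exactly, but the principle is the same: calibration ties subjective probabilities to observable frequencies.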

*Criticisms of Bayesianism*

**14.00-15.30** Deborah Mayo (Virginia Tech): “Taking Bayesian Criticisms Seriously”

The predominance of Bayesian statistics in philosophy of science has grown out of long-standing criticisms of non-Bayesian (error statistical) methods. So well-entrenched are these criticisms and the assumptions upon which they are based, that many contemporary philosophers of science are largely unaware of the nature, roles, and importance of non-Bayesian (Neyman-Pearson, Fisherian) methods in scientific methodology. Where there are controversies in the use of things like statistical significance tests (e.g., in psychology), controversies which cry out for careful philosophical analysis, philosophers of science have too rarely been on the scene; and where error-statistical insights could shed light on philosophers’ conundrums (e.g., about evidence, methodology, causal inference), few philosophers are in a position to make use of them. Perhaps it will help to take the Bayesian criticisms of error-statistical methods seriously, and that is the goal of this paper. Three broad categories of criticisms concern: (1) how to interpret “decisions” to accept or reject hypotheses “evidentially”; (2) how to justify “subjective” choices of error probabilities of tests; and (3) how to justify violating the Likelihood Principle. In responding to these I hope to pinpoint the fundamental contrasts between Bayesian and error-statistical philosophies of evidence.

**16.00-17.30** Max Albert (Koblenz-Landau): “Bayesian learning and expectations formation: anything goes”

The Bayesian approach to learning and expectations formation implies no proposition about rational agents that could conceivably be refuted by observation. Even if an agent’s von Neumann-Morgenstern utility function is given, any strategy can still be rationalized by a suitable choice of priors. The paper discusses this result and its implications for Bayesianism as a positive economic theory and as a normative theory.
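The “anything goes” point can be illustrated with a toy example (my own, not from the paper): two agents with the same utility and the same data take opposite actions, simply because their priors differ. Here two hypotheses about a coin (bias 0.7 or 0.3) are updated on a single observed head, and the agent bets on whichever outcome of the next flip is more probable.

```python
def posterior_high(prior_high):
    # Two hypotheses about a coin: P(heads) = 0.7 ("high") or 0.3 ("low").
    # Bayes' rule after observing one head.
    num = 0.7 * prior_high
    return num / (num + 0.3 * (1 - prior_high))

def best_action(prior_high):
    # Same utility for both agents: bet on the more probable outcome
    # of the next flip under the posterior predictive distribution.
    post = posterior_high(prior_high)
    p_next_head = post * 0.7 + (1 - post) * 0.3
    return "bet heads" if p_next_head > 0.5 else "bet tails"

# Identical data and utilities; the prior alone decides the action.
print(best_action(0.50))  # an even-handed prior
print(best_action(0.05))  # a prior sceptical of the "high" hypothesis
```

Since the prior is unconstrained, either action can be rationalized, which is why observed behaviour alone cannot refute the Bayesian model of the agent.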

### Attendance

Registration fee: £30 sterling before 31 March 2000, £40 sterling thereafter. Please send cheques payable to “King’s College London” to Jon Williamson & David Corfield, Department of Philosophy, King’s College, WC2R 2LS, UK. All enquiries to philosophy.ai@kcl.ac.uk.