
What’s the difference between data and evidence?

This is a question that came up while I was writing a talk about the difficulties that might be encountered when translating evidence policies from one context to another for my home department’s Annual Research Day a year or so ago. You can find a copy of the slides here.

The plan was to say something about the way that EBM has influenced non-medical decision-making. The original rationale for EBM was a) to de-emphasise individual judgement, based on clinical experience, as a sufficient foundation for making care decisions and b) to instead base care decisions on evidence, particularly that arising from clinical trials. To quote perhaps the most widely-cited paper on the subject, EBM is the:

“conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients” (Sackett et al 1996)

However, a cursory glance at the topics of the articles citing Sackett – all 9845 of them, at the time of writing – suggests that there is growing interest in exporting this method of making decisions far outside the original context of medicine. These include papers on education policy, social work and – most interesting of all – one dealing with architecture as a means of crime control. While an analysis of the reasons for this wide circulation would be fascinating (and is hopefully the subject of a later post), it’s a bit beyond what I want to talk about here. Instead, I want to simply claim that EBM’s tools and tactics have had a really wide circulation in the last 10 years or so, with the most visible new locus of practice in the evidence-based policy (EBP) movement.

Yet this change in application poses tough questions about translation. How should EBM – a method that depends on practices that are pretty specific to medicine – be modified to give useful answers to those making decisions in other contexts? A further puzzle concerns the role of philosophers of science in all this. While there are many questions here that might benefit from a philosophical treatment of one kind or another, the contributions from philosophers have not been terribly helpful to this conversation. Given that I really believe that philosophers can and do meaningfully contribute to this kind of conversation, I will conclude by suggesting a few ways that we might provide a more useful (and more critical) contribution to the philosophy of evidence-based something. To illustrate this, I’d like to talk about one specific question thrown up by the circulation of practices from EBM to EBP. This starts with an ostensibly simple question: what’s the difference between data and evidence?

The data-evidence distinction

Why care about this distinction? Well, it appears to be one that gets made very frequently in EBP. We can find lots of examples of practitioners making distinctions between data and evidence. My quick web search this afternoon threw up examples from the UN’s Data Unity Network, the South Downs National Park Authority and the Marine Management Organisation.

But it’s not very clear from these examples exactly how this distinction gets made. Is the distinction something that comes over to EBP from EBM? Well, I think the short answer here is ‘no’. I can’t find a detailed analysis of any such data/evidence distinction in the EBM literature. However, my intuition (and perhaps one that I might be able to defend if pushed) is something like this: EBM proponents typically claim that evidence alone should be used when making decisions about healthcare (look at the Sackett quote above). Yet this evidence often depends on data gathered during, for instance, clinical trials. Here then, data and evidence can be locally distinguished. Information about individual trial subjects is data. But once aggregated via appropriate statistical work, and reported as the result of a trial, it becomes evidence, which can then be used to address a clinical question.

This local distinction isn’t very helpful outside EBM. Perhaps because EBP decisions often involve looking at processes only measurable at a group level (in economics, for instance), the EBM distinction between individual data and group evidence is unlikely to be applicable. So the data/evidence distinction that is being made in the examples above can’t just be made in the same way as it is in EBM. Can we find some more general way of distinguishing data from evidence by looking at the literature on the philosophy of evidence?

Philosophers and the data-evidence distinction

Well, at the outset, looking to philosophers of science for help with this question appears promising. There is a great deal of philosophical work on evidence, and some of it contains distinctions between data and evidence. Perhaps it might be possible to translate some of this work to the EBP context? Let’s take a closer look at some of this philosophical work. I’ve picked a pair of ways of making the data-evidence distinction that have appeared in the philosophy of probability literature:

Mayo’s error-statistical philosophy of evidence

Mayo’s idea is that evidence describes a special subset of our data. More precisely, when a particular hypothesis is tested using a particular set of data (arising from a clinical trial, say), that data becomes evidence in relation to that hypothesis.

data x are evidence for a hypothesis H to the extent that H passes a severe test with x. (Mayo 2004: 79)

This seems a pretty plausible way of making the data/evidence distinction that might be suitable for either EBM or EBP.
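To make the severity idea a little more concrete, here is a rough computational sketch. This is my own toy illustration, not Mayo’s formalism: I assume a simple binomial set-up in which the hypothesis H is that a treatment’s response rate exceeds 0.5, and I treat severity as the improbability of getting results that accord this well with H if H were false. The numbers and function names are mine, chosen purely for illustration.

```python
# Toy illustration (my sketch, not Mayo's own formalism) of the severity idea:
# data x count as evidence for H to the extent that results fitting H this
# well would have been improbable were H false.
from math import comb

def tail_prob(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def severity(k, n, p0=0.5):
    """Severity with which H: 'rate > p0' passes, given k successes in n trials.
    High severity means such data would be improbable if H were false
    (i.e. if the true rate were at most p0)."""
    return 1 - tail_prob(k, n, p0)

# 80/100 successes probe H sternly; 52/100 is barely better than chance,
# so the same hypothesis passes with much lower severity.
print(severity(80, 100))
print(severity(52, 100))
```

On this sketch, the very same numbers are just data until they are put to work in a severe test of a specific hypothesis; only then do they acquire the status of evidence for it.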

Subjective Bayesian view of evidence

This view essentially distinguishes data from evidence by defining evidence negatively. Here, the primitive concept is the acceptance of some evidential statement. Anything that leads to that statement is (basically) irrelevant, or at least left undefined by the theory. For us, this might well include data.

The Bayesian theory of support is a theory of how the acceptance as true of some evidential statement affects your belief in some hypothesis. How you came to accept the truth of the evidence, and whether you are correct in accepting it as true, are matters that, from the point of view of the theory, are simply irrelevant. (Howson and Urbach 1993: 419)

Here, then, the idea is that evidence is constituted by those statements that affect belief in some hypothesis. Everything that leads to these statements – data, for example – is lumped together as an irrelevance. Like Mayo’s distinction, this also seems a pretty plausible way of making the data/evidence distinction that might be suitable for either EBM or EBP.
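The Howson and Urbach picture can be sketched in a few lines. This is a minimal illustration under my own assumptions: the evidential statement e (say, ‘the trial reported a positive result’) is simply accepted as true, and belief in H is updated by Bayes’ theorem; the probabilities are invented for the example, not drawn from any real trial.

```python
# Minimal sketch of the subjective Bayesian picture quoted above.
# The theory starts from an accepted evidential statement e and updates
# belief in H by Bayes' theorem. How e was produced, i.e. the underlying
# trial data, plays no role inside the theory.

def bayes_update(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    """Posterior P(H | e) from P(H), P(e | H) and P(e | not-H)."""
    numerator = likelihood_e_given_h * prior_h
    marginal = numerator + likelihood_e_given_not_h * (1 - prior_h)
    return numerator / marginal

# Illustrative numbers only: e is three times likelier if H is true,
# so accepting e raises belief in H from 0.20 to about 0.43.
posterior = bayes_update(prior_h=0.2,
                         likelihood_e_given_h=0.9,
                         likelihood_e_given_not_h=0.3)
print(round(posterior, 3))  # prints 0.429
```

Notice that nothing in the calculation mentions how e came to be accepted; that is exactly the sense in which everything upstream of the evidential statement, data included, is ‘simply irrelevant’ on this view.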

So what’s the problem?

Given that both ways of distinguishing data and evidence seem (at least) plausible, which should we prefer to use in practice? For the examples cited, this is where things start to get a bit tricky. As I’ve hinted above, each of these distinctions is rooted in a different theory of probability. Mayo’s distinction comes from the frequentist Neyman-Pearson tradition, while Howson and Urbach’s comes from subjective Bayesianism. Given that both methods appear to provide us with a means of making clear distinctions between data and evidence, the decision about how to make this distinction presumably follows from an earlier decision to adopt one or other general theory of probability.

But picking a general theory of probability is no small matter, either philosophically (see Gillies 2000 for background) or practically. At the very least, the choice of theory shapes the kinds of statistical methods that are appropriate, leading to all kinds of implications for experimental design and so on. And suggesting that we decide how to distinguish data from evidence by first deciding on a general theory of probability is not terribly helpful either (in any case, these kinds of discussions usually regress into ‘theory x is better than theory y’ foot-stamping). So it is not clear to me just which way of making the distinction we should prefer. However, a more local conclusion is a bit more positive: any distinction that we draw between data and evidence should probably follow whichever general theory of probability is in use.


Gillies, D. 2000. Philosophical Theories of Probability. Routledge.

Howson, C. and Urbach, P. 1993. Scientific Reasoning: The Bayesian Approach. Open Court.

Mayo, D. 2004. “An Error-Statistical Philosophy of Evidence,” in Taper and Lele (eds.) The Nature of Scientific Evidence: Statistical, Philosophical and Empirical Considerations. University of Chicago Press: 79-118.

Sackett, D., Rosenberg, W., Gray, J., Haynes, R., and Richardson, W. 1996. “Evidence based medicine: what it is and what it isn’t.” British Medical Journal 312(7023): 71-2.