Author Archives: phyllis.illari@ucl.ac.uk

New knowledge and what to measure

I have been struck this week by how issues to do with evidence of mechanism have arisen in all my classes, and by how students from multiple scientific backgrounds have set about understanding evidence in science in very different ways.

On my masters course, students studying causality were looking particularly at evidential pluralism in health: at the idea that evidence of correlation, even from well-conducted large-scale observational studies or randomised controlled trials, can mislead us in many ways, and that evidence of mechanism can be used as a useful complement.

My masters students come from backgrounds in medicine, biochemistry, geology and social science, and were struck by what a complicated, integrated thing evidence of causality is.  Discussion ranged over issues that cross different sciences, before homing in on a question pertinent to all trials: what to measure, and how to measure it.

We began by looking at the historical Medical Research Council trials of streptomycin, and why the investigators chose, during the trials, to study X-rays of patients’ lungs, and also to test bacteria in the treatment group to see whether they were developing resistance to streptomycin over the course of the four months of treatment.  It is difficult to see how such choices could be made except in terms of a background theory of the mechanism of action of both disease and cure.  A student argued, further, that such choices are always contextual, because there is never going to be a one-size-fits-all story for when *no evidence of mechanism* counts as *evidence of no mechanism* – for when you have looked hard enough for a hypothesised mechanism to be sure that it is not there.

Discussion then ranged to more recent attempts to investigate the social determinants of health, including claims such as: socioeconomic status causes ill health, and stress causes ill health.  Another student asked how we should conceive of such questions, pointing out that we try to investigate such claims by studying how lower-level variables, such as housing, education and so on, cause ill health.  The student argued that we often look at the same or very similar low-level variables to try to understand how these apparently different social factors could cause ill health – and this may well be because we don’t know what the mechanism is.  The idea the class developed was that there will always be cases where – when we don’t know – the process of coming to understand mechanisms is intertwined with the process of deciding what variables to measure when looking for correlations.

The next day brought a class of third-year students studying philosophy of the natural sciences, coming to the class from science and technology studies, physics, medicine, biochemistry, social sciences and philosophy.  We were discussing data science and curation practices, including visualisation techniques that let you assess data.  I was struck by a unifying dream, here and in some approaches to medical evidence: that of data-driven science, objective and free of bias.

The class examined Leonelli’s discussion of the various ways in which embodied knowledge of the target material, and of the technologies and practices of the disciplines studying, for example, Arabidopsis, is essential to getting big interactive, interdisciplinary databases of Arabidopsis data to work.

Again, students noticed how important our theoretical understanding, alongside practical expertise, is in allowing us to build and use technology, and to choose the data points to put into databases – including choosing how to tag data with metadata about its origins, thus allowing data to ‘travel’, in Leonelli’s evocative terminology.

So, in a certain sense, these two classes were concerned with exactly the same problem: until we have some theoretical, and possibly also practical, understanding – which in medicine comes from understanding mechanisms – we will not converge even on the basic data points, the basic variables to measure, to put into our databases or our study designs, in order even to begin data-driven science.

A final thank-you to my students for a very interesting week!

References

Sabina Leonelli (2014). Perspectives on Science, Volume 22, Issue 3 (Fall 2014), Special Issue on Simulation, Visualization, and Scientific Understanding, Henk W. de Regt and Wendy S. Parker (guest eds.).

Special issue on evidence and causality published

Phyllis Illari & Federica Russo (eds.): Evidence and Causality in the Sciences, a special issue of Topoi (Volume 33, Issue 2), was published yesterday. It can be found here.

The collection of papers examines how we gain evidence of causality in various sciences.

Several papers examine evidence in medicine, including Barbara Osimani’s discussion of evidence of unintended effects; Jacob Stegenga’s arguments against the very idea of evidence hierarchies; Margaret MacDougall’s guide to when statistical evidence looks too good to be true; and Clarke, Gillies, Illari, Russo and Williamson’s examination of evidence of mechanism.