Tag Archives: evidence

Red death and reference classes

I’d originally planned to write something this week on the announcement that the Nobel prize in Physiology/Medicine has been awarded to Campbell, Ōmura and Tu. There’s lots of possible interest here – the Neglected Tropical Disease angle, or the unusual military aspect to be found in the intellectual history of Tu’s work on artemisinin. However, I’ve been distracted by something that came out of S. Lochlann Jain’s excellent new-ish book Malignant: How Cancer Becomes Us, which I’ve been avidly reading this week.


End of the year

It’s almost the last workday of 2014 for me today, and a busy last few weeks means that I’m looking forward to the Christmas break. All things being well, I’ll be able to spend some quality time with two books that I’ve been meaning to read for ages.

Some holiday reading…

Götzsche, Peter. 2013. Deadly Medicines and Organised Crime. Radcliffe.

The first of these is not, I’m afraid, cheerful holiday reading. In fact, if you’d like a brief summary of the most important bit of Götzsche’s argument, the title of a recent paper – Our prescription drugs kill us in large numbers – tells you most of what you need to know. The book follows on from this: page 1 begins with the quote “drugs are the third leading cause of death after heart disease and cancer.”

This is shocking stuff, which becomes even more troubling because of the lucid way that Götzsche lays out his supporting evidence. On his account, the mechanisms by which drugs became so dangerous are largely down to the conduct of the pharmaceutical industry. I’m planning a proper review of the book for some time in the new year, but by way of a taster, Götzsche argues that drug companies are largely free to mislead clinicians about both the safety and efficacy of drugs, and that this impunity has largely come about through a shortfall of regulatory oversight.

This brings me to my second lump (affectionately!) of holiday reading, which deals with the big daddy of the pharma regulators: the FDA. Götzsche argues that the FDA has become completely subordinate to the drug industry. But I think that there might be more to say on the historical side here. Luckily, so did Daniel Carpenter, whose book deals with just that.

Carpenter, Daniel. 2014. Reputation and power: organizational image and pharmaceutical regulation at the FDA. Princeton University Press.

This book came out of the FDA project at Harvard. This seems surprising to me, given that the book itself was published by a different Ivy League press. But no matter what intrigue may have led to this route of publication, I’m told that it is a startlingly thorough and penetrating account of how the FDA came to be. Until I’ve spent much more time with it, though, I’ll hold off trying to summarise 700-odd pages of detailed argument.

A New Year’s resolution


Whatever you think of Peter Götzsche’s overall analogies between the pharmaceutical industry and members of the Soprano family, I think that trials conducted in secret have to be of concern. That’s why I’m also making a New Year’s resolution to be more vocal in my support of the AllTrials campaign. You can find out more on their website, but the least that you need to know is that…

AllTrials calls for all past and present clinical trials to be registered and their full methods and summary results reported.

I’m signing their petition, and urge you to sign too, largely because finding things out in medicine is already hard enough without concealed trial results.

New knowledge and what to measure

I have been struck this week by how issues to do with evidence of mechanism have arisen in all my classes, and by how students from multiple scientific backgrounds have set about attempting to understand evidence in science in very different ways.

On my masters course, students studying causality were looking in particular at evidential pluralism in health: the idea that there are many ways in which evidence of correlation, even from well-conducted large-scale observational studies or randomised controlled trials, can mislead us, and that evidence of mechanism can serve as a useful complement.

My masters students come from backgrounds in medicine, biochemistry, geology and social science, and were struck by what a complicated, integrated thing evidence of causality is. Discussion ranged over issues that cross different sciences, before homing in on the question, pertinent to all trials, of what to measure and how to measure it.

We began by looking at the historical Medical Research Council trials of streptomycin, asking why the investigators chose, during the trials, to study X-rays of patients’ lungs, and also to test bacteria from patients in the treatment group to see whether they were developing resistance to streptomycin over the course of the 4 months of treatment. It is difficult to see how such choices could be made except in terms of a background theory of the mechanism of action of both disease and cure. One student argued, further, that such choices are always contextual, because there is never going to be a one-size-fits-all story for when *no evidence of mechanism* counts as *evidence of no mechanism* – for when you have looked well enough for a hypothesised mechanism to be sure that it is not there.

Discussion then moved to more recent attempts to investigate the social determinants of health, including claims such as: socioeconomic status causes ill health, and stress causes ill health. Another student asked how we should conceive of such claims, pointing out that we try to investigate them by studying how lower-level variables, like housing, education, and so on, cause ill health. The student argued that we often look at the same or very similar low-level variables to try to understand how these apparently different social factors could cause ill health – and this may well be because we don’t know what the mechanism is. The idea developed by the class was that there are always going to be cases where – when we don’t know – the process of coming to understand mechanisms will be intertwined with the process of deciding what variables to measure when looking for correlations.

The next day, I taught a class of third-year students studying philosophy of the natural sciences, who came to the class from science and technology studies, physics, medicine, biochemistry, social sciences and philosophy. We were discussing data science and curation practices, including visualisation techniques that let you assess data. I was struck by a unifying dream, here and in some approaches to medical evidence: data-driven science, objective and without bias.

The class examined Leonelli’s discussion of the various ways in which embodied knowledge of the target material, and of the technologies and practices of the disciplines studying (for example) Arabidopsis, is essential to getting big interactive, interdisciplinary databases of Arabidopsis data to work.

Again, students noticed how important our theoretical understanding, alongside practical expertise, is in allowing us to build and use technology, and choose the data points to put into databases – including choosing how to tag data with metadata about its origins, so allowing data to ‘travel’, in Leonelli’s evocative terminology.
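To make ‘tagging data with metadata’ concrete, here is a minimal, hypothetical sketch of a curated data point. Every field name is my own invention for illustration – not Leonelli’s, and not any real database schema:

```python
# A hypothetical sketch of a curated data point. Field names are
# invented for illustration; real plant-science databases use richer,
# community-agreed ontologies for exactly this job.

datapoint = {
    "value": 4.2,                       # the measurement itself
    "measurement": "rosette diameter (cm)",
    "organism": "Arabidopsis thaliana",
    "ecotype": "Col-0",
    # Provenance metadata: what lets the datum 'travel' to new contexts.
    "lab": "hypothetical lab, University X",
    "instrument": "flatbed scanner, 600 dpi",
    "growth_conditions": "22 °C, 16 h light / 8 h dark",
    "date": "2014-03-15",
}

# A re-user in another discipline can filter on this metadata before
# trusting the value, rather than taking the number at face value.
compatible = datapoint["ecotype"] == "Col-0" and "scanner" in datapoint["instrument"]
print(compatible)
```

The point of the sketch is that deciding which of these fields to record at all is itself a theoretically loaded choice, made before any data-driven analysis can begin.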

So, in a certain sense, these two classes were concerned with exactly the same problem: until we have some theoretical, and possibly also practical, understanding – which in medicine comes from understanding mechanisms – we will not converge even on the basic data points: the basic variables to measure, and to put into our databases or our study designs, in order even to begin data-driven science.

A final thank-you to my students for a very interesting week!

References

Leonelli, S. 2014. In: Perspectives on Science 22(3), Fall 2014 – Special Issue on Simulation, Visualization, and Scientific Understanding, Henk W. De Regt and Wendy S. Parker (guest eds.).

Not-knowing about the aetiology of cervical cancer: a puzzle about absence of evidence

Brendan Clarke, UCL

b.clarke@ucl.ac.uk


I’m due to give a departmental seminar on ignorance at HPS in Cambridge on the 27th November. The idea – which, as usual, I’m still fleshing out – is to explore a question about evidence in medicine using a mixture of historical and philosophical work that deals, loosely, with questions about ignorance. I’ll get to an outline of this integrated HPS work in a moment, once I’ve outlined the problem which was my starting point for this research.

This is likely to be a familiar question to philosophers (and scientific practitioners): what’s the difference between absence of evidence and evidence of absence? It is a rather well-worn question, and the usual answer is that there is some kind of important difference between them (see Sober 2009; Strevens 2009 for examples). Rather than attempting to add to the stock of answers to this general problem, though, I wanted to explore a more constrained version that we might encounter when thinking about EBM.

Picture two scenarios where we are looking for evidence to support a claim that a particular drug is an effective treatment for a particular disease. Should we think differently about cases:

a) where there is no evidence to be found because no research has been conducted

and

b) where there is no evidence to be found, despite much research having been conducted?

My intuition is strongly that we should think differently about these cases when it comes to considering what we should believe about the possible efficacy of our drug. In case b), we should seriously doubt whether the drug is effective, whereas in case a) we just don’t know whether it is or not. In general, I think that the story about how we happen to not know about a particular hypothesis should constrain our beliefs about that hypothesis, and one important part of this story concerns how hard researchers have looked for evidence bearing on it.
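For what it’s worth, the intuition can be given a minimal Bayesian gloss. The sketch below is mine, not part of any published analysis; the prior and the per-trial power are invented illustrative numbers, and the model assumes, unrealistically, that an ineffective drug never yields a false positive:

```python
# A minimal Bayesian sketch of the intuition above. All numbers are
# hypothetical, and an ineffective drug is assumed never to produce a
# (false) positive result.

def posterior_efficacy(prior: float, n_trials: int, power: float) -> float:
    """P(drug works | n trials conducted, none finding evidence of effect)."""
    p_silence_if_works = (1 - power) ** n_trials   # all n trials miss a real effect
    p_silence_if_not = 1.0                         # no effect, so nothing to find
    joint_works = prior * p_silence_if_works
    return joint_works / (joint_works + (1 - prior) * p_silence_if_not)

prior = 0.5
print(posterior_efficacy(prior, n_trials=0, power=0.8))   # case a): 0.5 - belief unchanged
print(posterior_efficacy(prior, n_trials=10, power=0.8))  # case b): ~1e-7 - serious doubt
```

On this toy model, case a) leaves the prior untouched, while in case b) each well-powered but silent trial cuts the odds that the drug works by a factor of five.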

However, trying to apply this intuition in practice is not very easy, largely because most unsuccessful trials are not published. Okay, so clinical trials registries offer a possible solution to this difficulty, but this assumes that data can be collected comprehensively. Entirely anecdotally, I’ve had several experiences of finding seriously incomplete data in apparently compulsory clinical trials registries, and plan to discuss this in a future blog post. But for whatever reason, it’s often not possible to figure out just how much unpublished research there has been in particular cases.

This is one reason that I’ve started thinking more historically about this issue. I’ve already talked on this blog about the case of cervical cancer. To briefly recap, cervical cancer is now thought to be caused by infection with human papillomavirus (HPV). However, for the two decades between about 1965 and 1984, cervical cancer was thought to be caused by infection with an unrelated virus known as herpes simplex virus (HSV). From the first suggestion that HSV might cause cervical cancer, its causal role was thought to be highly plausible, largely because of the roles played by herpes viruses in causing cancers in animals. By analogy with these animal tumours, an extensive research programme developed around HSV that was predicated on investigating its (possible) aetiological role in cervical cancer.

What’s interesting here is that this unsuccessful research on HSV produced many publications. My informal PubMed estimate turned up something like 400 publications on the subject, bounded by the first detection of HSV in cervical cancer tissue (Naib et al 1966) and the pair of reports of the Prague study that showed no correlation between cervical cancer and HSV (Vonka et al 1984a and b). Strictly, Prague was not the end, and, perhaps because of the length of journal publication cycles, articles supporting the herpes hypothesis continued to appear after this devastatingly negative report was published (see Aurelian 1984, and hope fervently that history isn’t so unkind to us).

Something that is notable about all these publications is that they feature precious few results, at least with respect to the aetiological question regarding HSV and cervical cancer. An excellent example is the set of papers found in a special issue of the journal Cancer Research from 1973, which contains the proceedings of a Symposium, sponsored by the American Cancer Society, on Herpesvirus and Cervical Cancer, which took place at the Key Biscayne Hotel, Key Biscayne, Florida. Instead, most of these papers are about methods and models (see e.g. Sever 1973 for a typical example, and Klein 1973 and Goodheart 1973 for overviews). Very few of them directly implicate HSV in the genesis of cancer of the cervix. Yet despite this, HSV remained by far the most plausible cause of cervical cancer to cancer-virus researchers at the time.

In the seminar paper next week, my aim is to try and understand this persistent absence of evidence in the context of recent research into agnotology – culturally induced ignorance or doubt. However, the emphasis in much of this work is firmly on ignorance as “something that is made, maintained, and manipulated” (Proctor and Schiebinger 2008:8). Typical cases discussed in the agnotology literature – such as military classification of documents, or the doubt deliberately cast on the causal link between smoking and lung cancer – are characterised by the deliberate obscuring of knowledge by individuals or organisations. This is not so for cervical cancer, in which a persistent absence of knowledge seems to have been ignored, rather than manufactured. Here, I therefore explore an epistemic question concerning agnotology: when should persistent absence of evidence make us think sceptically about a particular hypothesis?

One historian’s tidbit: the Key Biscayne Hotel, Key Biscayne, Florida, where the cervical cancer symposium was held, was just a stone’s throw from the “Florida White House”, the waterfront compound where Richard Nixon spent a good deal of time during December 1972 as the Watergate scandal began to build. What a time and place to be dealing with secrets, and their dissolution.

[Image: “Richard M. Nixon, ca. 1935 – 1982 – NARA – 530679” – U.S. National Archives and Records Administration. Public domain, via Wikimedia Commons.]

References

Aurelian L. 1984. Herpes simplex virus type 2 and cervical cancer. Clinics in Dermatology. 2(2):90-9.

Goodheart, CR. 1973. Summary of informal discussion on general aspects of herpesviruses. Cancer Research. 33(6): 1417-1418.

Klein, G. 1973. Summary of Papers Delivered at the Conference on Herpesvirus and Cervical Cancer (Key Biscayne, Florida). Cancer Research. 33(6): 1557–63.

McIntyre, P. 2005. Finding the viral link: the story of Harald zur Hausen, Cancer World. July-August 2005: 32-37.

Naib, ZM., Nahmias, AJ. and Josey, WE. 1966. Cytology and Histopathology of Cervical Herpes Simplex Infection. Cancer. 19(7): 1026–31.

Proctor, RN. and Schiebinger, L. 2008. Agnotology: the making and unmaking of ignorance. Stanford University Press.

Sever, JL. 1973. Herpesvirus and Cervical Cancer Studies in Experimental Animals. Cancer Research. 33(6): 1509-10.

Sober, E. 2009. Absence of evidence and evidence of absence: Evidential transitivity in connection with fossils, fishing, fine-tuning, and firing squads. Philosophical Studies. 143(1): 63-90.

Strevens, M. 2009. Objective evidence and absence: comment on Sober. Philosophical Studies. 143(1): 91-100.

Vonka V, Kanka J, Jelínek J, Subrt I, Suchánek A, Havránková A, Váchal M, Hirsch I, Domorázková E, Závadová H, et al. 1984. Prospective study on the relationship between cervical neoplasia and herpes simplex type-2 virus. I. Epidemiological characteristics. International Journal of Cancer. 33(1): 49-60.

Vonka V, Kanka J, Hirsch I, Závadová H, Krcmár M, Suchánková A, Rezácová D, Broucek J, Press M, Domorázková E, et al. 1984. Prospective study on the relationship between cervical neoplasia and herpes simplex type-2 virus. II. Herpes simplex type-2 antibody presence in sera taken at enrollment. International Journal of Cancer. 33(1):61-6.

Fun with reference classes

I’m currently working on a paper that I’m presenting at the Philosophy of Science Association biennial meeting in Chicago next month. While I’m making slow progress on the paper, I’ve discovered a couple of examples of practical reasoning about evidence that you might find interesting.

The paper itself deals with a (pretty hoary) philosophical problem known as the reference class problem. Briefly, this describes a difficulty about inferring the probability of individual events by relating those individual events to a group of similar events, or a reference class. That’s definitely in philosophese, so perhaps a nice example would make things clearer. My favourite comes from Connor Cummings (who wrote an excellent BSc dissertation on reference classes), and is about house fires. Say that I want to estimate the probability of my house burning down during the next year. Which statistics should I look at? Well, I could look at those statistics that describe the number of houses that burn down each year in the UK as a whole. Or, perhaps, I could look at statistics that deal with houses that are built from bricks? Alternatively, perhaps I should look for figures that describe the chances of houses with blue front doors being consumed by flames? Or (even) statistics for brick-built houses in London that have blue front doors?

Each of these statistics is likely to provide a different probability estimate. This means that – depending on our choice of reference class – we will come up with very different estimates of the probability of my house burning down next year. This is just the kind of thing that might make an insurance agent very unhappy. Worse, though, is to come: given that each class might give a different estimate of our individual probability, which should we prefer? None of them is straightforwardly wrong, because they all describe groups that are in some respects similar to my actual house. Assuming that we could generate reliable statistics for each one, the choice between them seems to be a matter of subjective preference. In other words, there doesn’t seem to be an objectively correct choice of reference class.
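To make the difficulty concrete, here is a toy calculation. Every rate below is made up; the point is only that each reference class containing my house yields a different answer, with nothing in the data itself to arbitrate between them:

```python
# Hypothetical annual fire rates for four reference classes, all of
# which contain my (brick-built, blue-doored, London) house. The
# numbers are invented purely for illustration.

reference_classes = {
    "all UK houses":                              0.0010,
    "brick-built houses":                         0.0008,
    "houses with blue front doors":               0.0012,
    "brick-built London houses with blue doors":  0.0005,
}

for name, rate in reference_classes.items():
    print(f"P(my house burns down this year | {name}) = {rate:.4f}")
```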

This is the reference class problem, and it has engaged philosophers of science for at least 65 years (Reichenbach 1949). My aim while putting together my PSA paper, though, is not to try and formulate some novel solution to the problem, but instead to talk about some of the solutions to this problem that have been employed in scientific practice. I was very interested to learn that a recent piece of guidance from NICE had suggested that different prescription practices should be adopted for hypertension sufferers of different ages, and from different ethnic backgrounds. The usual first-line treatment for high blood pressure in people under 55 would be an ACEI or ARB:

1.6.6 Offer people aged under 55 years step 1 antihypertensive treatment with an angiotensin-converting enzyme (ACE) inhibitor or a low-cost angiotensin-II receptor blocker (ARB)…(NICE 2011: 17)

However, prescription practices should vary because of both age and ethnicity:

1.6.8 Offer step 1 antihypertensive treatment with a calcium-channel blocker (CCB) to people aged over 55 years and to black people of African or Caribbean family origin of any age. If a CCB is not suitable, for example because of oedema or intolerance, or if there is evidence of heart failure or a high risk of heart failure, offer a thiazide-like diuretic. (NICE 2011: 17)

I’d like to suggest that this difference in recommended prescribing practices reflects some interesting reference class work on the part of NICE. However, it seems hard to align this kind of thinking with the more philosophical approaches to the reference class problem that I know. Here, I’m largely thinking of Salmon’s (1971) suggestion that we should prefer homogeneous reference classes of one kind or another. But we know that neither age nor ethnicity forms a homogeneous reference class. Yet (as far as NICE is concerned) these groups are intended to behave like homogeneous reference classes, in that a) they are intended to give unequivocal guidance as to an individual’s reference class membership, and b) membership of one of these reference classes changes individual probability estimates. So what are the grounds for this clinical guidance confidently picking out these groups?
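To make the reference class structure explicit, here is a toy encoding of the two recommendations quoted above. The function and its simplifications are mine, not NICE’s: the real guidance carries many further conditions (intolerance, oedema, heart failure risk, and so on), and the quoted text speaks of ‘under 55’ and ‘over 55’, leaving age exactly 55 ambiguous, so the boundary below is my arbitrary choice:

```python
# A toy encoding of NICE CG127 recommendations 1.6.6 and 1.6.8 as quoted
# above. Highly simplified: real guidance has many further conditions.

def step1_antihypertensive(age: int, african_or_caribbean_origin: bool) -> str:
    """Recommended step 1 drug class for a (drastically simplified) patient."""
    # 1.6.8: CCB for over-55s and for black people of African or
    # Caribbean family origin of any age. Age exactly 55 is ambiguous
    # in the quoted text; it is arbitrarily grouped with the older class here.
    if age >= 55 or african_or_caribbean_origin:
        return "calcium-channel blocker (CCB)"
    # 1.6.6: ACE inhibitor or low-cost ARB for the under-55s.
    return "ACE inhibitor or low-cost ARB"

print(step1_antihypertensive(48, False))  # ACE inhibitor or low-cost ARB
print(step1_antihypertensive(62, False))  # calcium-channel blocker (CCB)
print(step1_antihypertensive(48, True))   # calcium-channel blocker (CCB)
```

Each patient falls unequivocally into exactly one class, and class membership alone changes the recommendation – which is just the homogeneous-reference-class behaviour described above.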

While looking for possible solutions to this difficulty, which I’ll have to leave hanging for the time being, I ran into some very interesting work on the reference class problem in the law. That I had no idea that the reference class problem was something that lawyers argued about is probably more an indicator of my ignorance of the law than anything else, but I was surprised to find several different ways of resolving (or, at least, of arguing about) reference class difficulties in legal practice. One excellent introduction is the paper by Cheng (2009) in the Columbia Law Review. This also contains a brilliant example of the reference class problem as applied to international drug smuggling, which alone is worth reading the paper for. Anyway, the substance of Cheng’s argument is that inference based on reference classes is structurally very similar to regression analysis. This means, I think, that the reference class problem can be regarded as a special case of the model selection problem. In turn, this means that we can employ established techniques, developed to deal with the problem of model selection, to pick between different reference classes in a principled way. While the details of these techniques – the main one discussed in Cheng’s paper is Akaike’s Information Criterion (AIC) – are not something that I’m terribly familiar with, this approach does appear to offer practitioners (legal, in this case) the advantage of at least being able to pick between different reference classes in a consistent manner. I wonder if something similar might be developed for the medical context…
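To give a flavour of how this might work, here is a minimal sketch of AIC-based selection between reference class schemes, using AIC = 2k − 2 ln L. It is my gloss on the general approach rather than Cheng’s own worked example, and the fire counts and partitions are entirely invented:

```python
# Model selection over reference class schemes via AIC, on invented
# data. Each "model" is a partition of cases into cells, with one
# fitted fire rate per cell; finer partitions are penalised for their
# extra parameters.

from math import log

def cell_log_likelihood(events: int, n: int) -> float:
    """Maximised Bernoulli log-likelihood of one cell, with rate fitted as events/n."""
    p = events / n
    ll = 0.0
    if events:
        ll += events * log(p)
    if n - events:
        ll += (n - events) * log(1 - p)
    return ll

def aic(cells) -> float:
    """cells: list of (events, n) pairs; one fitted rate parameter per cell."""
    k = len(cells)
    ll = sum(cell_log_likelihood(e, n) for e, n in cells)
    return 2 * k - 2 * ll

# Hypothetical (fires, houses) counts under three candidate partitions.
models = {
    "one class: all houses":      [(30, 20000)],
    "two classes: brick / other": [(10, 12000), (20, 8000)],
    "four classes: brick x door": [(2, 3000), (8, 9000), (6, 2000), (14, 6000)],
}

for name, cells in models.items():
    print(f"{name}: AIC = {aic(cells):.1f}")

# Lowest AIC wins: a finer partition is preferred only when its extra
# fitted rates buy a genuinely better fit to the data.
```

Whatever its other merits, a rule like this at least makes the choice of reference class consistent and contestable, rather than a matter of bare preference.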

References

Cheng, EK. 2009. A Practical Solution to the Reference Class Problem. Columbia Law Review. 109(8): 2081-2105.

Hájek, A. 2007. The reference class problem is your problem too. Synthese, 156(3): 563-585.

NICE (2011). CG127: Hypertension: clinical management of primary hypertension in adults. National Institute for Health and Clinical Excellence, London. Available from: http://www.nice.org.uk/guidance/cg127/resources/guidance-hypertension-pdf

Reichenbach, H. 1949. The Theory of Probability. University of California Press.

Salmon, W. 1971. Statistical Explanation. In Salmon, W. (Ed.), Statistical Explanation and Statistical Relevance. University of Pittsburgh Press