
Why words matter. Some thoughts on cancer and #badluck

An article that recently appeared in Science investigates the correlation between the frequency of tumours (all types, taken together) and the number of stem cell divisions. The correlation, the authors of the article say, is very strong. This is potentially very significant and informative, except that, unfortunately, on the authors’ account only a very small proportion of the variation in cancer risk can be attributed to environmental factors or inherited predisposition. The rest – which is a big chunk to explain – is put down to bad luck. I refer the reader to a clear article by Paolo Vineis, where he expresses doubts about the methodology of the paper.
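For readers who want to see how a ‘very strong correlation’ turns into a claim about ‘the proportion of the variation’ explained, here is a minimal sketch with invented numbers (nothing below comes from the actual paper): the squared correlation coefficient, r², is what gets read as variation explained.

```python
# Minimal sketch with invented data: how a strong log-log correlation is
# converted into a claim about "proportion of variation explained" (r squared).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tissues: log10 lifetime stem cell divisions and log10 lifetime
# cancer risk. These values are illustrative only, not data from the paper.
log_divisions = rng.uniform(5, 12, size=30)
log_risk = 0.5 * log_divisions - 7 + rng.normal(0, 0.7, size=30)

r = np.corrcoef(log_divisions, log_risk)[0, 1]   # Pearson correlation on log scales
print(f"r = {r:.2f}; variation 'explained' = r^2 = {r**2:.2f}")
```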

But this is not the point I want to reflect on. I am instead most interested in the explicit use of the term ‘bad luck’, which occurs both in the text of the authors and in the editor’s comment accompanying the article. Needless to say, this is the only aspect that the media picked up (just google ‘bad luck cancer’ and you will realise how much this is resonating).

There is a lot that we don’t know about cancer (and about many other phenomena). But is it ‘bad luck’ if we get ill and have no cause to blame? Is everything we are unable to explain today due to chance? What kind of message do we convey to the public in this way? The issue is of course delicate, because we find – at the other end – another extreme position: scientism. And surely we don’t want to fall on that side either. While it is intellectually honest to say that we don’t know something, it is misleading to call it bad luck. In the case of cancer, the message can be devastating. If bad luck plays a major role in cancer development, why bother with prevention and screening? (This doesn’t imply, however, that all preventive actions are sound or effective, or that screening programmes always lead to correct identification of cases of cancer.)

Also, what the article labels as ‘bad luck’ may in fact be due to factors that have not been thoroughly explored, for instance socio-economic or behavioural factors. I’m not talking about reducing the ‘social’ part of disease to some allegedly objective, bio-chemical element. I’m really talking about understanding the lifeworld of individuals, which includes health and disease.

In sum, ‘bad luck’ is really a bad choice for a scientific article, not just because it shows epistemological clumsiness, but also because it may have quite undesirable consequences on the public understanding of science and on people’s behaviour.

End of the year

It’s almost the last workday of 2014 for me today, and a busy last few weeks means that I’m looking forward to the Christmas break. All things being well, I’ll be able to spend some quality time with two books that I’ve been meaning to read for ages.

Some holiday reading…

Götzsche, Peter. 2013. Deadly Medicines and Organised Crime. Radcliffe.

The first of these is not, I’m afraid, cheerful holiday reading. In fact, if you’d like a brief summary of the most important bit of Götzsche’s argument, the title of a recent paper – Our prescription drugs kill us in large numbers – tells you most of what you need to know. The book follows on from this: page 1 begins with the quote “drugs are the third leading cause of death after heart disease and cancer.”

This is shocking stuff, which becomes even more troubling because of the lucid way that Götzsche lays out his evidence in support. The mechanisms by which drugs became so dangerous are largely down to the pharmaceutical industry. I’m planning a proper review of the book for some time in the new year, but by way of a taster, Götzsche argues that drug companies are largely free to mislead clinicians about both the safety and efficacy of drugs, and that this impunity has largely come about through a shortfall of regulatory oversight.

This brings me to my second lump (affectionately!) of holiday reading, which deals with the big-daddy of the pharma regulators: the FDA. Götzsche argues that the FDA has become an environment that is completely subordinate to the drug industry. But I think that there might be more to say on the historical side here. Luckily, so did Daniel Carpenter, whose book deals with just that.

Carpenter, Daniel. 2014. Reputation and power: organizational image and pharmaceutical regulation at the FDA. Princeton University Press.

This book came out of the FDA project at Harvard. This seems surprising to me, given that the book itself was published by a different Ivy League press. But no matter what intrigue may have led to this route of publication, I’m told that it is a startlingly thorough and penetrating account of how the FDA came to be. Until I’ve spent much more time with it, though, I’ll hold off trying to summarise 700-odd pages of detailed argument.

A New Year’s resolution

[Image: AllTrials logo]

Whatever you think of Peter Götzsche’s overall analogies between the pharmaceutical industry and members of the Soprano family, I think that trials conducted in secret have to be of concern. That’s why I’m also making a New Year’s resolution to be more vocal in my support of the AllTrials campaign. You can find out more on their website, but the least that you need to know is that…

AllTrials calls for all past and present clinical trials to be registered and their full methods and summary results reported.

I’m signing their petition, and urge you to sign too, largely because finding things out in medicine is already hard enough without concealed trial results.

New knowledge and what to measure

I have been struck this week by how issues to do with evidence of mechanism have arisen in all my classes, and how students from multiple scientific backgrounds have set about attempting to understand evidence in science in very different ways.

On my masters course, students studying causality were looking in particular at evidential pluralism in health: at the idea that there are many ways in which evidence of correlation, even from well-conducted large-scale observational studies or randomised controlled trials, can mislead us, and that evidence of mechanism can be a useful complement.

My masters students come from backgrounds in medicine, biochemistry, geology and social science, and were struck by what a complicated, integrated thing evidence of causality is. Discussion ranged over issues that cross different sciences, before homing in on the question, pertinent to all trials, of what to measure and how to measure it.

We began by looking at historical Medical Research Council trials of streptomycin, and why they chose, during the trials, to study X-rays of patients’ lungs, and also to test bacteria in the treatment group to see whether they were developing resistance to streptomycin over the course of the 4 months of treatment. It is difficult to see how such choices could be made except in terms of a background theory of the mechanism of action of both disease and cure. A student argued, further, that such choices are always contextual, because there is never going to be a one-size-fits-all story for when *no evidence of mechanism* counts as *evidence of no mechanism* – for when you have looked well enough for a hypothesised mechanism to be sure that it is not there.

Discussion then ranged to more recent attempts to investigate the social determinants of health, including claims such as: socioeconomic status causes ill health, and stress causes ill health. Another student asked how we should conceive of such questions, pointing out that we try to investigate such claims by studying how lower-level variables, like housing, education and so on, cause ill health. The student argued that we often look at the same or very similar low-level variables to try to understand how these apparently different social factors could cause ill health – and this may well be because we don’t know what the mechanism is. The idea developed by the class was that there are always going to be cases where – when we don’t know – the process of coming to understand mechanisms will be intertwined with the process of deciding what variables to measure when looking for correlations.

On the next day, there was a class of third-year students studying philosophy of natural sciences, coming to the class from science and technology studies, physics, medicine, biochemistry, social sciences and philosophy. We were discussing data science and curation practices, including visualisation techniques that let you assess data. I was struck by a unifying dream, here and in some approaches to medical evidence, of data-driven science, objective and without bias.

The class examined Leonelli’s discussion of the various ways in which embodied knowledge of the target material, technologies and practices of the disciplines studying, for example, Arabidopsis, are essential to getting big interactive, interdisciplinary databases of data on Arabidopsis to work.

Again, students noticed how important our theoretical understanding, alongside practical expertise, is in allowing us to build and use technology, and choose the data points to put into databases – including choosing how to tag data with metadata about its origins, so allowing data to ‘travel’, in Leonelli’s evocative terminology.
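A toy sketch of the kind of tagging the class had in mind (my own illustration, not Leonelli’s example; the field names are hypothetical): a data point can only ‘travel’ into a shared database if it carries structured metadata about where, how and by whom it was produced.

```python
# Toy sketch of a curated data point carrying provenance metadata so that it
# can "travel" between labs and databases. Field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class CuratedDataPoint:
    value: float               # the measurement itself
    organism: str              # e.g. "Arabidopsis thaliana"
    assay: str                 # how the value was produced
    lab: str                   # where it was produced
    protocol_version: str      # which practices and instruments were followed
    collected_on: str          # when (ISO date string)
    notes: dict = field(default_factory=dict)  # any further provenance

point = CuratedDataPoint(
    value=3.2,
    organism="Arabidopsis thaliana",
    assay="leaf-area imaging",
    lab="hypothetical-lab-01",
    protocol_version="v2.1",
    collected_on="2014-06-12",
)
print(point)
```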

So, in a certain sense these two classes were concerned with exactly the same problem: until we have some theoretical and possibly also practical understanding – that comes in medicine from understanding mechanisms – we will not converge even on the basic data-points, the basic variables to measure, to choose to put into our databases or our study design in order even to begin data-driven science.

A final thank-you to my students for a very interesting week!

References

Leonelli, Sabina. 2014. Volume 22, Issue 3 (Fall 2014): Special Issue on Simulation, Visualization, and Scientific Understanding, Henk W. De Regt and Wendy S. Parker (guest editors).

#interdisciplinarity at the service of #ebm+

At the beginning of November, a large part of the philosophy of science community convened in Chicago for the biennial meeting of the Philosophy of Science Association. This is usually combined with the annual meeting of the History of Science Society. This year, the programme featured two sessions on interdisciplinarity, one on the history and the other on the philosophy of interdisciplinarity, organised by Hanne Andersen.

I contributed to the philosophy of interdisciplinarity with a paper on the social causes of diseases, which I co-authored with Mike Kelly and Rachel Kelly. The paper provides an argument for a mixed aetiology of diseases, especially non-communicable diseases (or, as it was suggested during Q&A, socially communicable diseases). We argue that recent attempts to include socio-economic-behavioural factors do not go far enough, as the explanatory import of these factors is not properly or sufficiently spelled out. If you are interested in the paper, look at the slides, or email me for the latest draft.

But there is another aspect that I wish to focus on in this post. What can ‘interdisciplinarity’ contribute – if anything – to ebm+? A terminological note first. Without too much sophistication*, I take interdisciplinarity to be the collaboration of scholars active in different fields in order to shed light on a shared problem. In this case, I am active in the area of philosophy of science, Mike Kelly has a sociological background and is active in public health, and Rachel Kelly works in molecular epidemiology and has expertise in epigenetics. Our shared problem is offering an account of the mixed aetiology of diseases, one that integrates the perspectives coming from sociology (via the concept of ‘lifeworld’) and bio-medicine, and that ‘exploits’ conceptualisations of mechanisms, evidence, or causality developed in philosophy of science.

In this sense, interdisciplinarity is, I think, at the service of ebm+, where the plus precisely indicates what we might add to available eb-approaches in order to improve our understanding of health and disease, or to design policies that reduce the burden of disease. The article mentioned in this post is but one example of how to work in an ‘ebm+ perspective’.

 

* Interdisciplinarity studies is a rich and complex area of research. The kind of interdisciplinarity alluded to here is just one of its many possible facets.

Not-knowing about the aetiology of cervical cancer: a puzzle about absence of evidence

Brendan Clarke, UCL

b.clarke@ucl.ac.uk


I’m due to give a departmental seminar at HPS in Cambridge on the 27th November on ignorance. The idea – which, as usual, I’m still fleshing out – is to explore a question about evidence in medicine using a mixture of historical and philosophical work that deals, loosely, with questions about ignorance. I’ll get to an outline of this integrated HPS work in a moment, once I’ve outlined the problem which was my starting point for this research.

This is likely to be a familiar query to philosophers (and scientific practitioners): what’s the difference between absence of evidence and evidence of absence? This is a rather well-worn question, and the usual answer is that there is some kind of important difference between them (see Sober 2009; Strevens 2009 for examples). Rather than attempting to add to the stock of answers to this general problem, though, I wanted to explore a more constrained version that we might encounter when thinking about EBM.

Picture two scenarios where we are looking for evidence to support a claim that a particular drug is an effective treatment for a particular disease. Should we think differently about cases:

a) where there is no evidence to be found because no research has been conducted

and

b) where there is no evidence to be found, despite much research having been conducted?

My intuition is strongly that we should think differently about these cases when it comes to considering what we should believe about the possible efficacy of our drug. In case b), we should seriously doubt whether the drug is effective, whereas in case a) we just don’t know whether it is or not. In general, I think that the story about how we happen to not know about a particular hypothesis should constrain our beliefs about that hypothesis, and one important part of this story concerns an account of how hard researchers have looked for it.
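To make that intuition a little more concrete, here is a minimal Bayesian sketch (my own illustration, with entirely hypothetical numbers for the prior, study power and false-positive rate): with no studies at all, belief in efficacy stays at the prior, while a run of well-conducted null studies pushes it sharply downwards.

```python
# Toy Bayesian illustration of cases (a) and (b) above. All numbers are
# hypothetical assumptions, not estimates from any real trial literature.
def posterior_effective(n_null_studies, prior=0.5, power=0.8, false_pos=0.05):
    """P(drug is effective | n studies were run and none found evidence of an effect)."""
    p_data_if_effective = (1 - power) ** n_null_studies        # every study missed a real effect
    p_data_if_ineffective = (1 - false_pos) ** n_null_studies  # every study was correctly null
    numerator = prior * p_data_if_effective
    return numerator / (numerator + (1 - prior) * p_data_if_ineffective)

print(posterior_effective(0))   # case (a): no research, so belief stays at the prior (0.5)
print(posterior_effective(10))  # case (b): ten null studies, so belief drops close to zero
```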

However, trying to apply this intuition to practice is not very easy, largely because most unsuccessful trials are not published. Okay, so clinical trials registries offer a possible solution to this difficulty, but this assumes that data can be collected comprehensively. Entirely anecdotally, I’ve had several experiences of finding seriously incomplete data in apparently compulsory clinical trials registries, and plan to discuss this in a future blog post. But for whatever reason, it’s often not possible to figure out just how much unpublished research there has been in particular cases.

This is one reason that I’ve started thinking more historically about this issue. I’ve already talked on this blog about the case of cervical cancer. To briefly recap, cervical cancer is now thought to be caused by infection with human papillomavirus (HPV). However, for the two decades between about 1965 and 1984, cervical cancer was thought to be caused by infection with an unrelated virus known as herpes simplex virus (HSV). From the first suggestion that HSV might cause cervical cancer, its causal role was thought to be highly plausible, largely because of the roles played by herpes viruses in causing cancers in animals. By analogy with these animal tumours, an extensive research programme developed around HSV that was predicated on investigating its (possible) aetiological role in cervical cancer.

What’s interesting here is that this unsuccessful research on HSV produced many publications. My informal estimate on PubMed turned up something like 400 publications on the subject, bounded by the first HSV detection in cervical cancer tissue (Naib et al. 1966) and the pair of reports of the Prague study that showed no correlation between cervical cancer and HSV (Vonka et al. 1984a and b). Strictly, Prague was not the end, and perhaps because of the length of journal publication cycles, articles supporting the herpes hypothesis continued to appear after this devastatingly negative report was published (see Aurelian 1984, and hope fervently that history isn’t so unkind to us).
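(For what it’s worth, a rough count of this kind can be put together programmatically. Here is a minimal sketch using Biopython’s Entrez interface to the NCBI E-utilities; the search term and date bounds are my own illustrative assumptions, not the query I actually used, so the number it returns will differ from my informal estimate.)

```python
# Rough sketch of an informal PubMed count via NCBI E-utilities (Biopython).
# The query string and date window below are illustrative assumptions.
from Bio import Entrez

Entrez.email = "you@example.com"  # NCBI asks for a contact address

handle = Entrez.esearch(
    db="pubmed",
    term='"herpes simplex" AND ("cervical cancer" OR "cervical neoplasia")',
    datetype="pdat",   # filter on publication date
    mindate="1966",
    maxdate="1984",
    retmax=0,          # we only want the total count, not the record IDs
)
record = Entrez.read(handle)
handle.close()
print("publications found:", record["Count"])
```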

Something that is notable about all these publications is that they feature precious few results, at least with respect to the aetiological question regarding HSV and cervical cancer. An excellent example is the set of papers found in a special issue of the journal Cancer Research from 1973, which contains the proceedings of a Symposium, sponsored by the American Cancer Society, on Herpesvirus and Cervical Cancer, which took place at the Key Biscayne Hotel, Key Biscayne, Florida. Instead, most of these papers are about methods and models (see e.g. Sever 1973 for a typical example, and Klein 1973 and Goodheart 1973 for overviews). Very few of them directly implicate HSV in the genesis of cancer of the cervix. Yet despite this, HSV remained by far the most plausible cause of cervical cancer to cancer-virus researchers at the time.

In the seminar paper next week, my aim is to try to understand this persistent absence of evidence in the context of recent research into agnotology – culturally induced ignorance or doubt. However, the emphasis in much of this work is firmly on ignorance as “something that is made, maintained, and manipulated” (Proctor and Schiebinger 2008: 8). Typical cases discussed in the agnotology literature – such as military classification of documents, or the doubt deliberately cast on the causal link between smoking and lung cancer – are characterised by the deliberate obscuring of knowledge by individuals or organisations. This is not so for cervical cancer, in which a persistent absence of knowledge seems to have been ignored, rather than manufactured. Here, I therefore explore an epistemic question concerning agnotology: when should persistent absence of evidence make us think sceptically about a particular hypothesis?

One historian’s tid-bit: the Key Biscayne Hotel, Key Biscayne, Florida, where the cervical cancer symposium was held, was just a stone’s throw from the “Florida White House”, the waterfront compound where Richard Nixon spent a good deal of time during December 1972 as the Watergate scandal began to build. What a time and place to be dealing with secrets, and their dissolution.

[Image: “Richard M. Nixon, ca. 1935 – 1982” (NARA 530679), U.S. National Archives and Records Administration. Public domain, via Wikimedia Commons.]

References

Aurelian L. 1984. Herpes simplex virus type 2 and cervical cancer. Clinics in Dermatology. 2(2):90-9.

Goodheart, CR. 1973. Summary of informal discussion on general aspects of herpesviruses. Cancer Research. 33(6): 1417-1418.

Klein, G. 1973. Summary of Papers Delivered at the Conference on Herpesvirus and Cervical Cancer (Key Biscayne, Florida). Cancer Research. 33(6): 1557–63.

McIntyre, P. 2005. Finding the viral link: the story of Harald zur Hausen, Cancer World. July-August 2005: 32-37.

Naib, ZM., Nahmias, AJ. and Josey, WE. 1966. Cytology and Histopathology of Cervical Herpes Simplex Infection. Cancer. 19(7): 1026–31.

Proctor, RN. and Schiebinger, L. 2008. Agnotology: the making and unmaking of ignorance. Stanford University Press.

Sever, JL. 1973. Herpesvirus and Cervical Cancer Studies in Experimental Animals. Cancer Research. 33(6): 1509-10.

Sober, E. 2009. Absence of evidence and evidence of absence: Evidential transitivity in connection with fossils, fishing, fine-tuning, and firing squads. Philosophical Studies. 143(1): 63-90.

Strevens, M. 2009. Objective evidence and absence: comment on Sober. Philosophical Studies. 143(1): 91-100.

Vonka V, Kanka J, Jelínek J, Subrt I, Suchánek A, Havránková A, Váchal M, Hirsch I, Domorázková E, Závadová H, et al. 1984a. Prospective study on the relationship between cervical neoplasia and herpes simplex type-2 virus. I. Epidemiological characteristics. International Journal of Cancer. 33(1): 49-60.

Vonka V, Kanka J, Hirsch I, Závadová H, Krcmár M, Suchánková A, Rezácová D, Broucek J, Press M, Domorázková E, et al. 1984b. Prospective study on the relationship between cervical neoplasia and herpes simplex type-2 virus. II. Herpes simplex type-2 antibody presence in sera taken at enrollment. International Journal of Cancer. 33(1): 61-6.