Has the tide turned? A new culture for Responsible Metrics is on the horizon

Guest blog: Katrine Sundsbø, Scholarly Communications and Research Support Manager, University of Essex

Katrine Sundsbø reflects on the UK Forum for Responsible Metrics event, held on the 8th February 2018.

The topic of ‘responsible metrics’ has gone from hot to boiling since RCUK signed DORA (the San Francisco Declaration on Research Assessment) on Wednesday 7th February. This means that, as a funding organisation, it is committing to good practice in the use of metrics in research assessment. The timing of the UK Forum for Responsible Metrics event on Thursday 8th February could therefore not have been better.

Sarah Slowe recently wrote a piece on responsible metrics, and since the discussion around responsible metrics will have to involve collaboration between institutions, it was timely that I was invited to write a guest blog about the event. Here is my (subjective) view of the key points from “The turning tide: A new culture of responsible metrics for research”.

Should all organisations sign DORA?

Though it is definitely a good thing that RCUK has signed DORA, it became apparent at the event that signing DORA is the easy part; the real work comes after signing. This is perhaps why the majority of the 96 organisations that responded to the survey on the Culture of Research Metrics in Research Organisations have not signed DORA. Even most of the organisations that have signed the declaration have not yet taken any action towards responsible metrics. So what is the point of signing?

Some institutions might not feel ready to implement responsible metrics, and therefore do not want to sign. This became apparent in one of the discussions I joined during the breakout sessions: having only just started to look at metrics, some institutions worried that embracing responsible metrics now would be like running before learning to walk. On the other hand, Adam Tickell suggested in his closing remarks that we should embrace the guidelines first and then implement action. As an example, he mentioned the change in seatbelt use: the rule had to come first. This is probably why a lot of good practice in academia is only implemented after funders make it a requirement (Open Access, for example). After all, we are human, and we will not change unless we have to. Signing DORA is therefore a good starting point, as long as we acknowledge that it is only that: a starting point.

A researcher’s perspective

The most intense part of the day was when researchers shared their points of view on metrics. Researchers are often not included in debates about research assessment, especially regarding the use of metrics, yet they all have to comply with it, and they are expected to publish “a lot” (whatever that means). Researchers are also expected to publish in high-impact journals. This can be difficult if you work in a subject area where all the journals have low impact factors, or if you are an Early Career Researcher. Further, should we even use journal impact factors? Managing to publish in a four-star journal does not mean that the publication itself suddenly becomes a four-star article. This is something that came up throughout the event: research must be assessed individually, not by crude metrics like the journal impact factor (or at least not by metrics alone).

Responsible metrics or responsible indicators?

As the discussion moved from differences between subject areas to the many ways of measuring research, the word ‘metrics’ itself began to be rejected. I agree that we need to change our approach to the way research is assessed, but will rejecting a well-established word achieve that, or is it just confusing? I have only just got used to the term ‘metrics’, and, as I mentioned earlier, people do not like change. Plus, ‘responsible indicators’ is a lot harder to say. However, I do agree with where this call for a change of terminology is coming from. Research assessment should not be based on quantitative measures alone; it should also draw on qualitative measures. I imagine this is something that will be welcomed by many, especially as it would give academics in the Humanities and Social Sciences a fairer assessment of their work. But how do you create such a measure and make it fit all? Perhaps we shouldn’t try to make it fit. One-size-fits-all never works with clothes, so why do we expect it to work when we compare completely different institutions/researchers/departments/journals (take your pick)?

What happens now?

I think many good solutions were suggested during the event. Lizzie Gadd offered several great ideas that many institutions could adapt, for example:

  • Creating frameworks as guidance on responsible metrics
  • Creating ‘Champions’
  • Training people who use metrics

We also need to share good practice with each other and make sure the movement is a transparent one. Metrics can be extremely useful, but we need to address their limitations and biases. There are far too many examples of bad practice when it comes to metrics; it is about time there was a collective movement towards good practice.
