Improving service starts with a leap of fact, not faith

An earlier version of this was first posted on October 10th 2014

Leap of Fact

  • What should we improve and why?
  • What has changed?
  • How, where and when do we improve things?
  • Who should we involve?

If we start to address these questions and filter out assumptions and preconceptions, we can make sensible decisions about changes that will have a genuinely positive effect on performance.

The world is not perfect, and we will rarely have the time and resources to gather a complete picture of what is happening. Nevertheless, it is important to seek out and analyse relevant data so that we can make reasonably robust assumptions about what to do.

There are two common failures of action; let's call them type 1 and type 2 errors (which is what statisticians call them). Both come down to misidentifying 'common causes' and 'special causes' of variation. Without understanding the difference we risk mere 'tampering', where we feel we are doing something useful but are actually making things worse (Deming, 1982).

“Common Causes”

Common cause situations are those where performance goes up and down over time and, if analysed properly, can be seen to vary within a relatively predictable pattern: if we change nothing, the performance level will most likely continue. Problems arise when someone thinks they see a real difference between data points when in fact no such difference exists. This is a type 1 error: we observe a 'change' which is really only natural background 'noise', yet we choose to act on it. For example, someone in the office achieves a great result whilst others do not. Is the difference because of the person, or something else in the wider context? Perhaps, as is often the case, they just got lucky and happened to be the one who achieved the good result. Next week it might be someone else. The analogy is a fire alarm going off when in fact there is no fire. It is easy to fall into type 1 errors, assuming highs and lows of performance which don't exist. This is a 'mistake of commission' – doing something that should not have been done (Ackoff et al, 2006).
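
To make the idea concrete, here is a minimal sketch (not from the original article) of how the natural process limits of an XmR individuals chart can be used to judge whether one person's 'great result' is anything more than common-cause noise. The weekly figures and the `process_limits` helper are hypothetical, purely for illustration.

```python
def process_limits(history):
    """Return (lower, upper) natural process limits from past results."""
    mean = sum(history) / len(history)
    # Average moving range between consecutive points.
    moving_ranges = [abs(b - a) for a, b in zip(history, history[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR (individuals) chart constant: 3 / d2, with d2 = 1.128.
    return mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

# Hypothetical weekly results for one team (illustrative numbers only).
weekly_results = [42, 38, 45, 40, 37, 44, 41, 39, 43, 40]
lower, upper = process_limits(weekly_results)

this_week = 46  # the "great result" someone is tempted to single out
if lower <= this_week <= upper:
    print("Within the natural process limits: most likely common-cause noise.")
else:
    print("Outside the limits: worth investigating as a possible special cause.")
```

Run on these illustrative numbers, the 'great result' of 46 falls comfortably inside the limits: acting on it would be exactly the tampering described above.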

“Special Causes”

Some special causes are obvious, for example a major increase or decrease in performance or a freak accident. Sometimes, however, hidden patterns of performance can indicate a real change which might easily go undetected if we treat each data point as a 'one off'. This is a type 2 error, a bit like a fire breaking out but the fire alarm not ringing. These genuine changes are due to 'special causes': something real impinging on the system from outside it. That means the solution also sits outside the system – redesigning what you have will not remove the cause; that is just meddling and will make things worse. For example, cycles of deteriorating then improving work output by one person might indicate an underlying special cause which needs to be addressed (health, for example), so meddling with the design of the work itself would be counterproductive. Furthermore, if the manager does not look at performance over time, these cycles might not be detected at all – on average they might look like a reasonable level of output. Ackoff calls this a mistake of omission – not doing something that should have been done.
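
A simple run rule shows how such a cycle can hide inside an apparently healthy average. The sketch below (again hypothetical, not from the article) flags eight or more consecutive points on one side of the historical mean, a common signal of a sustained shift, even though no single point looks alarming on its own.

```python
def longest_run_one_side(values, centre):
    """Length of the longest run of consecutive points on one side of `centre`."""
    longest = current = 0
    previous_side = 0
    for v in values:
        side = (v > centre) - (v < centre)  # +1 above, -1 below, 0 exactly on centre
        if side != 0 and side == previous_side:
            current += 1
        else:
            current = 1 if side != 0 else 0
        previous_side = side
        longest = max(longest, current)
    return longest

# Illustrative monthly output for one person: a quiet dip and recovery whose
# average (about 47) looks like a perfectly reasonable level of output.
output = [52, 53, 51, 52, 45, 44, 43, 44, 42, 43, 44, 43, 52, 53]
centre = sum(output) / len(output)

if longest_run_one_side(output, centre) >= 8:
    print("Sustained run detected: look for a special cause outside the work itself.")
else:
    print("No run signal: treat the variation as common cause for now.")
```

Here the eight-point dip triggers the signal even though every value sits well within any sensible limits, which is precisely the pattern a manager looking only at averages would miss.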

Of course, detecting the difference between special cause and common cause variation in performance requires new skills and disciplines of thinking. When you understand the organisation as a system, improving service starts with a leap of fact, not faith.

Reading:

Ackoff, R.L., Addison, H.J. and Bibb, S. (2006) Management f-Laws: How Organizations Really Work. Triarchy Press.

Deming, W.E. (1982) Out of the Crisis. MIT CAES, Cambridge, MA.

Seddon, J. (2005) Freedom from Command and Control. Vanguard Press, Buckingham, UK.