Improvement starts with a leap of fact, not faith

Simon Black

  • What should we improve and why?
  • What has changed?
  • How do we improve things, where … when?
  • Who should we involve?

If we start to address these questions and filter out assumptions and preconceptions, we can make some sensible decisions about changes that will have a positive effect on performance.

The world is not perfect, and we are unlikely to always have the time and resources to gather the complete picture of what is happening. Nevertheless, it is important that we seek out and analyse relevant data in order to make reasonably robust assumptions about what we can do.

There are two common failures of action; let’s call them Type 1 and Type 2 (which is what statisticians call them). Another way to frame them is as a failure to distinguish between ‘common causes’ and ‘special causes’ of variation; without understanding the difference we risk simply ‘tampering’. What we want to avoid is the delusion that we are doing something useful when we are actually only making things worse (Deming, 1982).

“Common Causes”

Common cause situations are those where performance goes up and down over time and, if analysed properly, can be seen to vary within a relatively predictable pattern: if we change nothing, the performance level will most likely continue. The problems arise when someone thinks they see a real difference between points of data when in fact no such thing exists. This is a Type 1 error: we observe a ‘change’ which is really only a natural effect of background ‘noise’, yet we choose to act on it. For example, someone in the team achieves a great result whilst others do not. Is the difference because of the person, or something else in the wider context? Perhaps, as is often the case, they just got lucky and happened to be the one that achieved the good result. Next week it might be someone else. The analogy is a fire alarm going off indicating a fire when in fact there is no fire. It is easy to fall into Type 1 errors, assuming highs and lows of performance which do not exist. This is a ‘mistake of commission’ – doing something that should not have been done (Ackoff et al. 2006).

“Special Causes”

Some special causes are obvious, for example a major increase or decrease in performance, or a freak accident. However, sometimes hidden patterns of performance can indicate a real change which might easily go undetected if we consider each data point as a ‘one-off’. This is a bit like a fire breaking out but the fire alarm not ringing. The fundamental point is that these genuine changes are due to ‘special causes’: something real which is impinging on the system. The issue here is that the solution sits outside the system – redesigning what you have will not address the cause; that is just meddling and will make things worse. For example, cycles of deteriorating then improving work output by one person might indicate an underlying special cause which needs to be addressed (health, for example), so meddling with the design of the work itself would be counterproductive. Furthermore, if the manager does not look at performance over time, these cycles might not be detected at all – on average the figures might look like a reasonable level of output. Ackoff calls this a mistake of omission – not doing something that should have been done.
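The practical tool for making this distinction in the Deming tradition is the Shewhart control chart. The post does not prescribe one, but as a minimal illustrative sketch, here is an individuals (XmR) chart calculation on made-up weekly output figures: points inside the natural process limits are common-cause noise (acting on them is the Type 1 error above), while a point outside the limits is the fire alarm signalling a special cause worth investigating.

```python
import numpy as np

def xmr_limits(values):
    """Natural process limits for an individuals (XmR) chart.

    The 3-sigma distance is estimated as 2.66 * the mean moving range,
    the standard XmR constant.
    """
    values = np.asarray(values, dtype=float)
    centre = values.mean()
    mr_bar = np.abs(np.diff(values)).mean()  # mean moving range
    return centre, centre - 2.66 * mr_bar, centre + 2.66 * mr_bar

# Hypothetical weekly output for one team member; week 12 is a genuine shift.
weekly_output = [52, 48, 55, 50, 47, 53, 49, 51, 46, 54, 50, 31]

centre, lcl, ucl = xmr_limits(weekly_output)
print(f"centre {centre:.1f}, natural process limits ({lcl:.1f}, {ucl:.1f})")

for week, value in enumerate(weekly_output, start=1):
    label = ("special cause - investigate" if not lcl <= value <= ucl
             else "common cause noise")
    print(f"week {week:2d}: {value:3d}  {label}")
```

The data and the single flagged week are hypothetical; in real use you need enough stable history to establish the limits before interpreting any single point against them.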

An example can be encountered in human–wildlife conflict. A ‘rogue’ animal may change its behaviour due to injury or illness and preferentially predate livestock for a period of time. If a decision is made to destroy the animal (or relocate it to a more remote area), should the same policy be applied to any animal which predates livestock? For the one-off animal a one-off intervention might succeed, but if it were repeated for every animal it would certainly be costly (relocation) and might make things worse (e.g. destroying every animal). Clearly identifying whether the rogue animal is an ‘exception’ or a ‘common cause’ is important.

Of course, detecting the difference between special cause and common cause variation in performance requires new skills and disciplines of thinking. When you understand the organisation as a system, improving service starts with a leap of fact, not faith.

Reading:

Ackoff, R.L., Addison, H.J. and Bibb, S. (2006) Management f-Laws: How Organizations Really Work. Triarchy Press.

Deming, W.E. (1982) Out of the Crisis. MIT CAES, Cambridge, MA.

Seddon, J. (2005) Freedom from Command and Control, Vanguard Press, Buckingham, UK.

Never forget this fact: There is no such thing as factual information

Simon Black

This blog title is provocatively paradoxical. The assumption is that something measured is something proved. This is a habit of thinking which we are trained, as scientists, to establish in our minds.

This is not the case.

In practice, when we decide to establish a fact, we define what it is and how it is to be measured, and then measure to verify it.

In deciding the measurement, we simply place a judgement – our opinion of reality – onto something that isn’t there. For example:

The label on a blanket reads “50 per cent wool”. What does this mean? Half wool, on the average, over this blanket, or half wool over a month’s production? What is half wool? Half by weight? If so, at what humidity? By what method of chemical analysis? How many analyses? The bottom half of the blanket is wool and the top half is something else. Is it 50 per cent wool? Does 50 per cent wool mean that there must be some wool in any random cross-section the size of a half dollar? If so, how many cuts shall be tested? How select them? What criterion must the average satisfy? And how much variation between cuts is permissible? Obviously, the meaning of 50 per cent wool can only be stated in statistical terms (Deming 1975).

Is it now becoming clear?

“Without theory (hypothesis), data are meaningless or nonexistent. There is thus no true value of anything: true value is undefinable operationally. There are, however, numerical values that people can use with confidence if they understand their meaning (for the tensile strength of a batch of wire, for example, or for the proportion of the labor force unemployed last month).” (Deming 1967).
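To make this concrete, here is a small sketch (the sampling rule, numbers and code are entirely hypothetical, not Deming’s) of what an operational definition of ‘50 per cent wool’ might look like once the questions above are answered: fix a cutting rule, measure each cut, and state the result in statistical terms rather than as a bare number.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical operational definition of "50 per cent wool":
#  - take n circular cuts of fixed size at randomly chosen positions,
#  - measure each cut's wool fraction by weight at a stated humidity,
#  - report the mean fraction with an interval, never a bare number.
n_cuts = 12
wool_fraction = rng.normal(loc=0.50, scale=0.06, size=n_cuts).clip(0.0, 1.0)

mean = wool_fraction.mean()
sem = wool_fraction.std(ddof=1) / np.sqrt(n_cuts)  # standard error of the mean
low, high = mean - 1.96 * sem, mean + 1.96 * sem   # normal approximation

print(f"estimated wool content: {mean:.1%} "
      f"(approx. 95% interval {low:.1%} to {high:.1%}, n = {n_cuts} cuts)")
```

Change the cutting rule, the number of cuts, or the humidity at weighing and the ‘fact’ changes with it; the number has meaning only relative to the stated procedure. (At n = 12 a t-interval would be more careful than the 1.96 normal approximation used here for brevity.)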

The trick is to understand the meaning of numbers. This is clearly important if we are conducting a population census (which individuals, where, within what boundaries, at what point in time, by what method of observation, how to record, etc.) but more so when we consider more nebulous things, like the ‘perceptions of local communities’, ‘support for conservation action’ or the ‘involvement of local partners’.

Not everything that can be counted counts.
Not everything that counts can be counted.

So the first useful question about something is:

“What do we know about this?”

Think about this next time you set a goal, or measure results…

Further Reading:

Deming, W.E. (1967) Walter A. Shewhart, 1891–1967. The American Statistician, 21(2): 39-40.

Deming, W.E. (1975) On probability as a basis for action. The American Statistician, 29(4): 146-152.