I was recently sitting in a staff meeting for one of our factories, listening to the team review their maintenance scorecard. The plant has a set of KPIs and metrics they review weekly to gauge how their reliability program is working. About halfway through the meeting, a manager followed up excitedly when the number of work orders completed the previous week was reported. I sat back and listened to the discussion about why that number was so impressive. The discussion centered on the count itself and didn't put the work order total in any other context. I let the review continue and decided to follow up with the manager later on.
Later in the day I followed up with the manager who had been so excited about the work order count. I asked why this number seemed so much more important than the other metrics discussed. The manager replied, "But look at how much work we got done. This has to be one of our best weeks in a long time!" I agreed; the plant hadn't completed that many work orders in some time. But I followed up again and asked what that meant to the department. Did the number of work orders completed tell the maintenance department anything about the work they would complete this week, or how much work they had to get done next week? He stared blankly at the scorecard, unsure what I meant. So we sat down in the break room and started going over the scorecard in more detail.
We started to review the scorecard, and I asked again what the number of work orders completed meant. The manager was worried he was in trouble or that he'd made a spectacle of himself in the meeting. I assured him that wasn't the case. What I wanted to explain was that the single statistic he had pointed out didn't really tell us anything. I asked: if the department completed every single work order it was supposed to that week, but the plant was only able to run 80% of the time, should we be excited about what we got done? He replied, "No," because the plant still wouldn't have been running very well. I agreed, and then walked through how the scorecard had originally been set up and how the metrics were supposed to be used to tell us how our reliability program was doing.
I went over the three most important things I've learned about KPIs and metrics:
- All reporting is about being more effective in the future: Metrics are not about understanding what got done, but about what we can learn as an indication of how we are going to do in the future. We cannot change what happened yesterday, but understanding how we performed yesterday can make us more effective and improve how we perform tomorrow.
- Telling me how much you got done is not a metric, it's a statistic: A lot of the time a manager will sit in a staff meeting and state that the department completed 'x' work orders last week, as the manager in the staff meeting above did. But the fact that your department got 'x' work orders done is a statistic, not a metric. Reliability professionals want to use multiple statistics as an indicator of something bigger; that's when we turn them into a useful metric.
- All process changes need to be measured: Whenever a change is made to a process, in an effort to make it more effective and/or efficient¹, there needs to be a metric put in place to gauge how well the change is working. Making changes, without any feedback, is worse than making no changes at all.
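To make the statistic-versus-metric distinction concrete, here is a minimal sketch in Python. All the numbers and variable names are invented for illustration; the point is that a raw count only becomes useful once it is combined with other statistics into a ratio that has context, like the 80% runtime example above.

```python
# Hypothetical weekly numbers, invented for illustration only.
completed_work_orders = 120   # statistic: work orders finished this week
planned_work_orders = 150     # statistic: work orders scheduled this week
hours_running = 134.4         # statistic: hours the plant actually ran
hours_scheduled = 168.0       # statistic: hours the plant was scheduled to run

# Each count alone says little. Ratios put the counts in context and
# become metrics we can track week over week.
schedule_compliance = completed_work_orders / planned_work_orders
availability = hours_running / hours_scheduled

print(f"Schedule compliance: {schedule_compliance:.0%}")  # prints 80%
print(f"Availability: {availability:.0%}")                # prints 80%
```

A department could complete an impressive-sounding number of work orders and still show 80% availability, which is why the count by itself told the staff meeting nothing.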
The most important point I raised with the manager was that we shouldn't use metrics to point out what was done wrong, but to adjust our behaviors and processes so we improve our reliability program going forward. Metrics are not about reporting what happened; they're about making more effective decisions about the future.
¹ Because 'Effective' and 'Efficient' are not the same.