
Optimizing the Cost of Quality

About three months ago I wrote a post called The Two Most Poisonous BPR Dangers, in which I talked about why you shouldn’t necessarily design processes with the “lowest common denominator” in mind.

My argument was that if you demand perfection in the output of your processes, you’ll pay for it in terms of (amongst other things) unnecessary checks and rework embedded within the process itself.


Recently I’ve been taking a project management course at Mount Royal University here in Calgary, and one of my classes featured a little nugget of wisdom that was initially very surprising to me, and then immediately familiar. In short, it was something that should have been obvious to me all along. Nevertheless, it wasn’t something that I realized until somebody explicitly broke it down for me, and in case you’re in the same boat, allow me to pass on the favour to you.

Can you ever have too much quality in the output of your process? Yes you can, and I’m about to use actual math to explain why. Brace yourselves, and join me after the break.

Prevention vs. Cure

Conventional wisdom will tell you that the cost of preventing mistakes is lower than the cost of correcting them after the fact. Our goal in this story is lower costs, so extending that logic would suggest that the way to optimize costs is to demand perfection from our processes, right? Prevention is better than cure, so with 100% prevention of defects we don’t have to do any of that expensive fixing of things after the fact.

Actually though, this is not the case – it depends on how you frame things. Let’s imagine our process is a manufacturing one, and we’re producing widgets. The cost of making a single good widget and delivering it to our customer is certainly lower (and probably significantly so) than the cost of inadvertently producing a bad widget that gets shipped. If that unhappy process path is followed, our customer service team needs to be engaged, the bad widget needs to be shipped back, and a replacement good widget sent out. All that stuff is expensive. Just think how much money we’d save if we didn’t have to go through all that rigmarole.

Here’s why this is too much of a simplification, though: we’re not making a single widget here, we’re making thousands, maybe millions of them. We can’t view our process only in terms of producing that single widget, we have to see the process for what it is: something that’s repeated time and time again, with each execution carrying some probability of producing a defect.

The Cost of Preventing Mistakes

Broadly speaking, the cost of preventing mistakes looks like this:

[Chart: the cost of preventing mistakes, rising steeply as quality approaches 100%]

As we move closer to 100% quality in our process, costs start to rise exponentially.

This makes intuitive sense. Let’s say our widget-producing processes are resulting in a happy outcome just 25% of the time. That’s very bad, but we can be optimistic! There’s at least plenty we can do about it – we could improve the machinery we use, we could add an inspector to pick out the bad ones and prevent them from reaching our customers; in fact almost any adjustment would be better than maintaining the status quo. Essentially there’s lots of low-hanging fruit and we can likely make substantial improvements here without too much investment.

Now let’s say our production processes result in a happy outcome 99.9% of the time. We’re probably achieving this because we already have good machinery and an inspection team. How do we improve? Do we add another inspection team to check the work of the first? And a third to check the work of the second? Perhaps we engage the world’s top geniuses in the field of widgetry to help us design the best bespoke widget machinery imaginable? Whatever we do is going to be difficult and expensive, and once it’s done how much better are we really going to get? 99.95% quality?
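
To put a toy number on that intuition, here’s a minimal sketch in Python. The 1/(1 - quality) shape and the base cost are assumptions of mine, picked only because they capture that steep climb near 100%; real figures would have to come from your own process.

```python
def prevention_cost(quality, base_cost=1_000):
    """Toy model of prevention spend: grows without bound as quality -> 100%.

    The 1 / (1 - quality) shape and base_cost are illustrative assumptions,
    not data from a real widget factory.
    """
    return base_cost * quality / (1 - quality)

for q in (0.25, 0.90, 0.99, 0.999, 0.9995):
    print(f"{q:.2%} good widgets -> prevention spend ~ ${prevention_cost(q):,.0f}")
```

In this made-up model, going from 99.9% to 99.95% quality roughly doubles the prevention spend.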

This is the argument I was making in my previous post. Would further improvement here be worth it from a cost/benefit standpoint? It’s highly doubtful, but we’ll get to the answer soon!

The Cost of Fixing Mistakes

The cost of fixing mistakes looks a lot like this:

[Chart: the cost of fixing mistakes, shrinking to zero as quality approaches 100%]

This makes intuitive sense too. As we move closer to 100% quality, the cost of fixing mistakes shrinks to nothing – because at that point there are no mistakes to fix.

Down at the other end of the chart on the left, we have problems. Back in our widget factory with a happy outcome 25% of the time, three quarters of our customers, upon receiving their widget, are finding it to be defective. They’re calling our customer service team, and asking for a replacement. Those guys are tired and overworked, and even once they’ve arranged for the defective widget to be returned there’s still a 75% chance that the replacement will be defective too and they’ll be getting another call to start the whole thing over.
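
Here’s the matching sketch for the fixing side, again with made-up numbers: assume every defective widget that ships costs a flat amount to put right (the call to customer service, the return shipping, the replacement), so the fixing spend is simply the defect rate times the volume times that per-defect cost.

```python
def fixing_cost(quality, volume=100_000, cost_per_defect=25):
    """Toy model of fixing spend: scales with the number of defects shipped.

    volume and cost_per_defect are illustrative assumptions.
    """
    return (1 - quality) * volume * cost_per_defect

for q in (0.25, 0.90, 0.99, 0.999, 0.9995):
    print(f"{q:.2%} good widgets -> fixing spend ~ ${fixing_cost(q):,.0f}")
```

At 25% quality the fixing bill dwarfs everything else; at 99.9% it’s down to a few thousand dollars, and pushing on to 99.95% saves barely anything more.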

Finding the Optimal Balance

In our version of the widget factory with a 99.9% happy outcome, things would seem pretty rosy for the business. There’s barely any demand on our customer service folks. We probably have just the one guy manning our 1-800 number. Let’s call him Brad. Brad’s work life is pretty sweet. He probably spends most of his day watching cat videos on YouTube while he waits around for the phone to ring.

If we were to spend the (lots of) money needed to increase quality to 99.95%, we’d probably still need Brad. He’d just get to spend even more of his time being unproductive. We’d save some money on shipping replacement widgets, but really there’s very little payoff to the higher quality we’re achieving. We’ve managed to get ourselves into a position where conventional wisdom is flipped on its head: the cost of fixing a problem is less than the cost of preventing one.

This same construct applies to any process, not just manufacturing. So where, as business process engineers, do we find the balance? Math fans have already figured it out.

[Chart: the prevention and fixing cost curves overlaid; total cost is lowest where the two lines meet]

Cumulatively, costs are at their lowest where the two lines meet – where the money we spend on preventing mistakes is equal to the money we spend fixing them. For me this was the thing that was initially surprising, but should have been obvious all along. It seems so simple now, right?

Well, it isn’t. Determining where that meeting point actually falls is no easy task, but nevertheless the moral here is that as much as we all want to strive for perfection it actually doesn’t make business sense to achieve it! We should keep striving, because in reality the meeting point of the two lines on the chart is further to the right than the simplistic graph above might suggest – prevention is, as everybody knows, less expensive than cure and conventional wisdom exists for a reason – but if our striving for perfection comes to the exclusion of all else, if we refuse to accept any defects at all? Then we’re doing it wrong.
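
If you want to see that meeting point in numbers, here’s a rough sketch that combines the two toy cost models from earlier and scans for the quality level where the total is lowest. Every constant in it is invented for illustration; the point is the shape of the curve, not the specific answer.

```python
# Rough sketch: scan for the quality level that minimises total cost.
# All constants are invented for illustration only.

def prevention_cost(q, base_cost=1_000):
    return base_cost * q / (1 - q)                 # climbs steeply toward 100%

def fixing_cost(q, volume=100_000, cost_per_defect=25):
    return (1 - q) * volume * cost_per_defect      # shrinks toward 100%

best_q, best_total = None, float("inf")
for i in range(5_000, 10_000):                     # q = 0.5000 .. 0.9999
    q = i / 10_000
    total = prevention_cost(q) + fixing_cost(q)
    if total < best_total:
        best_q, best_total = q, total

print(f"Lowest total cost ~ ${best_total:,.0f} at roughly {best_q:.2%} quality")
```

With these made-up constants the sweet spot lands around 98% quality, where the prevention spend and the fixing spend come out roughly equal. Change the constants and the sweet spot moves, but a minimum short of perfection remains.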


The Two Most Poisonous BPR Dangers

A little while ago when I wrote about The Monkey Parable, I promised that I’d write more about what I see as the two most poisonous BPR dangers you’ll face when attempting to re-engineer and optimize business processes, at least in my experience of doing so.


Well friends, the time has come. Join me, won’t you, after the break.

“This is How We’ve Always Done It”

The first danger is one I alluded to, in a not especially transparent way, when I relayed the monkey parable to you all: we do it this way because this is the way we’ve always done it.

This used to be an extremely prevalent problem in my organization. Less so these days thanks to a senior leadership team that specifically calls this thinking out and tackles it head on – but even so this kind of thinking still exists where I work, and I’m 99% confident in saying it exists where you work too.

Essentially the thinking here is that if we’ve been doing something the same way for 20 years then there must be an excellent reason for it, otherwise something would have changed long ago. The fact that nobody knows the excellent reason is seen as largely irrelevant.

So from a Business Process Management / Business Analysis perspective, how do you combat it? It’s all about perspective. It’s unlikely that any of the truly important processes in your organization are completed by a single person, and that makes perfect sense: somebody who shapes sheet metal to create a panel for a vehicle would be an extremely skilled craftsman, but that doesn’t mean they have the expertise to build a gearbox. Too often though we have too narrow a focus when we look at business processes.

Focus is only good if you know you’re focussing on the right thing, and to understand that you need to take a step back and take a more holistic view. It’s in doing this that you can start to break down those “this is how we’ve always done it” misconceptions, because they most commonly arise from a misunderstanding of the up- and down-stream impacts of the work a particular person is doing. If you’re looking for transformative change then my advice every time would be to get subject matter experts from every aspect of a process together to talk the whole thing through – I guarantee you’ll find waste, and probably lots of it. And if your process doesn’t start and end with the customer (in the purest sense of the word), you’ve defined it in too narrow a way for this phase of your work.

Catering to the Lowest Common Denominator

“Think of the dumbest person you work with, and let’s build a process they can complete with ease.”

I’ve personally uttered those words in the past during sessions I’ve run as part of BPR initiatives, and I suspect I’ll do so again in the future. Getting people to think like this is fine, but it’s a powerful tool and you have to be very careful not to misuse it.

If you’re talking about designing a user-interface perhaps, or discussing building automated error-proofing functionality into a system – absolutely, let’s make that stuff as intuitive (read: idiot-proof) as possible.

All too often, this gets taken too far and results in masses of waste and redundancy being built into business processes. If I complete a task that I then hand off to somebody else only so that they check every aspect of my work, that’s a problem. A big one. The company would be better off taking my paycheque every two weeks and burning it. They could use the fire to help heat the building, and they also wouldn’t have to deal with the massive attitude problem I’d develop through being part of a system that places no trust in the work that I do. The result of which is the problem becoming self-perpetuating: Now I’m not only disengaged, I’m not accountable for my work in any kind of meaningful way – if I do a bad job then the person who’s checking everything I do will fix it anyway, so what does it matter? So of course I do a bad job, why would I bother trying to do anything else? “Ha!” say the powers that be, “told you it was necessary to check over everything. We find errors in 65% of the tasks that Jason completes.”

So how do you tackle this from a BPR perspective? Two ways. Firstly, you have to trust people to do the work they were hired to do. That’s easier said than done sometimes, but as a Business Analyst, dealing with people’s performance issues (whether perceived or real) and coaching them is probably not within your remit anyway, so why make it your business? If there’s one thing I’ve learned about people it’s that you get from them what you expect to get from them. If you have low expectations and treat people like idiots, you get back poor performance. If you have high expectations and provide people with autonomy and, most importantly, accountability, then you’ll get back the high performance you’re asking for. Quite aside from all that, if the “lowest common denominator” that you have in mind when you’re designing a process really does require that a second person checks over everything they do then you should fire that person and start over with a new “lowest common denominator” in mind.

Now that we have an accountable, high-performing workforce completing our process, the second thing we need to do is allow for some mistakes. If we demand perfection from the output of our process then rework and unnecessary checks will almost inevitably be the price we have to pay within the process. Six sigma (which is what we’re all striving for these days, right?) allows for a defect rate of 3.4 parts per million, and that comes from the world of manufacturing where machines are doing much of the actual work. If your people-based process really expects that level of accuracy then my recommendation would be to turn your attention away from increasing the process’ sigma level and to figure out a better method of handling expectations and dealing with the inevitable defects instead.
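
For a sense of scale, here’s the arithmetic on that 3.4-per-million figure, with a yearly task volume I’ve assumed purely for illustration:

```python
# Six sigma allows roughly 3.4 defects per million opportunities (DPMO).
# The yearly task volume below is an assumption for illustration only.
dpmo = 3.4
tasks_per_year = 50_000                  # a busy people-based process

expected_defects = tasks_per_year * dpmo / 1_000_000
print(f"Expected defects per year: {expected_defects:.2f}")          # ~0.17
print(f"That's roughly one defect every {1 / expected_defects:.0f} years")
```

One defect every six years or so is not a realistic expectation to place on a people-based process, which is exactly why chasing that number tends to breed the checking-the-checkers waste described above.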