Blog

Optimizing the Cost of Quality

About three months ago I wrote a post called The Two Most Poisonous BPR Dangers, in which I talked about why you shouldn't necessarily design processes with the "lowest common denominator" in mind.

My argument was that if you demand perfection in the output of your processes, you'll pay for it in terms of (amongst other things) unnecessary checks and rework embedded within the process itself.


Recently I've been taking a project management course at Mount Royal University here in Calgary, and one of my classes featured a little nugget of wisdom that was initially very surprising to me, and then immediately familiar. In short, it was something that should have been obvious to me all along. Nevertheless, it wasn't something that I realized until somebody explicitly broke it down for me, and in case you're in the same boat, allow me to pass on the favour to you.

Can you ever have too much quality in the output of your process? Yes you can, and I'm about to use actual math to explain why. Brace yourselves, and join me after the break.

Prevention vs. Cure

Conventional wisdom will tell you that the cost of preventing mistakes is lower than the cost of correcting them after the fact. Our goal in this story is lower costs, so extending that logic out would suggest that the way we optimize costs would be to demand perfection from our processes, right? Prevention is better than cure, so with 100% prevention of defects we don't have to do any of that expensive fixing of things after the fact.

Actually though, this is not the case – it depends on how you frame things. Let's imagine our process is a manufacturing one, and we're producing widgets. The cost of making a single good widget and delivering it to our customer is certainly lower (and probably significantly so) than the cost of inadvertently producing a bad widget that gets shipped. If that unhappy process path is followed then now our customer service team needs to be engaged, the bad widget needs to be shipped back, and a replacement good widget sent out. All that stuff is expensive. Just think how much money we'd save if we didn't have to go through all that rigmarole.

Here's why this is too much of a simplification, though: we're not making a single widget here, we're making thousands, maybe millions of them. We can't view our process only in terms of producing that single widget; we have to see this process for what it is: something that's repeated time and time again, with each execution featuring some probability of producing a defect.

The Cost of Preventing Mistakes

Broadly speaking, the cost of preventing mistakes looks like this:

[Chart: the cost of preventing mistakes, rising steeply as quality approaches 100%]

As we move closer to 100% quality in our process, costs start to rise exponentially.

This makes intuitive sense. Let's say our widget-producing processes are resulting in a happy outcome just 25% of the time. That's very bad, but we can be optimistic! There's at least a lot we can do about it – we could improve the machinery we use, we could add an inspector to pick out the bad ones and prevent them from reaching our customers; in fact almost any adjustment would be better than maintaining the status quo. Essentially there's lots of low-hanging fruit, and we can likely make substantial improvements here without too much investment.

Now letā€™s say our production processes result in a happy outcome 99.9% of the time. We’re probably achieving this because we already have good machinery and an inspection team. How do we improve? Do we add another inspection team to check the work of the first? And a third to check the work of the second? Perhaps we engage the world’s top geniuses in the field of widgetry to help us design the best bespoke widget machinery imaginable? Whatever we do is going to be difficult and expensive, and once it’s done how much better are we really going to get? 99.95% quality?

This is the argument I was making in my previous post. Would further improvement here be worth it from a cost/benefit standpoint? It’s highly doubtful, but we’ll get to the answer soon!

The Cost of Fixing Mistakes

The cost of fixing mistakes looks a lot like this:

[Chart: the cost of fixing mistakes, falling to zero as quality approaches 100%]

This makes intuitive sense too. As we move closer to 100% quality, the cost of fixing mistakes shrinks to nothing – because at that point there are no mistakes to fix.

Down at the other end of the chart on the left, we have problems. Back in our widget factory with a happy outcome 25% of the time, three quarters of our customers, upon receiving their widget, are finding it to be defective. They're calling our customer service team and asking for a replacement. Those guys are tired and overworked, and even once they've arranged for the defective widget to be returned there's still a 75% chance that the replacement will be defective too and they'll be getting another call to start the whole thing over.
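The arithmetic behind that unhappy loop is worth spelling out. If each shipment independently has only a 25% chance of being a good widget, the number of shipments needed per satisfied customer follows a geometric distribution, with a mean of one divided by the success rate. A minimal sketch (the rates are the ones from the story; no real cost data here):

```python
def expected_shipments(good_rate):
    """Average shipments per satisfied customer when each shipment
    independently has `good_rate` chance of being a good widget
    (the mean of a geometric distribution)."""
    return 1.0 / good_rate

expected_shipments(0.25)   # -> 4.0 shipments on average per happy customer
expected_shipments(0.999)  # -> ~1.001 shipments on average
```

At a 25% happy-outcome rate, every satisfied customer costs us four shipments on average – which is why the fixing-cost curve is so steep at the left of the chart.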

Finding the Optimal Balance

In our version of the widget factory with a 99.9% happy outcome, things would seem pretty rosy for the business. There's barely any demand on our customer service folks. We probably have just the one guy manning our 1-800 number. Let's call him Brad. Brad's work life is pretty sweet. He probably spends most of his day watching cat videos on YouTube while he waits around for the phone to ring.

If we were to spend the (lots of) money needed to increase quality to 99.95%, we'd probably still need Brad. He'd just get to spend even more of his time being unproductive. We'd save some money on shipping replacement widgets, but really there's very little payoff to the higher quality we're achieving. We've managed to get ourselves into a position where conventional wisdom is flipped on its head: the cost of fixing a problem is less than the cost of preventing one.

This same construct applies to any process, not just manufacturing. So where, as business process engineers, do we find the balance? Math fans have already figured it out.

[Chart: the prevention and fixing cost curves crossing at the point of lowest total cost]

Cumulatively, costs are at their lowest where the two lines meet – where the money we spend on preventing mistakes is equal to the money we spend fixing them. For me this was the thing that was initially surprising, but should have been obvious all along. It seems so simple now, right?
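To make that concrete, here's a toy model. The cost curves are invented purely for illustration (not real data): a prevention cost that grows without bound as quality approaches 100%, a fixing cost proportional to the defect rate, and a scan for the quality level where their sum is lowest.

```python
def prevention_cost(q):
    # Hypothetical: grows without bound as quality q approaches 1.0 (100%)
    return 1.0 / (1.0 - q)

def fixing_cost(q, cost_per_defect=100.0):
    # Hypothetical: proportional to the defect rate, zero at perfect quality
    return (1.0 - q) * cost_per_defect

def total_cost(q):
    return prevention_cost(q) + fixing_cost(q)

# Scan quality levels from 50% to 99.99% for the cheapest point
qualities = [q / 10000 for q in range(5000, 10000)]
best = min(qualities, key=total_cost)
print(best)  # 0.9 -- the optimum sits well short of 100% quality
```

With these particular (made-up) curves, total cost bottoms out at 90% quality; pushing beyond that point costs more in prevention than it saves in fixing. Change the cost assumptions and the optimum moves, but it never reaches 100%.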

Well, it isn't. Determining where that meeting point actually falls is no easy task, but the moral here is that as much as we all want to strive for perfection, it doesn't actually make business sense to achieve it! We should keep striving, because in reality the meeting point of the two lines is further to the right than the simplistic graph above might suggest – prevention is, as everybody knows, less expensive than cure, and conventional wisdom exists for a reason. But if our striving for perfection comes at the exclusion of all else, if we refuse to accept any defects at all? Then we're doing it wrong.

Blog

When It Comes To Facebook Scale, You Can Throw Out The Rulebook | TechCrunch

I read this TechCrunch article this week about hardware engineering at Facebook. The nitty-gritty details are only mildly interesting to me, but listen to what Facebook are saying about the culture they've created.

“We do it this way because this is the way we’ve always done it?”

Not at Facebook you don’t.

What's the BPR equivalent? Can I achieve similar results through being a continual advocate for Kaizen as a process improvement methodology, or is this a cultural thing at FB that I can't replicate on my own?

Get in touch or leave a comment to let me know!


Blog

I’m Perfectly Imperfect, Flawlessly Flawed

Following on from yesterday’s post featuring “lowest common denominators,” I was once assigned to a project in which “flawless execution” was a stated requirement.

Not a goal, or a vision, or something to strive for – there it was in black and white, right in the requirements document: Fail to execute flawlessly and we may as well not have bothered even showing up.

If I could have gotten away with just walking out immediately then I would have. Instead I snidely commented that I’ve never done anything flawlessly before, so I was excited to start.

Everyone stared at me blankly for thirty seconds and then we all moved on to the next item on the agenda.

Blog

The Two Most Poisonous BPR Dangers

A little while ago when I wrote about The Monkey Parable, I promised that I'd write more about what I see as being the two most poisonous BPR dangers you'll face when attempting to re-engineer and optimize business processes, at least from my experience of attempting to do so.


Well friends, the time has come. Join me, won't you, after the break.

"This is How We've Always Done It"

The first danger I alluded to in a not especially transparent way when I relayed the monkey parable to you all: we do it this way because this is the way we've always done it.

This used to be an extremely prevalent problem in my organization. Less so these days, thanks to a senior leadership team that specifically calls this thinking out and tackles it head on – but even so, this kind of thinking still exists where I work, and I'm 99% confident in saying it exists where you work too.

Essentially the thinking here is that if we've been doing something the same way for 20 years then there must be an excellent reason for it; otherwise something would have changed long ago. The fact that nobody knows the excellent reason is seen as largely irrelevant.

So from a Business Process Management / Business Analysis perspective, how do you combat it? It's all about perspective. It's unlikely that any of the truly important processes in your organization are completed by a single person, and that makes perfect sense: somebody who shapes sheet metal to create a panel for a vehicle would be an extremely skilled craftsman, but that doesn't mean they have the expertise to build a gearbox. Too often, though, we have too narrow a focus when we look at business processes.

Focus is only good if you know you're focussing on the right thing, and to understand that you need to take a step back and take a more holistic view. It's in doing this that you can start to break down those "this is how we've always done it" misconceptions, because they most commonly arise from a misunderstanding of the up- and down-stream impacts of the work a particular person is doing. If you're looking for transformative change then my advice every time would be to get subject matter experts from every aspect of a process together to talk the whole thing through – I guarantee you'll find waste, and probably lots of it. And if your process doesn't start and end with the customer (in the purest sense of the word), you've defined it too narrowly for this phase of your work.

Catering to the Lowest Common Denominator

"Think of the dumbest person you work with, and let's build a process they can complete with ease."

I've personally uttered those words in the past during sessions I've run as part of BPR initiatives, and I suspect I'll do so again in the future. Getting people to think like this is fine, but it's a powerful tool and you have to be very careful not to misuse it.

If you're talking about designing a user interface perhaps, or discussing building automated error-proofing functionality into a system – absolutely, let's make that stuff as intuitive (read: idiot-proof) as possible.

All too often, though, this gets taken too far and results in masses of waste and redundancy being built into business processes. If I complete a task that I then hand off to somebody else only so that they check every aspect of my work, that's a problem. A big one. The company would be better off taking my paycheque every two weeks and burning it. They could use the fire to help heat the building, and they also wouldn't have to deal with the massive attitude problem I'd develop through being part of a system that places no trust in the work that I do.

The result is that the problem becomes self-perpetuating: now I'm not only disengaged, I'm not accountable for my work in any kind of meaningful way – if I do a bad job then the person who's checking everything I do will fix it anyway, so what does it matter? So of course I do a bad job; why would I bother trying to do anything else? "Ha!" say the powers that be, "told you it was necessary to check over everything. We find errors in 65% of the tasks that Jason completes."

So how do you tackle this from a BPR perspective? Two ways. Firstly, you have to trust people to do the work they were hired to do. That's easier said than done sometimes, but as a Business Analyst, dealing with people's performance issues (whether perceived or real) and coaching them is probably not within your remit anyway, so why make it your business? If there's one thing I've learned about people it's that you get from them what you expect to get from them. If you have low expectations and treat people like idiots, you get back poor performance. If you have high expectations and provide people with autonomy and, most importantly, accountability, then you'll get back the high performance you're asking for. Quite aside from all that, if the "lowest common denominator" that you have in mind when you're designing a process really does require that a second person checks over everything they do, then you should fire that person and start over with a new "lowest common denominator" in mind.

Now that we have an accountable, high-performing workforce completing our process, the second thing we need to do is allow for some mistakes. If we demand perfection from the output of our process then rework and unnecessary checks will almost inevitably be the price we have to pay within the process. Six Sigma (which is what we're all striving for these days, right?) allows for a defect rate of 3.4 parts per million, and that comes from the world of manufacturing where machines are doing much of the actual work. If your people-based process really expects that level of accuracy then my recommendation would be to turn your attention away from increasing the process's sigma level and figure out a better method of handling expectations and dealing with the inevitable defects instead.
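For the curious, that 3.4 parts per million isn't arbitrary: it's the one-sided tail of a normal distribution at the six-sigma level, after applying the conventional 1.5-sigma long-term shift. A sketch of the conversion (the 1.5 shift is the standard Six Sigma convention, not a law of nature):

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities: the one-sided normal tail
    beyond (sigma_level - shift), per Six Sigma convention."""
    z = sigma_level - shift
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # standard normal survival function
    return tail * 1_000_000

print(round(dpmo(6), 1))  # 3.4 -- the famous Six Sigma defect rate
```

Run it at lower sigma levels and the numbers get sobering fast – which is exactly why chasing another fraction of a sigma in a people-based process is such an expensive proposition.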

Blog

The Monkey Parable

My friend Andrew told me what I now call "the monkey parable" several years ago, and it's stuck with me ever since.

There are five monkeys, locked in a cage. There's a banana hanging from the ceiling and a ladder set up on the floor.

Predictably, one of the monkeys immediately starts to climb the ladder in an attempt to get the banana, at which point *FOOOSH* the monkey gets sprayed with icy-cold water from a hose. This repeats as the other monkeys try climbing the ladder, only to each get sprayed. The monkeys give up, resigning themselves to the fact that the banana is unobtainable.

Next, one of the monkeys in the cage gets replaced. The new monkey sees the banana and the ladder and starts to climb. Right away the other four monkeys, familiar with the consequences, grab the new guy, pull him off the ladder and beat him. New guy gets the message – the banana is off-limits.

This process then repeats as, over time, each of the monkeys gets replaced. Each new monkey's first instinct is to reach for the banana, at which point he's immediately grabbed and pulled away by his peers.

Eventually, none of the original monkeys are left. There are still five monkeys in the cage, but none of them have ever been sprayed by the hose, and none of them are attempting to get the banana hanging from the ceiling.

When another new monkey is introduced to the cage and is prevented from attempting to reach the banana he's confused, and he asks the existing monkeys why they beat him when he tries. The other four monkeys shrug their shoulders.

"Don't know, but that's the way we do things around here."

You may be able to draw parallels between this and process improvement initiatives you've attempted to run. I certainly can. This parable illustrates one of what I believe are the two most poisonous BPR dangers, and I'll be writing more about them both in the not too distant future.

Watch this space!
