Blog

Yesterday, I wrote about the Requirements 101 presentation I gave to my team about what I believe makes for good solution requirements.

(I was not able to limit myself to the 15 minutes I devoted to this as an agenda item because I like the sound of my own voice way too much, but that’s beside the point right now).

The important thing is that I generated some great discussion, which is exactly what I was hoping for. This was not intended to be a lecture, especially given that there are people in my group who are far better at this stuff than I am. The slide above prompted some great input.

I’d argued that the requirement “the monthly transaction report must be available on the next business day after the end of the calendar month” was a bad one, but I was intentionally tricking people. On the face of it there’s nothing wrong with this. The trick was that only after soliciting feedback on the requirement and getting everybody’s input did I reveal that the report in question takes 30 hours to generate and that therefore, I argued, the requirement was not achievable. I said that the issue could have been avoided by having the right people (probably technical SMEs of some description) at the table during the requirements phase of work.
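For what it’s worth, the clash in my trick example is exactly the kind of thing a quick back-of-the-envelope check could surface before sign-off. Here’s a minimal sketch – the 30-hour runtime is the made-up figure from my example, and the date is just an arbitrary month end that happens to fall midweek:

```python
from datetime import datetime, timedelta

GENERATION_HOURS = 30  # how long the report takes to produce (hypothetical figure)

def report_ready(month_end: datetime) -> datetime:
    """Earliest completion if generation starts the moment the month closes."""
    return month_end + timedelta(hours=GENERATION_HOURS)

def next_business_day(month_end: datetime) -> datetime:
    """Deadline: end of the first business day after month end (skips weekends)."""
    day = month_end + timedelta(days=1)
    while day.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        day += timedelta(days=1)
    return day.replace(hour=23, minute=59)

# September 2014 ended on a Tuesday; a 30-hour run blows the deadline.
month_end = datetime(2014, 9, 30, 23, 59)
print(report_ready(month_end) <= next_business_day(month_end))  # prints False
```

Interestingly, when a month ends on a Friday the weekend buys enough slack for a 30-hour run to finish in time – exactly the kind of nuance that only surfaces once somebody actually runs the numbers.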

Some people pushed back and said that if this really was a requirement of the hypothetical project to which it’s attached then work would simply have to be undertaken to reduce the time taken to generate the report. If, on the other hand, the project didn’t have the time and/or budget to support this work then that would be a separate issue to a certain extent, and there would be courses of action other than removing the requirement that could be pursued – but that this doesn’t make the requirement any less valid. People argued that it’s hard (if not impossible) to know that you need some additional technical input at this stage of the process without the benefit of hindsight. “You don’t know what you don’t know,” as one person succinctly summed up.

These are excellent points. In fact, often when I’m helping people document their initial requirements for a project I like to tell them (with my tongue firmly in my cheek) that anything is achievable – it merely comes down to how much time and money they have.

My point in including the example in my slide-deck is that I do believe there are opportunities to validate things like this before requirements are finalized and signed-off by stakeholders. If we are able to take advantage of these opportunities to move conversations like this one forwards in the project timeline then it will avoid back-and-forth between business and technical teams, avoid costly rework, and avoid nasty surprises further down the line.

I still feel that’s all true and that my point is a valid one, but of course let’s be realistic – how much effort do we really want to spend validating that each individual requirement is achievable against every known and as-yet-unknown constraint (bearing in mind that at this point in our story we haven’t even moved into the “execution” phase of work and the solution hasn’t been designed)? Should we really wait to secure the availability of a highly sought-after technical resource to sit in meetings where they will have only minimal input to provide? Wouldn’t it be more efficient to get the stamp of approval on our requirements as-is and move forwards, allowing the solution architects to identify issues like this later (and suggest where compromises or alternative approaches may be necessary or beneficial)?

I suspect the answer – as is so often the case with the questions I pose on this blog – is all about finding an appropriate balance, but I don’t have any solid guidance here for you all.

What are your thoughts?

Requirements 101

Every two weeks my team (by which I mean my peers as defined by the org-chart, rather than the team from a particular project I may be working on) has a team meeting.

We talk about what we’re each working on and what we have coming up, we take some time to celebrate our accomplishments, discuss any issues or barriers… you know the kind of thing.

Anyway, we take turns to host and facilitate the meeting, and today it’s my turn. Part of the expectation in hosting is that I introduce an agenda item of my choosing that the cross-functional group (consisting of operations and projects people) might find interesting or beneficial.

I decided to talk about what makes for good requirements. In preparation for this I read a research paper that tells me that 68% of companies have created an environment where project success is “improbable” due to poor business analysis capability.

Requirements are important, then. Who knew?

It was tough to limit myself to a mere 15 minutes on this topic because of course I could talk for hours, but I managed somehow (or rather – since I’m actually writing this post in advance of the meeting – I’m sure I probably will).

Since sharing is caring, I’ve embedded my slide-deck below. It works just like PowerPoint (click anywhere inside the image to move forwards).

https://onedrive.live.com/fullscreen?cid=47a99becbf28ab8d&id=documents&resid=47A99BECBF28AB8D%21149&filename=Requirements%20101.pptx&authkey=!&wx=p&wv=s&wc=officeapps.live.com&wy=y&wdModeSwitchTime=1420671946681

Sadly you won’t get the benefit of listening to my insightful commentary, but if you want to click through to the file on OneDrive you will at least be able to view my typed speaking notes on each slide (using the link in the bottom-right of the screen) if you wish.

Optimizing the Cost of Quality

About three months ago I wrote a post called The Two Most Poisonous BPR Dangers, in which I talked about why you shouldn’t necessarily design processes with the “lowest common denominator” in mind.

My argument was that if you demand perfection in the output of your processes, you’ll pay for it in terms of (amongst other things) unnecessary checks and rework embedded within the process itself.

Recently I’ve been taking a project management course at Mount Royal University here in Calgary, and one of my classes featured a little nugget of wisdom that was initially very surprising to me, and then immediately familiar. In short, it was something that should have been obvious to me all along. Nevertheless, it wasn’t something that I realized until somebody explicitly broke it down for me, and in case you’re in the same boat, allow me to pass on the favour to you.

Can you ever have too much quality in the output of your process? Yes you can, and I’m about to use actual math to explain why. Brace yourselves, and join me after the break.

Prevention vs. Cure

Conventional wisdom will tell you that the cost of preventing mistakes is lower than the cost of correcting them after the fact. Our goal in this story is lower costs, so extending that logic out would suggest that the way we optimize costs would be to demand perfection from our processes, right? Prevention is better than cure, so with 100% prevention of defects we don’t have to do any of that expensive fixing of things after the fact.

Actually though, this is not the case – it depends on how you frame things. Let’s imagine our process is a manufacturing one, and we’re producing widgets. The cost of making a single good widget and delivering it to our customer is certainly lower (and probably significantly so) than the cost of inadvertently producing a bad widget that gets shipped. If that unhappy process path is followed then our customer service team needs to be engaged, the bad widget needs to be shipped back, and a replacement good widget sent out. All that stuff is expensive. Just think how much money we’d save if we didn’t have to go through all that rigmarole.

Here’s why this is too much of a simplification, though: we’re not making a single widget here, we’re making thousands, maybe millions of them. We can’t view our process only in terms of producing that single widget; we have to see the process for what it is: something that’s repeated time and time again, with each execution featuring some probability of producing a defect.

The Cost of Preventing Mistakes

Broadly speaking, the cost of preventing mistakes looks like this:

[Chart: the cost of preventing mistakes, rising exponentially as quality approaches 100%]

As we move closer to 100% quality in our process, costs start to rise exponentially.

This makes intuitive sense. Let’s say our widget-producing processes are resulting in a happy outcome just 25% of the time. That’s very bad, but we can be optimistic! There’s at least plenty we can do about it – we could improve the machinery we use, we could add an inspector to pick out the bad ones and prevent them from reaching our customers; in fact almost any adjustment would be better than maintaining the status quo. Essentially there’s lots of low-hanging fruit and we can likely make substantial improvements here without too much investment.

Now let’s say our production processes result in a happy outcome 99.9% of the time. We’re probably achieving this because we already have good machinery and an inspection team. How do we improve? Do we add another inspection team to check the work of the first? And a third to check the work of the second? Perhaps we engage the world’s top geniuses in the field of widgetry to help us design the best bespoke widget machinery imaginable? Whatever we do is going to be difficult and expensive, and once it’s done how much better are we really going to get? 99.95% quality?

This is the argument I was making in my previous post. Would further improvement here be worth it from a cost/benefit standpoint? It’s highly doubtful, but we’ll get to the answer soon!

The Cost of Fixing Mistakes

The cost of fixing mistakes looks a lot like this:

[Chart: the cost of fixing mistakes, shrinking to zero as quality approaches 100%]

This makes intuitive sense too. As we move closer to 100% quality, the cost of fixing mistakes shrinks to nothing – because at that point there are no mistakes to fix.

Down at the other end of the chart on the left, we have problems. Back in our widget factory with a happy outcome 25% of the time, three quarters of our customers, upon receiving their widget, are finding it to be defective. They’re calling our customer service team, and asking for a replacement. Those guys are tired and overworked, and even once they’ve arranged for the defective widget to be returned there’s still a 75% chance that the replacement will be defective too and they’ll be getting another call to start the whole thing over.

Finding the Optimal Balance

In our version of the widget factory with a 99.9% happy outcome, things would seem pretty rosy for the business. There’s barely any demand on our customer service folks. We probably have just the one guy manning our 1-800 number. Let’s call him Brad. Brad’s work life is pretty sweet. He probably spends most of his day watching cat videos on YouTube while he waits around for the phone to ring.

If we were to spend the (lots of) money needed to increase quality to 99.95%, we’d probably still need Brad. He’d just get to spend even more of his time being unproductive. We’d save some money on shipping replacement widgets, but really there’s very little payoff to the higher quality we’re achieving. We’ve managed to get ourselves into a position where conventional wisdom is flipped on its head: the cost of fixing a problem is less than the cost of preventing one.

This same construct applies to any process, not just manufacturing. So where, as business process engineers, do we find the balance? Math fans have already figured it out.

[Chart: total cost is minimized where the prevention and failure cost curves intersect]

Cumulatively, costs are at their lowest where the two lines meet – where the money we spend on preventing mistakes is equal to the money we spend fixing them. For me this was the thing that was initially surprising, but should have been obvious all along. It seems so simple now, right?
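To make the crossover concrete, here’s a toy model in Python. The cost curves and constants are entirely invented for illustration – the point is only the shape of the total-cost curve:

```python
import numpy as np

# Hypothetical cost curves (illustrative numbers only):
# prevention cost climbs steeply as quality approaches 100%,
# failure cost falls away to zero at 100% quality.
q = np.linspace(0.50, 0.999, 1000)   # quality level (fraction of good widgets)
prevention = 5.0 / (1.0 - q)         # rises sharply toward perfect quality
failure = 2000.0 * (1.0 - q)         # cost of handling and replacing defects

total = prevention + failure
best = np.argmin(total)
print(f"Lowest total cost at roughly {q[best]:.1%} quality")
print(f"Prevention spend there: {prevention[best]:.0f}, failure spend: {failure[best]:.0f}")
```

For these made-up curves the minimum lands near 95% quality, and sure enough the two spends are almost exactly equal at that point – pushing prevention past the crossover costs more than the defects it eliminates.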

Well, it isn’t. Determining where that meeting point actually falls is no easy task. Nevertheless, the moral here is that as much as we all want to strive for perfection, it doesn’t actually make business sense to achieve it! We should keep striving, because in reality the meeting point of the two lines is further to the right than the simplistic graph above might suggest – prevention is, as everybody knows, less expensive than cure, and conventional wisdom exists for a reason – but if our striving for perfection comes at the exclusion of all else, if we refuse to accept any defects at all? Then we’re doing it wrong.

When It Comes To Facebook Scale, You Can Throw Out The Rulebook | TechCrunch

I read this TechCrunch article this week about hardware engineering at Facebook. The nitty-gritty details are only mildly interesting to me, but listen to what Facebook are saying about the culture they’ve created.

“We do it this way because this is the way we’ve always done it?”

Not at Facebook you don’t.

What’s the BPR equivalent? Can I achieve similar results by being a continual advocate for Kaizen as a process improvement methodology, or is this a cultural thing at FB that I can’t replicate on my own?

Get in touch or leave a comment to let me know!

I’m Perfectly Imperfect, Flawlessly Flawed

Following on from yesterday’s post featuring “lowest common denominators,” I was once assigned to a project in which “flawless execution” was a stated requirement.

Not a goal, or a vision, or something to strive for – there it was in black and white, right in the requirements document: Fail to execute flawlessly and we may as well not have bothered even showing up.

If I could have gotten away with just walking out immediately then I would have. Instead I snidely commented that I’ve never done anything flawlessly before, so I was excited to start.

Everyone stared at me blankly for thirty seconds and then we all moved on to the next item on the agenda.

The Two Most Poisonous BPR Dangers

A little while ago when I wrote about The Monkey Parable, I promised that I’d write more about what I see as being the two most poisonous BPR dangers you’ll face when attempting to re-engineer and optimize business processes, at least in my experience of attempting to do so.

Well friends, the time has come. Join me, won’t you, after the break.

“This is How We’ve Always Done It”

The first danger I alluded to in a not especially transparent way when I relayed the monkey parable to you all: we do it this way because this is the way we’ve always done it.

This used to be an extremely prevalent problem in my organization. Less so these days thanks to a senior leadership team that specifically calls this thinking out and tackles it head on – but even so this kind of thinking still exists where I work, and I’m 99% confident in saying it exists where you work too.

Essentially the thinking here is that if we’ve been doing something the same way for 20 years then there must be an excellent reason for that, otherwise something would have changed long ago. The fact that nobody knows the excellent reason is seen as largely irrelevant.

So from a Business Process Management / Business Analysis perspective, how do you combat it? It’s all about perspective. It’s unlikely that any of the truly important processes in your organization are completed by a single person, and that makes perfect sense: somebody who shapes sheet metal to create a panel for a vehicle would be an extremely skilled craftsman, but that doesn’t mean they have the expertise to build a gearbox. Too often though we have too narrow a focus when we look at business processes.

Focus is only good if you know you’re focussing on the right thing, and to understand that you need to take a step back and take a more holistic view. It’s in doing this that you can start to break down those “this is how we’ve always done it” misconceptions, because they most commonly arise from a misunderstanding of the up- and down-stream impacts of the work a particular person is doing. If you’re looking for transformative change then my advice every time would be to get subject matter experts from every aspect of a process together to talk the whole thing through – I guarantee you’ll find waste, and probably lots of it. And if your process doesn’t start and end with the customer (in the purest sense of the word), you’ve defined it too narrowly for this phase of your work.

Catering to the Lowest Common Denominator

“Think of the dumbest person you work with, and let’s build a process they can complete with ease.”

I’ve personally uttered those words in the past during sessions I’ve run as part of BPR initiatives, and I suspect I’ll do so again in the future. Getting people to think like this is fine, but it’s a powerful tool and you have to be very careful not to misuse it.

If you’re talking about designing a user-interface perhaps, or discussing building automated error-proofing functionality into a system – absolutely, let’s make that stuff as intuitive (read: idiot-proof) as possible.

All too often, this gets taken too far and results in masses of waste and redundancy being built into business processes. If I complete a task that I then hand off to somebody else only so that they check every aspect of my work, that’s a problem. A big one. The company would be better off taking my paycheque every two weeks and burning it. They could use the fire to help heat the building, and they also wouldn’t have to deal with the massive attitude problem I’d develop through being part of a system that places no trust in the work I do. That’s how the problem becomes self-perpetuating: now I’m not only disengaged, I’m also not accountable for my work in any meaningful way – if I do a bad job then the person who’s checking everything I do will fix it anyway, so what does it matter? So of course I do a bad job; why would I bother trying to do anything else? “Ha!” say the powers that be, “told you it was necessary to check over everything. We find errors in 65% of the tasks that Jason completes.”

So how do you tackle this from a BPR perspective? Two ways. Firstly, you have to trust people to do the work they were hired to do. That’s easier said than done sometimes, but dealing with people’s performance issues (whether perceived or real) and coaching is unlikely to be within your remit as a Business Analyst anyway, so why make it your business? If there’s one thing I’ve learned about people it’s that you get from them what you expect to get from them. If you have low expectations and treat people like idiots, you get back poor performance. If you have high expectations and provide people with autonomy and, most importantly, accountability, then you’ll get back the high performance you’re asking for. Quite aside from all that, if the “lowest common denominator” you have in mind when designing a process really does require that a second person checks over everything they do, then you should fire that person and start over with a new “lowest common denominator” in mind.

Now that we have an accountable, high-performing workforce completing our process, the second thing we need to do is allow for some mistakes. If we demand perfection from the output of our process then rework and unnecessary checks will almost inevitably be the price we pay within the process. Six Sigma (which is what we’re all striving for these days, right?) allows for a rate of 3.4 defects per million opportunities, and that comes from the world of manufacturing where machines are doing much of the actual work. If your people-based process really expects that level of accuracy then my recommendation would be to turn your attention away from increasing the process’s sigma level and figure out a better method of managing expectations and dealing with the inevitable defects instead.
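That 3.4-per-million figure follows from the normal distribution combined with the conventional 1.5-sigma long-term shift. A quick sketch of the arithmetic, using only the standard library:

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities at a given sigma level,
    applying the conventional 1.5-sigma long-term drift."""
    z = sigma_level - shift
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided normal tail probability
    return 1_000_000 * tail

for s in (3, 4, 5, 6):
    print(f"{s} sigma ~ {dpmo(s):,.1f} defects per million opportunities")
```

At six sigma this yields the famous 3.4 DPMO; at three sigma – a far more realistic target for a people-based process – it’s roughly 66,800, which is the gulf the paragraph above is pointing at.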

You are NOT a Software Engineer!

Interesting. I’m not a software engineer or a gardener, but this certainly rings true based on my own experience of working with IT. Of course, there’s a reason enterprise analysis exists as a discipline and warrants its own chapter in the BABOK.

Chris Aitchison:

You are NOT a Software Engineer!