
Webinar | Metrics & Statistics Don't Have to be Scary

Posted by Ryan Rippey


Mar 5, 2019 7:01:00 AM

A while back, we had the honor of hosting a webinar with our Senior Advisor, Mark Graban, author of "Measures of Success: React Less, Lead Better, Improve More." Mark’s new book focuses on managing variation, understanding data, and leading improvement. Mark has been a part of KaiNexus in various capacities since 2011. He is an internationally recognized consultant, published author, professional speaker, and blogger. Mark has a Bachelor of Science in industrial engineering from Northwestern and a Master's in mechanical engineering and an MBA from MIT.

This post is a recap of Mark’s presentation. However, the webinar is full of examples that make much more sense when you can see the charts he uses to explain them, so we highly recommend that you watch the webinar.

Metrics & Statistics Don't Have to be Scary

Presented by Mark Graban, author of 
"Measures of Success: React Less, Lead Better, Improve More"

Watch Now


In this webinar, you will learn:

  • How to replace your orange and black -- I mean red and green -- boo-wling charts with Process Behavior Charts
  • How Process Behavior Charts helped our director of marketing and CEO stop howling at the moon every time a metric was a little bit worse
  • How you can apply these decidedly non-spooky methods in any type of organization

 



Why do metrics and statistics seem scary?

When organizations start tracking data, leaders can get caught up in the red and green of metrics. Managers roam the halls looking for charts and demanding root cause reviews for every item that’s in the red. That’s unnecessarily frightening for workers, because not every item in the red is worth reacting to in that way. There are ways to distinguish between meaningful changes in a metric and what you might call noise, so we can start to look at our metrics as more than just a breakdown of red or green. When we overreact to every red or worse-than-average data point, and make people look for a root cause for every small change in a metric, we create black holes of wasted time.

The Facts of Life

Every metric has variation. Many leaders and improvement managers are of the mindset that a consistent process should deliver consistent results. But the fact is that a consistent process will always generate results with some level of variation. The real question is, “How much variation?”

As an example, at KaiNexus, we track key metrics and use control charts all the time. One of the things we follow is the number of people who register for our webinars. If you look at a control chart of our webinar registrations, you will see that there is variation, and we can’t always put our finger on its causes. You’ll notice that about half the time we’re above average and half the time we’re below average, which is about what one would expect.

We did have one webinar in 2017 that had a big spike in registrations, pushing it above the upper control limit of our chart. It was a webinar by Jess Orr on how to use A3 thinking in everyday life. (Watch it. It's great.)  We think the spike in registrations was due to great LinkedIn promotion, but when we tried to apply those same tactics to a follow-up webinar, we didn’t get anywhere near the number of registrations. So when a chart like this shows us the difference between signal and noise, that signal is worth reacting to, but you might not always find the answer to why you had a point that’s out of control.

The use of process control (or process behavior) charts should teach us not to get too excited, not to get too upset, and not to freak out about noise in our metrics. We can use these charts to determine whether we are seeing routine fluctuation or a meaningful shift.

 

The Value of Visualization

Sometimes organizations post a list of numbers that indicate the results of a process and call it a dashboard. But it’s tough to spot trends in a list of numbers; we can’t distinguish between signal and noise. You may have a table where the results are half red and half green. Are the reds really worth reacting to? Do we have routine variation that’s mostly noise, or meaningful signals that are really worth paying attention to? Importantly, we also want to know whether we can predict the future performance of a metric based on past data.

What we want our visual tools to do is help us determine whether we are achieving our targets. Are we meeting our goals sometimes, never, or consistently? We also need to answer the question: are we improving? Finally, we need to know how we can improve. People shouldn’t always be running a panicked root cause analysis in response to noise in the system; that ends up being a time suck. But when we have a stable system that isn’t consistently achieving our goal, we can ask how to improve the system.

The process behavior chart methodology allows us to predict future results within a range unless the system changes. There are some rules we can use to tell us if there’s a signal of a system change. If we’ve intentionally made a change to the system, we can use process behavior charts to help improve some of the cause and effect analysis.


The Math Zone

When you are looking for a signal in a process behavior chart, you don’t just guess; you look for signals based on math. There are three rules for detecting a signal in the chart. Rule one looks for any data point above or below the calculated process limits. When that occurs, you have a strong signal that something has changed in the system; it is unlikely to have happened by chance. Rule two looks for eight or more consecutive data points above or below the average. That’s also statistically unlikely to happen randomly, so it’s a signal that there’s been a shift in the process. Rule three looks for a cluster of three consecutive points (or three out of four) that are closer to a limit than they are to the average.
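As a rough illustration (not taken from the webinar; the data values are invented), the three rules can be sketched in a few lines of Python, using the common XmR-chart convention of setting the natural process limits at the average plus or minus 2.66 times the average moving range:

```python
# Made-up metric values, with one obvious spike at index 9.
data = [52, 50, 53, 51, 49, 52, 50, 51, 50, 72, 51, 52]

mean = sum(data) / len(data)
# Average moving range: mean absolute difference between consecutive points.
mr = [abs(b - a) for a, b in zip(data, data[1:])]
avg_mr = sum(mr) / len(mr)

# Natural process limits for an XmR chart (the 2.66 scaling constant).
upper = mean + 2.66 * avg_mr
lower = mean - 2.66 * avg_mr

# Rule 1: any point outside the limits.
rule1 = [i for i, x in enumerate(data) if x > upper or x < lower]

# Rule 2: eight or more consecutive points on the same side of the average.
rule2 = []
side, run = 0, 0
for i, x in enumerate(data):
    s = 1 if x > mean else -1
    if s == side:
        run += 1
    else:
        side, run = s, 1
    if run >= 8:
        rule2.append(i)

# Rule 3: three out of four consecutive points closer to a limit than
# to the average, i.e., in the outer half of the band.
def near_limit(x):
    return x > mean + 1.33 * avg_mr or x < mean - 1.33 * avg_mr

rule3 = [i for i in range(3, len(data))
         if sum(near_limit(x) for x in data[i - 3:i + 1]) >= 3]

print(rule1)  # → [9]: only the spike is a Rule 1 signal
```

With this made-up series, only the spike trips Rule 1; the rest of the points are routine noise that wouldn't warrant a root cause review.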

Common Mistakes

We can make mistakes when we overreact to noise or when we miss signals. Overreacting to noise wastes a lot of time, and missing signals means losing opportunities to improve the performance of our system. Another mistake is confusing the natural process limits with a goal or target. And finally, some people use a calculated standard deviation to compute the control limits; this is not recommended for the purpose of measuring process health.
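To see why the standard-deviation shortcut is discouraged, here is a small hypothetical sketch (the numbers are invented): when a series contains a sustained shift, a global standard deviation counts the shift itself as "variation," widening the limits enough to hide the signal, while limits based on the average moving range stay tight:

```python
import statistics

# Made-up series with a sustained upward shift halfway through.
data = [50, 51, 49, 50, 52, 50, 49, 51, 60, 61, 59, 60, 61, 60, 59, 61]

mean = statistics.mean(data)
mr = [abs(b - a) for a, b in zip(data, data[1:])]
avg_mr = statistics.mean(mr)

# Moving-range limits reflect only point-to-point noise.
upper_mr = mean + 2.66 * avg_mr
# Standard-deviation limits swell because the shift inflates the stdev.
upper_sd = mean + 3 * statistics.stdev(data)

print(upper_mr < upper_sd)  # → True: the SD-based limit is wider
```

With these numbers, several of the shifted points sit above the moving-range limit and would be flagged, while none of them exceed the standard-deviation limit, so that approach would miss the shift entirely.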

We hope this explanation has you feeling a little less fearful of using data to drive decisions and foster improvement. As we mentioned, the webinar has a ton of specific examples, so we hope you’ll check it out.

Topics: Quality, Webinars, Visual Management
