I’ve just finished six days of training in advanced statistics. The course focused on understanding the data we collect and on making sure that we collect the right sort of information. We also discussed how to interrogate the numbers to fully understand how a process is working, both in its current state and following improvement.
As you might expect, the statistics often confirm what common sense or gut instinct might have prompted us to look at: for example, days with fewer staff lead to longer customer waiting times. “The theory of probabilities is at bottom nothing but common sense reduced to calculus.” (Laplace, Théorie analytique des probabilités, 1820.) Of course, data can also be used to prove or disprove a lot of theories. When a project team has certain ideas about what may be causing a problem with a transactional process, we can use the data to build a shared understanding of what the problem actually is and, indeed, which is the biggest problem we need to focus on.
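As an illustration of testing such a theory with data, a sketch of a simple two-sample comparison in Python follows. The waiting times and staffing split below are entirely hypothetical, invented for illustration, and Welch’s t statistic is just one of several ways a team might compare the two groups:

```python
import statistics

# Hypothetical waiting times (minutes), grouped by staffing level
full_staff = [12, 9, 14, 11, 10, 13, 12]
low_staff = [18, 21, 17, 24, 19, 22, 20]

mean_full = statistics.mean(full_staff)
mean_low = statistics.mean(low_staff)

# Welch's t statistic: the difference in means scaled by the
# combined standard error of the two samples
se = (statistics.variance(full_staff) / len(full_staff)
      + statistics.variance(low_staff) / len(low_staff)) ** 0.5
t = (mean_low - mean_full) / se

print(f"Mean wait, full staffing: {mean_full:.1f} min")
print(f"Mean wait, low staffing:  {mean_low:.1f} min")
print(f"Welch t statistic: {t:.1f}")
```

A large t statistic (roughly, above 2) suggests the gap between the two means is bigger than day-to-day noise alone would explain, which is the kind of shared, numerical grounding a project team can rally around.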
For many of our process improvement projects we do not have much accessible data about how the process is behaving, so we often need to collect it from scratch. One of the biggest challenges is collecting a sufficient amount to be statistically useful without overloading the staff involved in the process with additional workload. Frequently we would love to have more, but we need to ensure that our collection methods are lean and focused only on the scope of the improvement project. To me, one of the biggest advantages of using statistical measures comes after the improvement phase: providing staff with basic control charts, or building in simple data collection methods, enables people to monitor their processes and continue to identify problems and areas for future improvement.
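To sketch what a basic control chart involves, here is one common form, an individuals (XmR) chart, applied to hypothetical daily waiting times. The figures are invented for illustration; the method estimates day-to-day variation from the average moving range and flags any point outside three-sigma limits:

```python
import statistics

# Hypothetical daily customer waiting times (minutes) collected by staff
daily_waits = [11, 13, 12, 10, 14, 12, 11, 13, 25, 12]

centre = statistics.mean(daily_waits)

# Individuals (XmR) chart: estimate short-term variation from the
# average moving range, divided by the d2 constant for n=2 (1.128)
moving_ranges = [abs(b - a) for a, b in zip(daily_waits, daily_waits[1:])]
sigma = statistics.mean(moving_ranges) / 1.128
upper = centre + 3 * sigma
lower = max(centre - 3 * sigma, 0)  # a waiting time cannot be negative

# Any point outside the control limits is a signal worth investigating
signals = [(day, wait) for day, wait in enumerate(daily_waits, start=1)
           if wait > upper or wait < lower]

print(f"Centre line {centre:.1f} min, limits {lower:.1f} to {upper:.1f} min")
for day, wait in signals:
    print(f"Day {day}: {wait} min is outside the control limits")
```

The appeal for frontline staff is that the chart separates routine variation from genuine signals: most days need no action, and the occasional flagged point tells them exactly where to look.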