I’ve written before about the increasing pressure to deliver real-world statistics that show how successful training has been, and to ensure a reasonable ROI on training investments. This trend has had a surprising and valuable side-effect: it is creating exciting opportunities to improve the overall quality of our training, and to identify learners for whom the material was ineffective.

Here’s how it works. If you capture assessment data to a central repository, then inevitably over time, you wind up with a lot of it (startlingly good observation – Ed).

Imagine being able to see the average time it takes a learner to complete a particular piece of training. If that average is based on hundreds or thousands of other learners, the comparison is statistically meaningful. You could then define a time threshold above which you assume the learner was struggling in some way (although they may just have been playing with their phone!).
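As a rough sketch, that threshold rule might look something like this, assuming completion times (in minutes) pulled from a central repository; the numbers and the two-standard-deviation cut-off are purely illustrative:

```python
import statistics

# Hypothetical completion times (minutes) from a central assessment
# repository for one course module.
historical_times = [22, 25, 19, 31, 27, 24, 45, 21, 26, 29, 23, 28]

mean_time = statistics.mean(historical_times)
std_dev = statistics.stdev(historical_times)

# One possible rule: flag anyone more than two standard deviations
# above the mean as "possibly struggling" (or possibly on their phone).
threshold = mean_time + 2 * std_dev

def possibly_struggling(completion_minutes: float) -> bool:
    return completion_minutes > threshold

print(f"Threshold: {threshold:.1f} minutes")
print(possibly_struggling(55))  # True -- worth a closer look
print(possibly_struggling(24))  # False
```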

As the amount of data grows, so does the value of the statistics you can derive from assessments and other measurements. At some point, you can start to predict the likelihood of good post-training performance based on assessment scores.

If the folks who achieve a certain score do better post-training than those who don’t, you can begin to build a predictive model that helps identify the people most likely to underperform.
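Here is a minimal sketch of that idea, using scikit-learn’s logistic regression on invented scores and outcomes; the data, the score of 65, and the model choice are all assumptions for illustration, not a prescribed method:

```python
from sklearn.linear_model import LogisticRegression

# Each row: [assessment score]; label: 1 = performed well post-training, 0 = didn't.
# These numbers are made up purely to show the shape of the approach.
scores = [[55], [60], [62], [70], [72], [78], [80], [85], [88], [92]]
did_well = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]

model = LogisticRegression()
model.fit(scores, did_well)

# Estimated probability that a learner scoring 65 goes on to perform well.
prob_good = model.predict_proba([[65]])[0][1]
print(f"p(good post-training performance | score 65) = {prob_good:.2f}")
# Below some agreed cut-off, flag the learner for a closer look.
```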

If you’re an xAPI kind of person (and you should be), then it’s also possible to gather this information in real time. You can apply rules to the data that, when triggered, allow you to intervene and help a learner. For example, if someone is clearly below an arbitrary level of performance on some task, an instructor or your learning software can offer remedial help.
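A simple rule over incoming xAPI statements might look like the sketch below; the statement fields (actor, verb, result.score.scaled) follow the xAPI spec, but the threshold and the notify_instructor() helper are hypothetical placeholders:

```python
# Arbitrary scaled-score trigger point (xAPI scaled scores run from -1 to 1,
# with quiz-style activities typically using 0 to 1).
REMEDIATION_THRESHOLD = 0.6

def notify_instructor(learner_email: str, activity_id: str, score: float) -> None:
    # Placeholder: in practice this might email an instructor or queue
    # remedial content in the LMS.
    print(f"Flag {learner_email} on {activity_id}: scaled score {score:.2f}")

def handle_statement(statement: dict) -> None:
    # Pull the scaled score, if the statement carries one, and apply the rule.
    score = statement.get("result", {}).get("score", {}).get("scaled")
    if score is not None and score < REMEDIATION_THRESHOLD:
        notify_instructor(
            statement["actor"]["mbox"],
            statement["object"]["id"],
            score,
        )

# Example incoming statement (abridged)
handle_statement({
    "actor": {"mbox": "mailto:pat@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed"},
    "object": {"id": "http://example.com/activities/fire-safety-quiz"},
    "result": {"score": {"scaled": 0.45}},
})
```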

To make this work, you do need a statistically valid sample, as well as a willingness to prune outliers and reassess the trigger points of your rules.
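For instance, pruning outliers with an interquartile-range filter before recomputing a trigger point might look like this; the data and the 1.5 × IQR convention are illustrative, not prescriptive:

```python
import statistics

def prune_outliers(values: list[float]) -> list[float]:
    # Drop anything outside 1.5 * IQR of the middle 50% of values.
    q1, q2, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if low <= v <= high]

# 180 minutes probably means a tab left open overnight, not a struggling learner.
completion_times = [22, 25, 19, 31, 27, 24, 180, 21, 26, 29, 23, 28]
cleaned = prune_outliers(completion_times)
new_threshold = statistics.mean(cleaned) + 2 * statistics.stdev(cleaned)
print(f"Recomputed threshold: {new_threshold:.1f} minutes")
```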

Hey, if you walk into a bar a thousand times, you’re bound to gather some data!