
The Learning End Game Trap

Perhaps you’ve re-committed to improving learning as the mission of your department (or next big initiative, or…). It’s well-meaning but can be self-defeating (or worse, self-fulfilling). The term leaves the impression that learning is the end game, your raison d’être. The real end game is performance: individual and organizational, defined in terms the business uses to measure itself. Sure, you don’t have control over all the factors that influence performance, but that doesn’t mean your solutions can’t be intimately connected to them. Thinking performance first is liberating and opens up whole new perspectives on the types of solutions you can and should be bringing to the table.

Antidote to the end game trap: Performance Thinking (Cathy Moore and Carl Binder have nice methods for deriving learning from performance needs).

The Planning Trap

I used to believe in the perfect annual plan, all wrapped in MBO goodness, aligned and linked to organizational objectives. But over time I’ve come to two conclusions. First, the plans are rarely fully realized; the more interesting innovations and strategies emerged from responses to opportunities throughout the year. Second, senior teams rarely have their act together enough to create strategies and business plans meaningful enough to wrap a good training plan around. Highly analytic planning processes can deceive us into thinking we are planning strategically and improving future organizational performance.

To borrow an argument from Henry Mintzberg, strategy is a pattern embodied in day-to-day work more than an annual plan. Strategy is consistency in behaviour, whether or not intended. Formal plans may go unrealized, while patterns may emerge from daily work. In this way strategy can emerge from informal learning. I’ve always liked this image of planning from Mintzberg:

[Image: planning diagram from Henry Mintzberg, “The Rise and Fall of Strategic Planning” (1994)]

Antidote to the planning trap: Beware the best-laid plans. Go ahead and work with your business units to create a simple training plan linked to whatever business plans they may have in place. But have a rock-solid process in place to respond to the inevitable requests that fall outside the plan. Be ready to develop solutions that adapt quickly to whatever white water your company or industry happens to be swimming in. Be nimble and flexible in response to business changes. Watch for patterns and successes in that work and incorporate them into your training strategy.

The Measurement and Evaluation Trap

Evaluation is a hot button that causes more hand-wringing and professional guilt than it should. Evaluation is meant to inform decisions. Some types of training are simply easier to measure than others. Everything can be measured, but not everything is worth measuring. When you do evaluate, use business metrics already in use and explore methods focused on collecting evidence of success rather than definitive proof. Myopic and overly rigorous measurement drives out judgment and causes us to start measuring trees and forget about the forest. Our attempts at evaluation are often disproportionate to evaluation elsewhere in the organization (we only think everyone else knows their ROI).

Antidote to the measurement trap: Don’t emphasize short-term ROI or cost-reduction measures at the expense of true investments in the future that do not have immediate and calculable ROI. When you do evaluate, use the existing measures the business uses to judge success.

The Technology Trap

We seem to be hard-wired to line up enthusiastically behind each new wave of technology. Each wave has left behind tools and innovations that changed learning for the better (and also, embarrassingly, for the worse). Technology offers increasingly wondrous ways to improve access to learning, immerse employees in true learning experiences, share learning in collaborative spaces and extend the tools we use to do our work. And it offers equally wondrous ways to create truly disastrous learning experiences.

Antidote to the technology trap: Understand and embrace technology, especially game-changing social media, but protect yourself from panacea thinking and keep your eye on the prize of improved performance. Success lies in the design, not the technology.

In my last post I mentioned that I prefer the Success Case Method to the Kirkpatrick approach for evaluating learning (and other) interventions. A few readers contacted me asking for information on the method and why I prefer it. Here’s a bit of both.

About the Success Case Method

The method was developed by Robert Brinkerhoff as an alternative (or supplement) to the Kirkpatrick approach and its derivatives. It is very simple and fast (which is part of its appeal) and goes something like this:

Step 1. Identify targeted business goals and impact expectations

Step 2. Survey a large representative sample of all participants in a program to identify high impact and low impact cases

Step 3. Analyze the survey data to identify (a short selection sketch follows the steps below):

  • a small group of successful participants
  • a small group of unsuccessful participants

Step 4. Conduct in-depth interviews with the two selected groups to:

  • document the nature and business value of their application of learning
  • identify the performance factors that supported learning application and obstacles that prevented it.

Step 5. Document and disseminate the story

  • report impact
  • applaud successes
  • use data to educate managers and organization
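
To make the extreme-group selection in Step 3 concrete, here is a minimal sketch in Python. It assumes a hypothetical survey where each participant gives a single self-rated impact score; the field names, scale and group size are illustrative only and are not part of Brinkerhoff’s specification, which typically draws on several survey items about application and results.

```python
# Minimal sketch of Step 3: pick the extreme groups from survey results.
# Assumes a hypothetical list of responses with a self-rated "impact" score
# (1 = no application, 5 = clear business impact). Names and cut-offs are
# illustrative only.

def select_extreme_groups(responses, group_size=5):
    """Return (high_impact, low_impact) groups for follow-up interviews."""
    ranked = sorted(responses, key=lambda r: r["impact"], reverse=True)
    high_impact = ranked[:group_size]    # strongest reported application
    low_impact = ranked[-group_size:]    # weakest reported application
    return high_impact, low_impact

if __name__ == "__main__":
    survey = [
        {"participant": "A", "impact": 5},
        {"participant": "B", "impact": 2},
        {"participant": "C", "impact": 4},
        {"participant": "D", "impact": 1},
        {"participant": "E", "impact": 3},
    ]
    high, low = select_extreme_groups(survey, group_size=2)
    print("Interview for success stories:", [r["participant"] for r in high])
    print("Interview for barriers:", [r["participant"] for r in low])
```

The point is simply that only the extremes go forward to the Step 4 interviews, where the reported impact is documented and verified rather than taken at face value.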

The process produces two key outputs:

  • In-depth stories of documented business effect that can be disseminated to a variety of audiences
  • Knowledge of factors that enhance or impede the effect of training on business results. Factors associated with successful application of new skills are compared and contrasted with those that impede application.

It answers practical and common questions we have about training and other initiatives:

  • What is really happening? Who’s using what, and how well? Who’s not using things as planned? What’s getting used, and what isn’t? Which people and how many are having success? Which people and how many are not?
  • What results are being achieved? What value, if any, is being realized? What goals are being met? What goals are not? Is the intervention delivering the promised and hoped-for results? What unintended results are happening?
  • What is the value of the results? What sort of dollar or other value can be placed on the results? Does the program appear to be worthwhile? Is it producing results worth more than its costs? What is its return on investment? How much more value could it produce if it were working better? (A small worked example of the ROI arithmetic follows this list.)
  • How can it be improved? What’s helping? What’s getting in the way? What could be done to get more people to use it? How can everyone be more like those few who are most successful?
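
As a brief aside on the ROI question above, the arithmetic behind it is simply net benefit divided by cost. The figures in this sketch are invented purely for illustration; in a Success Case evaluation the benefit side would come from the documented success cases.

```python
# Illustrative ROI arithmetic only; the cost and benefit figures are invented.
program_cost = 50_000          # total cost of the intervention
documented_benefit = 75_000    # business value evidenced by the success cases

roi = (documented_benefit - program_cost) / program_cost
print(f"ROI: {roi:.0%}")       # prints "ROI: 50%"
```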

Here’s a good Brinkerhoff article on the method from a 2005 issue of Advances in Developing Human Resources: “The Success Case Method: A Strategic Evaluation Approach to Increasing the Value and Effect of Training”.

There are some important differences between Kirkpatrick-based methods and the Success Case Method. The following table, developed by Brinkerhoff, differentiates the two approaches.

Why I like it

Here are five reasons:

1. Where Kirkpatrick (and Phillips and others) focus on gathering proof of learning effectiveness and performance impact using primarily quantitative and statistical measures, the Success Case Method focuses on gathering compelling evidence of effectiveness and impact through qualitative methods and naturalistic data gathering. Some organizational decisions require hard proof and statistical evidence. In my experience, training is not one of them. At best, training decisions are usually judgment calls using the best available information at the time. Statistical proof is often overkill and causes managers to look at each other in amusement. All they really need is some good evidence: some examples of where things are going well and where they aren’t. They are happy to trade statistical significance for authentic verification from real employees.

2. We spend a lot of time twisting ourselves in knots trying to isolate the effects of training from the other variables that mix with skills to impact performance. Factors such as opportunity to use the skills, how the skills are supported, consequences of using the skills and others all combine to produce performance impact. Yet we are hell-bent on separating these factors. Our clients (internal and external) are interested only in the performance improvement. In the end it is irrelevant to them whether it was precisely the training that produced the improvement. They simply want some confirmation that an intervention improved performance, and when it didn’t, some sense of how we can modify it and other variables to make it work. The Success Case Method accepts that other factors are at work when it comes to impact on performance and concentrates on the impact of the overall intervention.

3. The approach can be used for any type of intervention designed to improve performance, including training, performance support systems, information solutions, communities of practice, improved feedback systems, informal and semi-structured learning initiatives and social learning initiatives.

4. Success Case Method results are documented and presented as “stories”. We have learned the power of stories for sharing knowledge in recent years. Why not use the same approach to share our evaluation results instead of the dry and weighty tomes of analysis we often produce?

5. It’s fast, it’s simple, and it has a growing track record.

To learn more:

The Success Case Method: Find Out Quickly What’s Working and What’s Not

Telling Training’s Story: Evaluation Made Simple, Credible, and Effective

High Impact Learning: Strategies For Leveraging Performance And Business Results From Training Investments
