Evaluating with the Success Case Method

In my last post I mentioned that I prefer the Success Case Method for evaluating learning (and other) interventions to the Kirkpatrick approach. A few readers contacted me asking for information on the method and why I prefer it. Here’s a bit of both.

About the Success Case Method

The method was developed by Robert Brinkerhoff as an alternative (or supplement) to the Kirkpatrick approach and its derivatives. It is very simple and fast (which is part of its appeal) and goes something like this:

Step 1. Identify targeted business goals and impact expectations

Step 2. Survey a large representative sample of all participants in a program to identify high impact and low impact cases

Step 3. Analyze the survey data (a rough sketch of this screening step appears after Step 5) to identify:

  • a small group of successful participants
  • a small group of unsuccessful participants

Step 4. Conduct in-depth interviews with the two selected groups to:

  • document the nature and business value of their application of learning
  • identify the performance factors that supported learning application and obstacles that prevented it.

Step 5. Document and disseminate the story

  • report impact
  • applaud successes
  • use data to educate managers and organization

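To make Steps 2 and 3 concrete, here is a minimal sketch of the screening that turns survey data into the two interview groups. It is illustrative only; the method itself prescribes no particular tooling, and the field names, impact scale, and group sizes below are hypothetical.

```python
# Illustrative sketch of the Step 2-3 screening (not part of Brinkerhoff's method itself).
# Assumes each survey response carries a self-reported impact rating,
# e.g. 1 (no application) to 5 (clear business result). Names and cut-offs are hypothetical.

from dataclasses import dataclass

@dataclass
class Response:
    participant: str
    impact_score: int  # 1-5 self-reported application/impact rating

def select_cases(responses, group_size=5):
    """Return (success_cases, non_success_cases) to invite for in-depth interviews."""
    ranked = sorted(responses, key=lambda r: r.impact_score, reverse=True)
    return ranked[:group_size], ranked[-group_size:]

survey = [
    Response("A. Patel", 5),
    Response("B. Osei", 2),
    Response("C. Liu", 4),
    Response("D. Moreau", 1),
]
successes, non_successes = select_cases(survey, group_size=2)
print([r.participant for r in successes])      # highest reported impact -> success interviews
print([r.participant for r in non_successes])  # lowest reported impact -> non-success interviews
```
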
The process produces two key outputs:

  • In-depth stories of documented business effect that can be disseminated to a variety of audiences
  • Knowledge of the factors that enhance or impede the effect of training on business results. Factors associated with successful application of new skills are compared and contrasted with those that impede application.

It answers practical and common questions we have about training and other initiatives:

  • What is really happening? Who’s using what, and how well? Who’s not using things as planned? What’s getting used, and what isn’t? Which people and how many are having success? Which people and how many are not?
  • What results are being achieved? What value, if any, is being realized? What goals are being met? What goals are not? Is the intervention delivering the promised and hoped for results? What unintended results are happening?
  • What is the value of the results? What sort of dollar or other value can be placed on the results? Does the program appear to be worthwhile? Is it producing results worth more than its costs? What is its return on investment? How much more value could it produce if it were working better? (A simple sketch of the ROI arithmetic follows this list.)
  • How can it be improved? What’s helping? What’s getting in the way? What could be done to get more people to use it? How can everyone be more like those few who are most successful?

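For the return-on-investment question above, the arithmetic people usually have in mind is the simple net-benefit ratio. Here is a minimal sketch with made-up figures; the Success Case Method itself leans on documented cases and estimates rather than this kind of precise accounting.

```python
# Hypothetical ROI arithmetic for the question above; the figures are invented,
# and the Success Case Method favors documented evidence over precise accounting like this.

def roi_percent(benefits: float, costs: float) -> float:
    """Classic training ROI: net benefit expressed as a percentage of cost."""
    return (benefits - costs) / costs * 100

estimated_benefits = 180_000  # dollar value traced to successful application (assumed)
program_costs = 120_000       # design, delivery, and participant time (assumed)

print(f"ROI: {roi_percent(estimated_benefits, program_costs):.0f}%")  # -> ROI: 50%
```
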
Here’s a good Brinkerhoff article on the method from a 2005 issue of Advances in Developing Human Resources: “The Success Case Method: A Strategic Evaluation Approach to Increasing the Value and Effect of Training.”

There are some important differences between Kirkpatrick-based methods and the Success Case Method. The following table, developed by Brinkerhoff, differentiates the two approaches.

Why I like it

Here are five reasons:

1. Where Kirkpatrick (and Phillips and others) focus on gathering proof of learning effectiveness and performance impact using primarily quantitative and statistical measures, the Success Case Method focuses on gathering compelling evidence of effectiveness and impact through qualitative methods and naturalistic data gathering. Some organizational decisions require hard proof and statistical evidence. In my experience, training is not one of them. At best, training decisions are usually judgment calls made with the best available information at the time. Statistical proof is often overkill and causes managers to look at each other in amusement. All they really need is some good evidence: some examples of where things are going well and where they aren’t. They are happy to trade statistical significance for authentic verification from real employees.

2. We spend a lot of time twisting ourselves in knots trying to isolate the effects of training from the other variables that mix with skills to impact performance. Factors such as the opportunity to use the skills, how the skills are supported, the consequences of using the skills and others all combine to produce performance impact. Only we are hell-bent on separating these factors. Our clients (internal and external) are interested only in the performance improvement. In the end it is irrelevant to them whether it was precisely the training that produced the improvement. They simply would like some confirmation that an intervention improved performance and, when it didn’t, how we can modify it and other variables to make it work. The Success Case Method accepts that other factors are at work when it comes to impact on performance and concentrates on the impact of the overall intervention.

3. The approach can be used for any type of intervention designed to improve performance, including training, performance support systems, information solutions, communities of practice, improved feedback systems, informal and semi-structured learning initiatives and social learning initiatives.

4. Success Case Method results are documented and presented as “stories”. We have learned the power of stories for sharing knowledge in recent years. Why not use the same approach to share our evaluation results instead of the dry and weighty tomes of analysis we often produce?

5. It’s fast, it’s simple, and it has a growing track record.

To learn more:

The Success Case Method: Find Out Quickly What’s Working and What’s Not

Telling Training’s Story: Evaluation Made Simple, Credible, and Effective

High Impact Learning: Strategies For Leveraging Performance And Business Results From Training Investments

Evaluating Training and Learning Circa 2011

In a recent article in CLO magazine Dan Pontefract questioned the value of traditional training evaluation and the Kirkpatrick approach in particular (article re-posted here). The article raised the ire of the Kirkpatrick organization and Dan responded in a follow-up post. Others had observations on the post (see Don Clark and Harold Jarche). I’ve been involved in many evaluation efforts over the years, both useful and ill-advised, and have some thoughts to impose on you.

To summarize the positions, I’ll paraphrase Dan and Wendy Kirkpatrick (probably incorrectly, but this debate happens so often that I’m using Dan and Wendy more as archetypal voices for the two sides of the argument).

Dan: Learning is a continuous, connected and collaborative process. It is part formal, part informal and part social. Current evaluation methods are dated, focused only on formal learning events, and need to be tossed. (He doesn’t say it, but I think he would place less importance on evaluation in the growing world of social learning.)

Wendy (Kirkpatrick): Formal training is the foundation of performance and results. It must be evaluated in measurable terms. Clearly defined results will increase the likelihood that resources will be used most effectively and efficiently to accomplish the mission. (She doesn’t say it, but I think she would suggest that social learning, when considered at all, is simply in a supporting role to formal training.)

On the surface it sounds like they couldn’t be more polarized, like much of the current debate regarding formal vs. informal learning. Here are some thoughts that might help find some common ground (which, I’ll admit, isn’t as much fun as continuing to polarize the issue).

Confusing Training and Learning muddies the purpose of evaluation

In the last 10 years or so we’ve moved away from the language of training and instruction, with its prescriptive and objectivist underpinnings (boo!), to the softer language of learning, most recently of the social variety (yea!). Most “training” departments changed their moniker to “learning” departments to imply all the good stuff, but offer essentially the same set of (mostly formal) learning services. Learning is the new training, and this has confused our views of evaluation.

Learning (as I’m sure both Dan and Wendy would agree) truly is something we do every day, consciously, unconsciously, forever and ever, amen. We are hard-wired to learn by adopting a goal, taking actions to accomplish the goal (making a decision, executing a task, etc.) and then making adjustments based on the results of our actions. We refine these actions over time with further feedback until we are skilled or expert in a domain. This is learning.

Training is our invention to speed up this learning process by taking advantage of what has already been learned and freeing people from repeating the errors of others. In business, fast is good. Training, at least in theory, is the fast route to skilled performance versus the slow route of personal trial and error. It works very well for some tasks (routine work) and less well for others (knowledge work and management development). Ironically, by stealing training from the hands of managers and from early mentor/apprenticeship approaches, we may have stolen its soul (but I digress).

In any case, like it or not, in an organizational setting training and learning are both means to the same end: individual and organizational performance. And performance provides a better filter for decisions about evaluation than a focus on training versus learning.

Should we evaluate training?

If it’s worth the considerable cost to create and deliver training programs, it’s worth knowing whether they are working, even (maybe especially) when the answer is no. With the growing emphasis on accountability, it is hard to justify anything else. Any business unit, Training/Learning included, needs to be accountable for effective and efficient delivery of its services.

The Kirkpatrick Framework (among others) provides a rational process for doing that, but we get overzealous in the application of the four levels. In the end, it’s only the last level that really matters (performance impact), and that is the level we pursue least. And I don’t know about you, but I’ve rarely been asked for proof that a program is working. Senior management operates on judgment and the best available data for decision-making far more than on any rigorous analysis. When we can point to evidence and linkages, in performance terms, that our training programs are working, that’s usually all we need. I prefer Robert Brinkerhoff’s Success Case Method for identifying evidence of training success (versus statistical proof) and for using the results of the evaluation for continuous improvement.

Unlike Dan, I’m happy to hear the Kirkpatrick crew has updated their approach so it can be used in reverse as a planning tool. It’s not a new innovation, however; it’s been a foundation of good training planning for years. It puts the emphasis on proactively forecasting the effectiveness of a training initiative rather than evaluating it in the rear-view mirror.

Should we evaluate social learning?

It gets slippery here, but stay with me. If we define learning as I did above, and as many people do when discussing social learning, then I think it’s folly to even attempt Kirkpatrick-style evaluation. When learning is integrated with work, lubricated by the conversations and collaboration in social media environments, evaluation should simply be based on standard business measurements. Learning in the broadest sense is simply the human activity carried out in the achievement of performance goals. Improved performance is the best evidence of team learning. This chart from Marvin Weisbord’s Productive Workplaces: Organizing and Managing for Dignity, Meaning and Community illustrates the idea nicely:


In his post Dan suggests some measures for social learning:

“Learning professionals would be well advised to build social learning metrics into the new RPE model through qualitative and quantitative measures addressing traits including total time duration on sites, accesses, contributions, network depth and breadth, ratings, rankings and other social community adjudication opportunities. Other informal and formal learning metrics can also be added to the model including a perpetual 360 degree, open feedback mechanism”

Interesting as it may be to collect this information, these are all measures of activity, reminiscent of the kind of detailed activity data gathered by Learning Management Systems. Better, I think, to implement social learning interventions and observe how they impact standard business results. Social learning is simply natural human behavior that we happen to have a very intense microscope on at the moment. To evaluate and measure it would suck its very human elements dry.

Evaluation should inform decision-making

Evaluation is meant to inform decisions. We should measure what we can and use it in ways that don’t bias our view of what we can’t. The Kirkpatrick approach (and others that have expanded on it over the years) has provided a decent framework for thinking about what we should expect from training and other learning interventions.

However, myopic and overly rigorous measurement can drive out judgment and cause us to start measuring trees and forgetting about the forest. Thinking about organizational learning as a continuum of possible interventions, rather than as an abstract dichotomy between formal and informal learning, will help us better decide on evaluation strategies matched to the situation. Whew! Maybe we need to evaluate the effectiveness of evaluation 🙂