In a recent article in CLO magazine Dan Pontefract questioned the value of traditional training evaluation and the Kirkpatrick approach in particular (article re-posted here). The article raised the ire of the Kirkpatrick organization and Dan responded in a follow-up post. Others had observations on the post (see Don Clark and Harold Jarche). I’ve been involved in many evaluation efforts over the years, both useful and ill-advised, and have some thoughts to impose on you.
To summarize the positions, I’ll paraphrase Dan and Wendy Kirkpatrick (probably incorrectly, but this debate happens so often that I’m using Dan and Wendy more as archetypal voices for both sides of the argument).
Dan: Learning is a continuous, connected and collaborative process. It is part formal, part informal and part social. Current evaluation methods are dated, focused only on formal learning events, and need to be tossed. (He doesn’t say it, but I think he would place less importance on evaluation in the growing world of social learning.)
Wendy (Kirkpatrick): Formal training is the foundation of performance and results. It must be evaluated in measurable terms. Clearly defined results will increase the likelihood that resources will be used effectively and efficiently to accomplish the mission. (She doesn’t say it, but I think she would suggest social learning, when considered at all, plays a supporting role to formal training.)
On the surface it sounds like they couldn’t be more polarized, like much of the current debate regarding formal vs. informal learning. Here are some thoughts that might help find some common ground (which, I’ll admit, isn’t as much fun as continuing to polarize the issue).
Confusing Training and Learning muddies the purpose of evaluation
In the last 10 years or so we’ve moved away from the language of training and instruction, with its prescriptive and objectivist underpinnings (boo!), to the softer language of learning, most recently of the social variety (yea!). Most “training” departments changed their moniker to “learning” departments to imply all the good stuff, but offer essentially the same set of (mostly formal) learning services. Learning is the new training, and this has confused our views of evaluation.
Learning (as I’m sure both Dan and Wendy would agree) truly is something we do every day, consciously, unconsciously, forever and ever, amen. We are hard-wired to learn by adopting a goal, taking actions to accomplish the goal (making a decision, executing a task, etc.) and then making adjustments based on the results of our actions. We refine these actions over time with further feedback until we are skilled or expert in a domain. This is learning.
Training is our invention to speed up this learning process by taking advantage of what has already been learned and freeing people from repeating the errors of others. In business, fast is good. Training, at least in theory, is the fast route to skilled performance versus the slow route of personal trial and error. It works very well for some tasks (routine) and less well for others (knowledge work and management development). Ironically, by stealing training from the hands of managers and from early mentor/apprenticeship approaches, we may have stolen its soul (but I digress).
In any case, like it or not, in an organizational setting, training and learning are both means to an end: individual and organizational performance. And performance provides a better filter for making decisions about evaluation than a focus on training/learning does.
Should we evaluate training?
If it’s worth the considerable cost to create and deliver training programs, it’s worth knowing whether they are working, even (maybe especially) when the answer is no. With growing emphasis on accountability, it’s hard to justify anything else. Any business unit, Training/Learning included, needs to be accountable for effective and efficient delivery of its services.
The Kirkpatrick Framework (among others) provides a rational process for doing that, but we get overzealous in the application of the four levels. In the end, it’s only the last level that really matters (performance impact), and that is the level we least pursue. And I don’t know about you, but I’ve rarely been asked for proof that a program is working. Senior management operates on judgment and the best available data for decision making far more than on any rigorous analysis. When we can point to evidence and linkages, in performance terms, that our training programs are working, that’s all we usually need. I prefer Robert Brinkerhoff’s Success Case Method for identifying evidence of training success (vs. statistical proof) and for using the results of the evaluation for continuous improvement.
Unlike Dan, I’m happy to hear the Kirkpatrick crew has updated their approach to be used in reverse as a planning tool. It’s not a new innovation, however; it’s been a foundation of good training planning for years. It puts the emphasis on proactively forecasting the effectiveness of a training initiative rather than evaluating it in the rear-view mirror.
Should we evaluate social learning?
It gets slippery here, but stay with me. If we define learning as I did above, and as many people do when discussing social learning, then I think it’s folly to even attempt Kirkpatrick-style evaluation. When learning is integrated with work, lubricated by the conversations and collaboration in social media environments, evaluation should simply be based on standard business measurements. Learning in the broadest sense is simply the human activity carried out in the achievement of performance goals. Improved performance is the best evidence of team learning. This chart from Marvin Weisbord’s Productive Workplaces: Organizing and Managing for Dignity, Meaning and Community illustrates the idea nicely:
In his post Dan suggests some measures for social learning:
“Learning professionals would be well advised to build social learning metrics into the new RPE model through qualitative and quantitative measures addressing traits including total time duration on sites, accesses, contributions, network depth and breadth, ratings, rankings and other social community adjudication opportunities. Other informal and formal learning metrics can also be added to the model including a perpetual 360 degree, open feedback mechanism”
Interesting as it may be to collect this information, these are all measures of activity, reminiscent of the type of detailed activity data gathered by Learning Management Systems. Better, I think, to implement social learning interventions and observe how they impact standard business results. Social learning is simply natural human behavior that we happen to have a very intense microscope on at the moment. To evaluate and measure it would suck dry its very human elements.
Evaluation should inform decision-making
Evaluation is meant to inform decisions. We should measure what we can, and use it in ways that don’t bias what we can’t. The Kirkpatrick approach (and others that have expanded on it over the years) has provided a decent framework for thinking about what we should expect from training and other learning interventions.
However, myopic and overly rigorous measurement can drive out judgment and cause us to start measuring trees and forget about the forest. Thinking about organizational learning as a continuum of possible interventions, rather than as an abstract dichotomy between formal and informal learning, will help us better decide on appropriate evaluation strategies matched to the situation. Whew! Maybe we need to evaluate the effectiveness of evaluation :)