Archives for category: Foundations

If you’ve been in the learning business for a while you’ve likely seen a few examples where learning initiatives have simply missed the mark. They didn’t produce the anticipated return on investment. Planned performance improvement did not materialize. Learners didn’t develop the skills targeted by the program. Or if they did, those skills certainly aren’t being applied on the job. Maybe all of the above.

Failures like these are more common than we like to think. Sometimes they’re not as visible as other, more tangible areas of the business like manufacturing defects (Toyota’s sticky brakes) or financial misadventures (take your pick). When substantial resources have been committed to well-intentioned, hard-to-measure initiatives like training, sometimes success is declared when real evidence suggests otherwise (somehow I’m seeing George Bush on the deck of an aircraft carrier here). The phenomenon is not limited to formal training. Informal learning efforts, and I suspect a few recent social learning/social media efforts, have also met with limited success.

If you’re honest, some of those programs might have even been your own (I know a few of mine have had less than stellar impact). Or you may have had a role in a larger effort. Perhaps you designed it, or identified the original need, or delivered it, or managed the implementation effort. Maybe you created an impeccable design that was poorly implemented, or vice versa. Learning can derail in many ways.

If we follow our own advice, there is a lot of learning to be had in these partial successes and outright failures. I’m doing a presentation soon titled When Learning Fails: Classic Mistakes and How to Avoid Them. I’m interested in your anecdotes and stories. Have you been involved in a learning program that did not go so well? Have you observed any train wrecks or near misses? What caused the issues? What was learned that helped improve your next initiative?

Please post any input in the comments section below, or enter a link to a blog post or Tweet to #epiclearningfail. I’ll review the responses for the most common causes and lessons learned and summarize them in later posts. Looking forward to your responses.

Update: here is the presentation from the National CSTD conference, Nov. 12, 2012

About half of the formal training provided in organizations is custom developed (the other half consists of packaged “off-the-shelf” programs). That’s a lot of training. Every week internal learning design teams and their external partners are heads down at work developing learning programs of every description to help build skills and capability unique to their organizations. In a knowledge economy, organization-specific knowledge and skill is at the heart of competitive advantage.

Yet organizations often don’t get the strategic bang for their custom learning buck. We are getting good at producing more training in shorter time periods (rapid!) but not necessarily better training, and we are using technology to reinforce these patterns, not break free from them. Training functions continue to respond to ad-hoc requests and grease those squeaky wheels.

On Monday June 20th, at 1:00 pm (EST) I am doing a free webinar to discuss ways organizations can get more strategic value from their custom learning initiatives (including informal learning). Panel guests from two Global Knowledge clients (Bell Canada and Service Canada) will participate. Feel free to join us (it’s free). Click here to register.

Here are some of the  practices we’ll be discussing.

1. (Really) Link Learning to Business Strategy

  • Business goals are your friend. Use them to support your decisions, including the decision not to respond to low-value ad-hoc requests.
  • Get hooked into the annual planning cycle to truly understand your organization’s business strategy
  • Prepare proactive annual learning plans with your customers to jointly address business needs
  • Manage ad-hoc requests professionally

2. Target Signature Competencies that Differentiate your Organization

  • In today’s knowledge economy, organizational capability, skills and knowledge set companies apart and provide real strategic advantage
  • These signature competencies are often driven by key business processes
  • Custom learning will add more value when it focuses on these core competencies and not on lower leverage ad-hoc learning needs
  • Identify pivotal jobs, roles and associated skills. Target custom learning projects squarely at these all-important core competencies

 3. Start at the End

  • Custom learning programs too often start with “content” or subject matter–a surefire way to produce bloated, dull and low-value programs
  • By starting with the performance improvement needed from jobs and roles, custom learning programs can be leaner, more effective and faster to develop. In fact you may not end up developing training at all. Performance support, information and informal learning solutions will start to become obvious choices.
  • Work backwards:  business need –>performance needs–>practice/application –> minimal content
  •  Content and subject matter should be the last decision, not the first

 4. Design with Integrity

  •  We know how to design effective learning programs.  We just usually don’t follow our own advice.  The key factors are practice, application, coaching and feedback (true even for informal learning).
  • In our efforts to meet training volume targets, respond to unplanned requests and meet impossibly short turnarounds, we opt for speed, convenience and content “coverage” at the expense of real impact
  • Set design standards that produce high impact learning and stick to them. That doesn’t mean you can’t be flexible and have different approaches for information requirements vs. deep learning requirements. But it does mean you need the knowledge to know the difference and the professional integrity to commit only to the appropriate solutions.
  • Professionalize your team. Hire people with the skills and track record to produce high impact learning and performance. Develop those who don’t. Set high standards.

 5. Get Informal

  • Formal learning programs are only one way to accomplish learning outcomes. And they are often the least effective and most costly.
  • The majority of learning taking place in your organization right now is happening through informal learning
  • Tap the full range of learning solutions from informal to non-formal and formal learning to broaden your reach and influence the 80% of learning happening outside the training function
  • Performance support systems, communities of practice, job assignments, structured experience, collaborative learning and learning 2.0 solutions are all custom solutions that can have greater strategic impact than a formal training program

6. Innovate with Technology

  • Technology has given us e-learning, automated learning administration (LMS), learning content management and collaborative design (LCMS), mobile learning, assessment tools, and more
  • It has brought efficiencies but not always improved effectiveness or strategic value
  • Web 2.0 and social media are disrupting current views of how technology can and should support learning.  That’s a good thing.
  • Be creative in how you use technology to support learning.  Don’t simply be a servant to it.  Use it as a tool to innovate rather than institutionalize mainstream approaches that don’t add value

 7. Use Partners Strategically

  • External partners can offer more than a “pair of hands” to design custom learning programs. There are many points in the analysis, design and development stages where external partners can add strategic value to your programs in ways you may not have thought of.
  • Set up partnerships with defined roles for internal players and external partners
  • Encourage knowledge sharing
  • Establish a collaborative project workspace to work and learn together
  • Merge processes to develop a seamless flow for working together

 8. Measure Success

  • If it’s important to develop strategic programs, it’s equally important to know whether you accomplished your objective
  • To be effective, evaluation has to be a part of the plan, not an afterthought.
  • Evaluation does not have to be complex and time-consuming. Use existing business measures as much as possible
  • Consider alternatives to the Kirkpatrick model
  • Don’t measure everything. Find out what’s important to the business and make that your measurement focus. If the business itself is lousy at measuring results, you have yet another opportunity to add value

July 10 update:

Here are the slides used for the Webinar mentioned in this post.  You can view a recording of the free webinar here.

The Learning Circuits Blog big question this month is:

How do you respond to the “I want it now!” request from a demanding executive?

They provide the scenario of a Type A executive with a website open on rapid instructional design prompting the “I want it now” request. (Hard to imagine, I know, and if true it presented an excellent “teachable moment” with that executive!)

While “I want it now!” is a common demand on training functions, it’s certainly not unique to us.  Ask the IT, Marketing, or Administration function and you’ll hear the same groans of recognition.  The strategies for dealing with the situation are the same and they have more to do with relationship building and consulting approaches than anything related to how quickly you can throw together a training program to meet the request.

Here are a few strategies that might help.

1. Prevention

The best strategy is a preventative one.   Training functions should have annual plans in place with their internal clients identifying skill and capability development priorities based on the business and functional needs.  The plan is ideally part of the planning cycle of the organization so business needs are “in the air” and being cascaded through the organization on a number of fronts.  For each group you support, the plan could include:

  • strategic and operating goals for the year
  • pivotal roles involved in achieving them
  • skill, knowledge and capabilities required for each role
  • training and learning approaches and programs to be developed/acquired
  • agreement on responsibilities of client and learning function
  • review plan

The joint planning process itself helps build mutual understanding of what meaningful learning solutions (formal and informal) require, and of those situations where knee-jerk training is not a solution at all. You may be lucky enough never to get the “I want it now” request. When requests do come (and you know they will) and they deviate from the jointly developed plan, you can legitimately ask what other priorities need to be dropped, and what resources need to be added, in order to fulfill this new request.

2. Rapid performance analysis

Along with your heart rate, the “I want it now” demand should raise your performance antenna.  This is your opportunity to apply the performance analysis process you know well in theory if not in practice.  There may be other root causes at work and you have a responsibility to suss them out.

But that’s not what your Type A executive asked you to do, is it? So this is not the time for a lengthy root cause analysis, but it does justify a rapid performance analysis. This is why relationship building and consulting skills are as important as process skills for the learning professional. You will need to muster your knowledge of the organization, your business acumen and the factors that impact performance to quickly ask the right questions and get to the bottom of the “issue”. Responding to the “I want it now” demand with an analysis is tricky. Do it well, however, and you may earn the respect that will avoid future “I want it now” demands.

There are a number of rapid analysis tools available that are based on “performance thinking” approaches.  For years I’ve used my own adaptation of Thomas Gilbert’s PROBE questioning heuristic.  There are many others.

3. Provide the learning “artifact” but fix the real problem

Your rapid performance analysis may indeed point to other root causes.  If your executive is blind to this despite your best efforts, you may need to provide the learning artifact but sneak in the real solution while doing so.

The “I want it now” executive is usually not as specific as you may be about what a learning program actually is. This frees you to design a “program” that can be a performance solution in the guise of a training program. And if your Type A exec wants to call it that, why not? If your program includes an improved feedback system, better information resources and a process fix, all communicated through a small training session to help employees use these simple performance support tools, you have fixed the problem and provided a “training program”. And again, often all of this can be done faster than slogging through the development cycle of a full-blown training program loaded up with rigorously defined learning objectives and practice activities.

4. Do it!

Your executive may be dead on. Don’t discount this possibility. Things change pretty quickly in business these days. Roles and skills can take unexpected turns to meet emerging business requirements. At least you have an executive who considers the importance of skills and knowledge and cares enough to make the call to you, abrupt as it may be. If that is the case, then perhaps a rapid solution is exactly what is called for. A lean program built with basic job aids, performance support tools, and creative information design can often be done even faster than “rapid e-learning”. Rapid tools are made for this kind of scenario and they can be very useful. But creative thinking with a laser focus on actual performance requirements might be even faster.

5. Don’t do it!

If your professional judgment tells you that this project would be folly and waste important resources, or if you believe it has less than a 50% chance of success, you have the responsibility to say no. Be prepared to back up your response and provide alternatives if appropriate. This is a risk not many take with the Type A executive demand. The upside is intact professional integrity and the knowledge that you have saved the company some wasted effort. The downside is…well, that’s what keeps life interesting, isn’t it :)

Each of these strategies requires strong professional judgment, authentic relationships and sound consulting approaches with your “client”.  These skills include:

  • Contracting
  • Understanding and dealing with resistance
  • Building relationships
  • Providing feedback from analysis
  • Authentic interactions
  • Managing feedback meetings
  • Internal negotiation

In my experience, the best source of guidance for these skills in our profession remains Peter Block’s Flawless Consulting.

So you better get on that.  I want it now!

In my last post I mentioned that I prefer the Success Case Method for evaluating learning (and other) interventions to the Kirkpatrick approach. A few readers contacted me asking for information on the method and why I prefer it. Here’s a bit of both.

About the Success Case Method

The method was developed by Robert Brinkerhoff as an alternative (or supplement) to the Kirkpatrick approach and its derivatives. It is very simple and fast (which is part of its appeal) and goes something like this:

Step 1. Identify targeted business goals and impact expectations

Step 2. Survey a large representative sample of all participants in a program to identify high impact and low impact cases

Step 3. Analyze the survey data to identify:

  • a small group of successful participants
  • a small group of unsuccessful participants

Step 4. Conduct in-depth interviews with the two selected groups to:

  • document the nature and business value of their application of learning
  • identify the performance factors that supported learning application and obstacles that prevented it.

Step 5. Document and disseminate the story

  • report impact
  • applaud successes
  • use data to educate managers and organization

The process produces two key outputs:

  • In-depth stories of documented business effect that can be disseminated to a variety of audiences
  • Knowledge of factors that enhance or impede the effect of training on business results. Factors associated with successful application of new skills are compared and contrasted with those that impede application.

It answers practical and common questions we have about training and other initiatives:

  • What is really happening? Who’s using what, and how well? Who’s not using things as planned? What’s getting used, and what isn’t? Which people and how many are having success? Which people and how many are not?
  • What results are being achieved? What value, if any, is being realized? What goals are being met? What goals are not? Is the intervention delivering the promised and hoped for results? What unintended results are happening?
  • What is the value of the results? What sort of dollar or other value can be placed on the results? Does the program appear to be worthwhile? Is it producing results worth more than its costs? What is its return on investment? How much more value could it produce if it were working better?
  • How can it be improved? What’s helping? What’s getting in the way? What could be done to get more people to use it? How can everyone be more like those few who are most successful?

Here’s a good Brinkerhoff article from a 2005 issue of Advances in Developing Human Resources on the method. The Success Case Method: A Strategic Evaluation Approach to Increasing the Value and Effect of Training

There are some important differences between Kirkpatrick-based methods and the Success Case Method. The following table, developed by Brinkerhoff, differentiates the two approaches.

Why I like it

Here are five reasons:

1. Where Kirkpatrick (and Phillips and others) focus on gathering proof of learning effectiveness and performance impact using primarily quantitative and statistical measures, the Success Case Method focuses on gathering compelling evidence of effectiveness and impact through qualitative methods and naturalistic data gathering. Some organizational decisions require hard proof and statistical evidence. In my experience training is not one of them. At best, training decisions are usually judgment calls using the best available information at the time. Statistical proof is often overkill and causes managers to look at each other in amusement. All they really need is some good evidence, some examples of where things are going well and where they aren’t. They are happy to trade statistical significance for authentic verification from real employees.

2. We spend a lot of time twisting ourselves in knots trying to isolate the effects of training from other variables that mix with skills to impact performance. Factors such as opportunity to use the skills, how the skills are supported, consequences of using the skills and others all combine to produce performance impact. Only we are hell-bent on separating these factors. Our clients (internal and external) are interested only in the performance improvement. In the end it is irrelevant to them whether it was precisely the training that produced the improvement. They simply would like some confirmation that an intervention improved performance, and when it didn’t, how we can modify it and other variables to make it work. The Success Case Method accepts that other factors are at work when it comes to impact on performance and concentrates on the impact of the overall intervention.

3. The approach can be used for any type of intervention designed to improve performance, including training, performance support systems, information solutions, communities of practice, improved feedback systems, informal and semi-structured learning initiatives and social learning initiatives.

4. Success Case Method results are documented and presented as “stories”. We have learned the power of stories for sharing knowledge in recent years. Why not use the same approach to share our evaluation results instead of the dry and weighty tomes of analysis we often produce?

5. It’s fast, it’s simple, and it has a growing track record.

To learn more:

The Success Case Method: Find Out Quickly What’s Working and What’s Not

Telling Training’s Story: Evaluation Made Simple, Credible, and Effective

High Impact Learning: Strategies For Leveraging Performance And Business Results From Training Investments

In a recent article in CLO magazine Dan Pontefract questioned the value of traditional training evaluation and the Kirkpatrick approach in particular (article re-posted here). The article raised the ire of the Kirkpatrick organization and Dan responded in a follow-up post. Others had observations on the post (see Don Clark and Harold Jarche). I’ve been involved in many evaluation efforts over the years, both useful and ill-advised, and have some thoughts to impose on you.

To summarize the positions I’ll paraphrase Dan and Wendy Kirkpatrick (probably incorrectly, but this debate happens so often that I’m using Dan and Wendy more as archetypal voices for the two sides of the argument):

Dan: Learning is a continuous, connected and collaborative process. It is part formal, part informal and part social. Current evaluation methods are dated, focused only on formal learning events, and need to be tossed. (He doesn’t say it but I think he would place less importance on evaluation in the growing world of social learning.)

Wendy (Kirkpatrick): Formal training is the foundation of performance and results.  It must be evaluated in measurable terms. Clearly defined results will increase the likelihood that resources will be most effectively and efficiently used to accomplish the mission.  (She doesn’t say it but I think she would suggest social learning, when considered at all, is simply in a supporting role to formal training.)

On the surface it sounds like they couldn’t be more polarized, like much of the current debate regarding formal vs. informal learning. Here are some thoughts that might help find some common ground (which, I’ll admit, isn’t as much fun as continuing to polarize the issue).

Confusing Training and Learning muddies the purpose of evaluation

In the last 10 years or so we’ve moved away from the language of training and instruction, with its prescriptive and objectivist underpinnings (boo!), to the softer language of learning, most recently of the social variety (yea!). Most “training” departments changed their moniker to “learning” departments to imply all the good stuff, but offer essentially the same set of (mostly formal) learning services. Learning is the new training, and this has confused our views of evaluation.

Learning (as I’m sure both Dan and Wendy would agree) truly is something we do every day, consciously, unconsciously, forever and ever, amen. We are hard-wired to learn by adopting a goal, taking actions to accomplish the goal (making a decision, executing a task, etc.) and then making adjustments based on the results of our actions. We refine these actions over time with further feedback until we are skilled or expert in a domain. This is learning.

Training is our invention to speed up this learning process by taking advantage of what has already been learned and freeing people from repeating the errors of others. In business, fast is good. Training, at least in theory, is the fast route to skilled performance versus the slow route of personal trial and error. It works very well for some tasks (routine) and less well for others (knowledge work and management development). Ironically, by stealing training from the hands of managers and from early mentor/apprenticeship approaches we may have stolen its soul (but I digress).

In any case, like it or not, in an organizational setting, training and learning are both means to an end–individual and organizational performance. And performance provides a better filter for decisions about evaluation than a focus on training/learning.

Should we evaluate training?

If it’s worth the considerable cost to create and deliver training programs, it’s worth knowing if they are working, even (maybe especially) when the answer is no. With the growing emphasis on accountability it’s hard to justify anything else. Any business unit, Training/Learning included, needs to be accountable for effective and efficient delivery of its services.

The Kirkpatrick framework (among others) provides a rational process for doing that, but we get overzealous in the application of the four levels. In the end, it’s only the last level that really matters (performance impact) and that is the level we least pursue. And I don’t know about you, but I’ve rarely been asked for proof that a program is working. Senior management operates on judgment and the best available data for decision making far more than on any rigorous analysis. When we can point to evidence and linkages in performance terms that our training programs are working, that’s all we usually need. I prefer Robert Brinkerhoff’s Success Case Method for identifying evidence of training success (vs. statistical proof) and for using the results of the evaluation for continuous improvement.

Unlike Dan, I’m happy to hear the Kirkpatrick crew has updated their approach to be used in reverse as a planning tool. It’s not new, however. It’s been a foundation of good training planning for years. It puts the emphasis on proactively forecasting the effectiveness of a training initiative rather than evaluating it in the rear-view mirror.

Should we evaluate social learning?

It gets slippery here, but stay with me. If we define learning as I did above, and as many people do when discussing social learning, then I think it’s folly to even attempt Kirkpatrick-style evaluation. When learning is integrated with work, lubricated by the conversations and collaboration in social media environments, evaluation should simply be based on standard business measurements. Learning in the broadest sense is simply the human activity carried out in the achievement of performance goals. Improved performance is the best evidence of team learning. This chart from Marvin Weisbord’s Productive Workplaces: Organizing and Managing for Dignity, Meaning and Community illustrates the idea nicely:


In his post Dan suggests some measures for social learning:

“Learning professionals would be well advised to build social learning metrics into the new RPE model through qualitative and quantitative measures addressing traits including total time duration on sites, accesses, contributions, network depth and breadth, ratings, rankings and other social community adjudication opportunities. Other informal and formal learning metrics can also be added to the model including a perpetual 360 degree, open feedback mechanism”

Interesting as it may be to collect this information, these are all measures of activity, reminiscent of the detailed activity data gathered by Learning Management Systems. Better, I think, to implement social learning interventions and observe how they impact standard business results. Social learning is simply natural human behavior that we happen to have a very intense microscope on at the moment. To evaluate and measure it would suck its very human elements dry.

Evaluation should inform decision-making

Evaluation is meant to inform decisions. We should measure what we can and use it in ways that don’t bias what we can’t. The Kirkpatrick approach (and others that have expanded on it over the years) has provided a decent framework for thinking about what we should expect from training and other learning interventions.

However, myopic and overly rigorous measurement can drive out judgment and cause us to start measuring trees and forget about the forest. Thinking about organizational learning as a continuum of possible interventions rather than the abstract dichotomy between formal and informal learning will help us better decide appropriate evaluation strategies matched to the situation. Whew! Maybe we need to evaluate the effectiveness of evaluation :)

I’ve been reading some Henry Mintzberg recently. His books–Managing and Managers Not MBAs–both question prevailing thinking on management and leadership and present alternatives for effective management practice and development. Both books include a model of management as a balancing act between science, art and craft. His argument is that effective management requires all three, and an overemphasis on any one results in dysfunction.

I think it also offers some insight into effective Instructional Design. Much of the recent debate regarding Instructional Design models and practice (see my own view here) seems to revolve around the prescriptive, process-based models of ADDIE (and like models) versus more open constructivist approaches, presumably more relevant for our networked and collaborative work environments. The arguments tend to get unnecessarily polarized. The following table is adapted from a similar one Mintzberg created to describe management styles. I believe it works equally well for Instructional Design practice.

Most graduate programs in Instructional Design and Educational Technology are squarely in the science column (psychology, human learning, and systems design). New graduates emerge with a scientific bent, seeking order, precise applications and predictable results from models and approaches refined in the scientific tradition. We quickly (or perhaps not so quickly) learn from experience (craft) what really works and what doesn’t, and also that unexpected creative ideas and insights often improve our solutions (art). Clearly, effective design of learning experiences requires all three.

The diagram below, again adapted from Mintzberg, shows how these three approaches to learning design might interact and the potential consequences of relying on any one dominant style. We have all seen examples at the extreme end of each style. Bringing only an artistic design style to a project may result in a truly novel, creative or visually stunning result that wows and inspires but does not teach. Relying only on proven learning science often results in dry, uninspired or demotivating instruction that may produce learning, but can be mind-numbing. Craft uninformed by art or science, often practiced by untrained instructional designers working from common sense, rarely ventures beyond personal experience, with hit-and-miss results at best.


Combining only two of the approaches can also be less than optimal for producing effective learning experiences. Art and craft together, without the systematic analysis of science, can lead to disorganized learning designs. Craft and science without the creative vision of art can lead to dispirited design, careful and connected but lacking flair. Learning design based on art with science is creative and systematic, but without the experience of craft it can produce impersonal and disconnected learning.

Effective learning designs, then, happen most often when that elusive combination of art, science and craft comes together. Where the three approaches coexist through a skillfully assembled learning team, the result is usually effective, motivational learning grounded in the realities of the organization. I suppose a tilt toward one or two would make sense for certain subjects, skills or audiences. For management, Mintzberg says too much balance of the three may also be dysfunctional, since it lacks any distinctive style at all! Perhaps a good lesson for instructional design as well.

I’m at risk of flogging a very dead horse here, but some recent posts from Ellen Wagner (What is it about ADDIE that makes people so cranky?) and Donald Clark (The evolving dynamics of ISD and Extending ISD through Plug and Play) got me thinking about instructional design process and ADDIE in particular (please don’t run away!).

 

Ellen’s post focused on how Learning Designers in a Twitter discussion got “cranky” at the first mention of the ADDIE process (Analysis, Design, Development, Implementation and Evaluation). On the Twitter #Lrnchat session, participants had a gag response to the mere mention of ADDIE (sound familiar?). Don responded with some great comments on how ISD (ADDIE) has evolved and adapted.

Much of my career has involved applying ADDIE in some form or other, and I’ve landed on a conflicted LOVE/HATE relationship with it that you, lucky reader, will now be subjected to.


HATE (Phase A, Step 3.2.6)

Throughout the ’90s many Instructional Designers and e-Learning Developers (me included) grew disgruntled with ADDIE (and its parent process, Instructional Systems Design, or ISD) as training struggled to keep up with business demands for speed and quality and as we observed process innovations in the software and product development fields (rapid application development, iterative prototyping, etc.).

In 2001 that frustration was given voice in the seminal article “The Attack on ISD” by Jack Gordon and Ron Zemke in Training Magazine (see here for a follow-up).

The article cited four main concerns:

  • ISD is too slow and clumsy to meet today’s training challenges
  • There’s no “there” there. (It aspires to be a science but fails on many fronts)
  • Used as directed, it produces bad solutions
  • It clings to the wrong world view

I have memories of early projects, driven by mindless adherence to ISD, where I learned the hard way the truth in each of these assertions. As an example of what not to do, and a guard against blowing my brains out in future projects, for years I have kept an old Gagné-style “instructional objective” from an early military project that would make your eyes burn.

Early ISD/ADDIE aspired to be an engineering model.  Follow it precisely and you would produce repeatable outcomes.  The engineering model assumes a “one best way” and the one best way of the time was grounded in the science of behavioral psychology and general systems theory.  The “one best way” thinking appealed to the bureaucratic style of the times but it couldn’t be more of an anathema to the current crop of learning designers, especially those focused on more social and constructivist approaches to learning.  And they are right.

Another criticism of ADDIE I have parallels Ellen’s comments.  Adherents and crankites alike view ADDIE as an “instructional design” methodology when in fact it should be viewed more as a project management process for learning projects.  Viewing  Instructional Design as synonymous with ADDIE does both a disservice.  There is loads of ID going on inside ADDIE but it is primarily in the Design phase of the process, and it can be much more creative than the original model prescribes.

In the end, the Achilles heel of formal ISD/ADDIE rests in its prescriptive posture and its foundation in behavioural psychology. Behavioural psychology and performance technology–its extension in the workplace–have added greatly to our understanding of how to improve human learning at work, but we have learned much since then, and technology has provided tools to both designers and learners that profoundly change the need for a process like ADDIE.

Of course the ADDIE process was (and is) not unique to the learning design profession. For many years the five broad phases of ADDIE were the foundation for the design of most systems. Software engineering, product development, and interactive/multimedia development are all based on some variation of the model. Most, however, have evolved from the linear “waterfall” approach of early models (you can’t start the next phase until the previous one has been done and approved) to iterative design cycles based on rapid prototyping, customer participation in the process and loads of feedback loops built into the process. And learning/e-learning is no different. It has evolved and continues to evolve to meet the needs of the marketplace. Much of the current gag reaction to ADDIE, like that experienced by Ellen, is based on the old waterfall-linear approach and the assumed instructivist nature of the model. And again, the gag is entirely valid.

However, if you can break free from the history, preconceptions and robotic application of ADDIE, you may find room for something approaching…

 

LOVE (Phase B, Step 2.3.7)

I can’t say I ever use ADDIE in its purest form any longer. For e-learning and performance applications, I prefer processes with iterative design and development cycles that are usually a variation of a rapid application development process, like this one from DSDM.


Or for an example specific to e-learning,  this process from Cyber Media Creations nicely visualizes the iterative approach:

Or for the Michael Allen fans out there, his rapid development approach described in Creating Successful e-Learning is very good. There is a respectful chapter in the book on ADDIE’s limitations and how his system evolved from it.

But at the heart of all these processes are the familiar phases of analysis, design, development, implementation and evaluation,  albeit cycling through them many times along the way.

For me ADDIE has become a useful heuristic, not even a process really, but a framework for thinking, coaching instructional designers, and managing learning and e-learning projects. Many e-learning designers these days are not formally trained in Instructional Design and initially think of it as instructional “writing” more than the holistic and systemic approach at the heart of ADDIE. Likewise, customers and subject matter experts are much easier to work with once they understand the broad project process that ADDIE represents. For these two purposes alone I am thankful for ADDIE as a framework. ADDIE has staying power because of its simplicity. Purists will say it has been watered down too much, but in many ways that’s what keeps it alive.

ADDIE phases are also a useful way to think about the organization design and structure of a learning function. They are the major processes that need to be managed and measured by most learning functions. Just think of the functionality most LMS systems have added since their inception.

In the end, ADDIE (and its more current modifications) is probably most valuable because it makes the work of learning design visible. This is an essential feature of productive knowledge work of all kinds. Almost every learning/training group uses ADDIE as a starting point to design a customized process that can be communicated, executed, measured and repeated with some level of consistency. Equally important in knowledge work is the discipline of continually improving processes and breaking through to better ways of working. This has resulted in the many innovations and improvements to the ADDIE process since its inception.

SUMMATIVE EVALUATION (Phase E, Step 5.2.3)

I’ve come to believe that the power of ADDIE/ISD lies in the mind and artful hands of the user.  In my experience Rapid Application Development processes can become just as rigid and prescriptive under the watch of inflexible and bureaucratic leaders as ADDIE did.

There’s an intellectual fashion and political correctness at work in some of the outright rejection of ADDIE.  It’s just not cool to associate with the stodgy old process.  Add Web 2.0, informal and social learning to the mix and some will argue we shouldn’t be designing anything.

For the organizations I work with, there is no end on the horizon to formal learning (adjustments in volume and quality would be nice!). Formal learning will always require intelligent, authentic learning design, and a process to make it happen as quickly and effectively as possible. If, instead of as the irrelevant old geezer in the corner waving a disapproving finger, we think of ADDIE more like a character from Mad Men, maybe we can refresh the image a bit.

“In a Learning 2.0 world, where learning and performance solutions take on a wider variety of forms and where churn happens at a much more rapid pace, what new skills and knowledge are required for learning professionals?” (ASTD Learning Circuits big question for July)

The Learning Circuits big question this month is an important one, but there seem to be a few questions embedded in it. Does it ask what new skills are needed by learning professionals due to a wider variety of learning solutions? Or due to more rapid churn? Or due to the implied technology knowledge of the “learning 2.0 world”? They are related of course, but they do each point to different skill requirements. As a result, answers to the question so far have been enjoyable but a bit helter-skelter.

Harold Jarche nails the “learning 2.0” aspects of the question with his update of last year’s excellent Skills 2.0 article–especially from a personal learning perspective.  Nancy White highlights general competencies that would be of value to any knowledge worker in today’s workplace.  Mohamed Amine Chatti identifies knowledge networking and double loop learning as critical. I like those.  But again not necessarily specific to the learning professional.  Natalie Laderas-Kilkenny gets closer to skills that are important for the learning professional and says that a learning culture is an important precursor for successful learning 2.0. Right on!

Michael Hanley layers a business view on the question (thank you!) and charts some necessary skills.  I also like Clive Sheppard’s view that we don’t need to tear up the rule book and start again–that our mission remains (organizational performance) but we need to ramp up more quickly on current technology and methods.  Couldn’t agree more.

What Learning Professional?

The question also lumps “learning professionals” into a single group.  Most large training functions have many specialized roles and their skill requirements vary.  So here’s another layer to the big question discussion based on different slices of the “learning professional” roles that are out there.

Generally, most learning professionals will need a combination of these skills to thrive in the learning 2.0 world.

  • User-level knowledge of web 2.0 tools and their applications
  • An open attitude toward the sharing, collaborating, contributing, and personal knowledge management that underlie their effective use.

How these skills take shape in various learning roles will vary by responsibility.    Here are a few:

Instructors

With over 60% of corporate learning still delivered in the classroom (ASTD 2008 State of the Industry report), there are a lot of instructors out there. They need to develop sophisticated skills in facilitating, coaching and mentoring using online and web 2.0 tools. As classroom programs are extended or moved into online communities and action learning programs (see Coaching Ourselves for a good example), coaching and facilitating skills will be essential. Since these communities will focus as much on work as on learning, facilitators will also need a serious understanding of their organizations to maintain credibility in these contexts.

New skills:

  • Online facilitation and coaching using web 2.0 and other collaborative tools
  • Action learning coaching
  • Organizational knowledge and experience

Analysts and Performance Consultants

The long and ponderous needs assessment is dead. Speed is essential. Web 2.0 tools can help the analyst. First, the social media environments that communities now operate in can be a rich source of performance data to mine for skill and knowledge gaps, and to signal when a team needs to bring more focus to capturing learning and knowledge. There are also many useful web 2.0-oriented tools for data gathering and internal “crowd-sourcing” that can be used to collect employee feedback, replace old flipchart voting methods and set priorities. See UserVoice for example.

Also, performance consultants will need to break down the traditional “training vs. non-training” solution duo into more nuanced solutions that integrate learning and work. There are powerful levers on the non-training side of that equation that need to be part of the future solution set rather than a casual hand-off to another department.

Evaluation takes a different shape in the web 2.0 world as well. It’s easy to determine performance impact for hard-skill programs, but the softer learning and knowledge sharing associated with communities and natural learning methods is a bit of a measurement bugaboo. New ways of measuring learning need to be developed and incorporated into the toolkit. I think Brinkerhoff’s Success Case Method has great promise here.

New skills:

  • Web 2.0 tools for data collection and analysis
  • Broader understanding of non-formal and informal learning solutions when recommending “non-training solutions”
  • Building learning road maps and curriculum design efforts to include social learning activities
  • Success case method for measuring informal learning programs

Relationship Managers

Most large training functions have generalists who maintain relationships with internal client groups to assess high-level needs and assemble teams to meet those needs. I see opportunities for this role to use web 2.0 tools both to maintain their internal client relationships and to share knowledge with the solution end of their training organization (the matrix model I suggest is here). Possibly one happy community? They will often be the ones having the initial discussion regarding learning 2.0 and social networking related solutions.

New skills:

  • awareness of the benefits and appropriate use of new web 2.0 tools
  • recognize genuine opportunities for learning communities and social media
  • educate internal clients on the learning advantages of web 2.0 (and shift mindsets away from traditional learning)

Instructional Designers and e-Learning Developers

I’m not a member of the “instructional design is dead” clan. But ID pros certainly need to evolve and incorporate more of the discovery-oriented and natural learning benefits of Learning 2.0. There is no prescription for learning 2.0 designs as there is for e-learning 1.0, which makes some IDs uncomfortable, but there are certainly principles and best practices that need to be learned by any ID who wants to stay relevant.

Not all e-learning developers are instructional designers (and vice versa). With their stronger technical skills, developers need to up their game in the integration of 2.0 and 1.0 technologies to enable more creative solutions. Rather than defaulting to a rapid development tool, for example, e-learning developers need the skill to develop an effective performance support environment, or to use simple tools to create realistic simulations. Mobile learning is also growing and is an essential developer skill.

New skills:

Learning and Organizational Effectiveness Consultants

More than the technology of web 2.0 itself, it’s the methods of informal and social learning it supports that have the most potential to change organizations. Learning consultants and OD specialists are at the heart of this. They need to work together more under a common umbrella. My idea of a learning consultant is more akin to the OD or Organizational Learning professional who gets inside the organization and facilitates change and learning through workflow re-design, change management efforts and action learning.

New skills:

Learning Unit Directors and Leaders

Learning department leaders need to provide the resources to develop their team in line with the above. The learning unit is a team of knowledge workers that must model the solutions they are recommending to their internal clients. Learning unit leaders need the skills to make this happen. More than providing web 2.0 tools, they need to encourage and participate in their own learning communities. They also need to manage their unit as a system. Since their team (like most knowledge workers) will know more than they do about learning and performance, they need to learn how to manage the “system” and provide vision and direction more than manage the “people”. That means building workflow, measures and structures with their team that produce real results from the unit.

New skills:

  • manage the learning organization as a system
  • leadership (not micromanagement)
  • resources to model, experiment with and innovate new learning approaches within the learning unit.
  • web 2.0 tools

e-Learning Suppliers and Vendors

e-Learning vendors (authoring tools, LMS/LCMS, consulting services) have started to add social media tools as wrappers for their web 1.0 offerings, but there are few native web 2.0 applications and solutions. Whether we like it or not, the vendor world plays a big role in shaping technology-based learning solutions.

Vendors, suppliers and consultants need new skills in recognizing market opportunities and providing solutions that push the envelope. Where is the content designed for use within learning communities, for example, or a community platform for leadership team development? People are throwing them together using standard open source or proprietary social media tools, but it would be nice to see some platforms/services that offer unique services by skill type the way we see traditional training program offerings.

New skills:

  • recognizing new learning 2.0 market opportunities
  • solutions and offerings built on web 2.0 values and platforms
  • external cross-industry learning communities
  • flexible content for role or discipline based communities (e.g. management communities, consulting communities, technical communities, etc.)

Summary

As Clive Sheppard said, our mission is still improving performance. Web 2.0 focused discussions like this tend to put technology out front, which is a mistake we’ve made before. Web 2.0 and social media (more importantly, the collaboration and sharing they enable) offer us genuine new opportunities for improving performance, but facility with them is just one of many important skills needed by learning professionals today.

In a blog post a while back I mentioned it might be nice to see the training function morph into something more akin to an organizational effectiveness unit in the next ten years.  So I enjoyed a  recent post (The Rise of the Chief Performance Officer) by knowledge management leader Tom Davenport where he suggests merging organizational groups that share performance improvement as their mission but come at it from different vantage points and methodologies.  He cites a recent meeting of his knowledge management research group:

..we advocated for merging knowledge management with some other function – most likely the human resources/organizational learning/talent management constellation. We felt that knowledge management groups don’t often have the critical mass to stand alone, and knowledge and learning are very similar concepts anyway.

He continues:

... if you’re going to be merging things, you might as well go a bit further… if you want to align knowledge and learning with work, you need to know something about business processes and how to improve them. And if you’re going to align processes with the content needed to perform them effectively, you need to know something about the technology that would deliver the content in accordance with job tasks.

In the end, he suggests a merged unit with a Chief Performance Officer at the helm.

Many organizations have separate departments in these (and other) disciplines, all sharing the mission of impacting organizational performance.

  • Learning and Development
  • Organizational Development
  • Process Improvement (Quality)
  • Human Resources/Talent management
  • Knowledge Management

Sometimes there are sub-departments within these (for example a performance technology group within Learning and Development).

I’ve had the chance to work with many of these groups and use their methods to improve performance (sometimes on the same performance issue!). While the approaches of each group are very effective in the right situation, each group tends to see its solutions as “best” or is blindly unaware of the methods developed in sister disciplines. This competitive, siloed thinking rarely results in optimal solutions and can confuse line managers with the array of “performance improvement” solutions to their issues.

Even though learning professionals are trained to analyze performance issues to identify “learning” or “non-learning” solutions, the “non-learning” catch-all is usually a less comfortable road than the learning solution. Likewise, for a time, everything in the organizational development arsenal seemed to involve team building, and for the Quality department every process required “re-engineering” without regard to the people working in those processes. In recent years, most performance improvement groups have learned that their solutions are much richer and more effective when enhanced by the perspective of others.

Bringing these organizations all under one roof could accelerate this cross-fertilization of ideas, result in innovative new approaches and reduce the redundancy and confusion that exists for line managers. A reasonably neutral label for this organization might be “Organization Effectiveness”. Performance Improvement, Performance Effectiveness and Performance Development are also candidates I’ve heard tossed around. Of course, the label is less important than how the organization is designed and the services it provides.

Designing the Organization Effectiveness Function

There are potential models for designing such an organization. For example, an article (Redesigning the HR organization) by organizational design specialist Amy Kates describes a matrix organization structure that could support the complexity of a merged performance improvement unit. Her award-winning model targets HR, but I see many useful features for supporting an even broader organization. The model includes:

Customer Relationship Managers

  • The front-end, customer-facing team

Centres of Excellence

  • Back-end expert teams or networks that cross performance improvement disciplines

Solutions Teams

  • Multidisciplinary teams that are configured to mirror the complexity of the work rather than the business hierarchy

Here is Amy Kates’s HR-oriented model from her article:

Here is the model re-illustrated from the broader Organizational Effectiveness view:

Organizational Effectiveness as Internal Management Consulting

Another possible organizational model is the professional services firm or management consulting company. It would be possible to organize the unit as an internal consultancy of sorts, modeled on the multidisciplinary focus of many of the large consulting companies. Toyota, for example, has modeled its internal learning and lean process improvement services on a management consulting approach. This article (Decoding the DNA of the Toyota Production System) describes how Toyota’s Operations Management Consulting Division (OMCD) provides learning and lean process improvement services inside and outside the organization through a management consulting model.

Benefits and Risks

Creating and managing an organizational effectiveness department is not without risks. The centralization required could result in bureaucratization (although the solution teams and dedicated business partners of the matrix model guard against that); bringing together professional groups that have operated independently for many years could result in internal conflict; and of course deciding who’s in and who’s out could be interesting. But the potential benefits make the idea worth exploring. Among the benefits I would include:

  • Performance improvement solution innovations
  • Resource efficiencies
  • Improved service and single point of contact to line management
  • Cross-fertilization of approaches and methods
  • More strategic performance improvement efforts
  • Avoid political dominance of single groups
  • Professionalization of performance improvement services
  • More innovative uses of technology for performance improvement.

Not long ago I was contracted to manage the development of an e-learning course using the Articulate rapid development tool.  During that time I came across Tom Kuhlmann’s Rapid e-learning blog and used it frequently for ideas and tips on getting the most out of Articulate.   If you’re not aware of it, the blog presents weekly tips on the development of e-learning using rapid development tools (usually Articulate).  Tom’s tips are always very creative and useful but they have typically reinforced more traditional e-learning design.  Usually that means a simple module structure something like the following:

  1. Present objective
  2. Present content
  3. Provide practice
  4. Assess objective

After my post on using authentic learning tasks in learning design, a colleague called my attention to a recent Rapid e-Learning Blog post, “Are Your e-Learning Courses Pushed or Pulled?”. The post represents a nice shift in direction away from e-learning convention. In it Tom concisely describes how traditional e-learning courses “push” content to the learner when we really should be creating real-world, authentic (my emphasis) problems and tasks requiring learners to pull learning content from a number of sources.

It’s a simple concept that can have a big impact. But it does require a bit more thought and task analysis on the front end for instructional designers. If e-learning designers began to take even the smallest of steps in this direction, the effectiveness of e-learning courses, and their acceptance by employees, would improve dramatically.

Building e-learning content around tasks and problems improves relevance and motivation, and with the right planning it can decrease the cost of development. The learning resources being “pulled” by learners to solve problems do not all need to be expensive, highly interactive, media-rich e-learning resources. They can be simple documents, podcasts, web pages, simple videos, and short interactive objects that all contribute to helping a learner solve a task. I recently wrote an article for SPBT’s FOCUS magazine (Society for Pharmaceutical and Biotech Trainers) on a similar theme of simplifying e-learning while increasing its impact. I don’t have a downloadable version of the article, but my pharma community readers will find it in the Spring 2008 issue of the journal. The article is titled Back to the Future: A Flexible and Cost Effective Approach to DPK e-Learning.

Tom doesn’t label the “pull” e-learning approach as “problem-based learning using authentic tasks”, but that’s really what it is. In fact, the “task” he uses in his example (installing crown molding) is a great example of an authentic learning task. Here is how Tom Kuhlmann visually represented the difference between traditional (push) e-learning and problem-based (pull) e-learning.

I might put a task/problem at the centre of the “pull” illustration, but the message is the same: use simple problem- and task-driven frameworks to improve e-learning effectiveness, relevance and learner motivation. In doing so you can also allow yourself to finally apply your underused instructional design talents!

The most recent Rapid e-Learning Blog post presents 7 tips for creating better scenario-based e-learning. Sounds like a continuation of the theme. Go Tom.
