The Myth of e-Learning Levels of Interaction

For years the eLearning industry has categorized custom solutions into three or more levels of interactivity, from basic to complex, simple to sophisticated. The implication is that learning effectiveness increases with each higher level of interactivity.

You don’t have to look hard to find them:

Levels of Interactivity in eLearning: Which one do you need?

CBT Levels

How Long Does it Take to Create Learning?

These kinds of categorizations surely originated in the vendor community as it sought ways to productize custom development and make it easier for customers to buy standard types of e-learning. I won’t quibble that “levels of interactivity” has helped simplify the selling/buying process in the past, but it’s starting to outlive its usefulness. And it’s a disservice to intelligent buyers of e-learning. Here’s why:

1. The real purpose is price standardization

Levels of interactivity are usually presented as a way to match e-learning level to learning goals. You’ve seen it: Level 1 basic/rapid is best for information broadcast, Level 2 for knowledge development, and Level 3 and beyond for behaviour change, or something to that effect. In reality, very simple, well designed low-end e-learning can change behaviour, and high-end e-learning programs can wow while providing no learning impact whatsoever.

If vendors were honest, they’d admit the real purpose of “levels of interactivity” is to standardize pricing into convenient blocks that make e-learning easier to sell and purchase. “How much is an hour of Level 2 e-learning again? OK, I’ll take three of those please.” Each level of e-learning comes with a pre-defined scope that vendors can put a ready price tag on.

It’s a perfectly acceptable product positioning strategy, but it’s not going to get you the best solution to your skill or knowledge issue.

 2. Interactivity levels artificially cluster e-learning features and in doing so reduce choice

Most definitions of what features are included in each of the “levels” are vague at best. Try this widely used definition from Brandon Hall:

(Brandon Hall’s levels of interactivity definitions)

It’s hard to imagine looser definitions of each level. In fact, there are a variety of factors that drive scope and effort (and therefore price) in a custom e-learning program, and they go well beyond “interactivity”. They include interface choices, type and volume of graphics and media, instructional strategies, existence and quality of current content, and yes, the type and complexity of user interactions.

Each of these features has a range of choices within them, but the levels of interactivity approach tends to cluster everything into one bucket and call it a level.  It might look something like the following:

Levels of e-Learning

A Level 1 program is essentially a template-based page turner with a few relevant (or maybe not so relevant) stock images and interactivity limited to some standard multiple choice self-checks. In contrast, a Level 3 program is loaded with a custom-designed interface, user controls, media and graphics, along with the complex interactions assumed to be required for simulations and scenarios. Level 2 is the middle choice most buyers and vendors alike are happy to land on. None of these choices, by the way, has anything to do with accomplishing a desired learning outcome, but that’s another discussion.

If this artificial clustering of features was ever true, it’s not any longer. Advanced simulations and scenarios can be created with very basic media and user interface features. Advanced custom interfaces and controls with rich custom media are often used to support simple information presentation with very little interactivity. Powerful scenario-based learning can be created with simple levels of interactivity. Rapid e-learning tools, once relegated to the Level 1 ghetto, can create quite advanced programs, and custom HTML/Flash can just as easily churn out page turners. Out-of-the-box avatars can be created in minutes.

This clustering of features into three groups gives you less choice than you would receive at your local Starbucks. If I’m a customer with a learning objective that is best served by well produced video and animations followed by an on-the-job application exercise, I’m not sure what level I would be choosing. A good e-learning vendor will discuss and incorporate these options into a price model that matches the customer requirement.
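
To make the point concrete, here’s a minimal sketch (in Python, with purely illustrative factor names and options, not any vendor’s actual scoping or pricing model) of treating each of those factors as an independent choice rather than bundling them into three levels:

```python
from itertools import product

# Purely illustrative scoping factors and options (not a real vendor model).
# Each factor varies independently of the others.
factors = {
    "interface":      ["template", "custom"],
    "media":          ["stock images", "custom graphics", "video/animation"],
    "strategy":       ["presentation", "scenario", "simulation"],
    "source content": ["ready to use", "needs development"],
    "interactivity":  ["self-checks", "branching", "complex interactions"],
}

# Every combination is a legitimate program a buyer might actually need.
configurations = list(product(*factors.values()))
print(f"{len(configurations)} possible configurations, not 3 levels")
```

Even this toy set of options yields over a hundred distinct configurations; a program built around well produced video plus an on-the-job exercise is simply one of them, not a “level”.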

3. It reduces effectiveness and creativity

Forcing solutions into one of three or four options stunts creative thinking and pushes the discussion towards media and interactivity rather than towards closing a skill gap, where it should be.

4. It hurts informed decision making

It appears to create a common language but actually reinforces the myth that there are only three or four types of e-learning.  The combinations of media, interactivity and instructional approaches are as varied as the skill and knowledge gaps they are meant to address.

5. It encourages a narrow vision of e-learning

e-Learning has morphed into new forms. Pure e-learning is already in decline, being cannibalized by mobile and social learning and the glorious return of performance support. These approaches are much more flexible and nimble at addressing knowledge and skill gaps.

Flipped Classroom – A Student Perspective

My son is in a graduate program in Medical Physics at the University of Toronto.  I sent him this recent article from the Atlantic on the concept of the flipped classroom (in higher education).

The Post-Lecture Classroom: How Will Students Fare?


He sent a thoughtful email response which was interesting from a student perspective (and it was nice to see more words from him than “send money”!). He just finished reading Quiet by Susan Cain, and I like the connection he makes here.

Good article dad. I’d be interested to actually read the research article  myself, to look at sample sizes and statistical analysis showing their confidence intervals for the data and how statistically significant it is. I think a push towards new and improved teaching techniques is awesome, but you have to be really cautious with it. They mentioned that one year there were student presentations with discussions led by those students, which I have had several courses implement, and it’s a pretty mixed bag. At their worst, those presentations were students presenting the material in just as much of a dry, “Powerpoint poisoning” kind of way as possible. In one class, they were a bit more successful because the prof really helped structure the discussions so we all got a lot out of it.
 
I also have experience with the technique of having us read the book before class and answer clicker questions during lecture, to root out common misconceptions. There were online quizzes before lecture (very simple) on the readings to make sure we did them. I think that this was a VERY effective technique, and made me feel much more engaged in class, and definitely succeeded at giving the professor the information they needed in order to teach the class well, addressing problems that were common in the class.
 
I’m also uncertain about the whole discussion and collaboration in class. That CAN be a good way of engaging the class, but having read “Quiet” by Susan Cain, it definitely seems to fall squarely into this push towards an “extrovert ideal” in education, where it is really designed to most benefit those with more extroverted personalities, and can actually lead to less creativity and innovation, since the consensus in groups will be built by those with the loudest opinion, not necessarily the most informed one. That doesn’t mean this type of teaching isn’t without merit, but I think it should be used with caution, and not be made the centre of the curriculum.
 
Lastly, I think that it’s always important to consider what is actually being taught in these kind of studies, and not apply it to other teaching subjects without due consideration. Graduate Pharmacology seems like a perfectly suited topic for this kind of topic, since everyone is aware of the foundations of the subject, and can focus discussion on “higher-level” stuff like clinical trials. I can tell you that, in my experience, not all attempts to deviate from standard lectures are successful, and there are times, especially at a foundational level, where I personally think that it’s much more beneficial to acquire the knowledge in a more traditional lecture setting, even if it’s less engaging.

Learning Failure in 7 Easy Steps

We all make mistakes. We know better, but we follow old ways or accept cultural practices that don’t work. There are patterns that produce successful projects and those that lead to failure (see the project death march). I did a recent presentation on the classic mistakes we make in the planning, design and implementation of learning (and how to avoid them). I finished the session with a tongue-in-cheek 7-step prescription for a failed learning initiative. Follow the steps carefully for guaranteed failure.

Step 1: Ensure your program is not connected to a business or performance need

A critical first step. If your learning initiative in any way contributes to individual or organizational performance, you’re risking success. Market your program as urgent and essential to convince your audience they need it (while you’re at it, it’s best if you can convince yourself too). You then have the option to bury the program somewhere deep in your corporate training catalog or roll it out aggressively as a mandatory program. Both tactics are equally effective at failing dismally.

Step 2: Rely on your fave management training vendor for program content 

Some say learning programs should be driven by real job or role requirements–that the context in which people work should be the source for learning content.  Don’t believe it.  Instead, close your door, pick up the phone and call your favourite training vendor.  Ask them what training you should be providing this year.  They will have the answer you need and a program with all sorts of great content ready to go.  Another option would be to simply gather as much content as you can from the internet.  Have you seen how much information is out there!

Step 3: Choose a solution that suits you rather than your learners 

There are so many ways to deliver and support learning now. Gone are the days when your only option was a classroom training program. You probably have your favourite. Trust your gut. Go with that favourite. It will be more interesting for you. Just be sure your choice is detached from the preferences of your audience.

Step 4: Load up on information. Make no provision for practice

Information-driven learning is one of the best strategies for failure we know of. Designing practice is hard. It’s even harder to design practice that works: on-the-job activities that develop skill in context. So don’t sweat it. There are great examples out there of PowerPoint-driven classroom learning, “click next” e-learning, and social learning that’s all about sharing information without actually using any of it. Mimic those examples and you’ll get closer to putting the nail in the coffin of your failed project. But you’re not quite there yet.

Step 5: Let technology drive your solution

Technology is hip. And they tell us it’s needed to capture the attention of the digital natives entering your organization. So keep it cool. Your program must incorporate the most current devices and tech tools–tablets, smartphones and web 2.0 apps. Don’t think about how they support the objectives of your initiative.

Step 6: Boldly assume the learning will be used on the job

Your program is that good!  It will be more powerful than the current barriers, lack of support and reinforcement that learners will encounter when they are back at work.  Mastery was your goal and no refinement of those skills on the job will be necessary.  Really.

Step 7: Develop your program in a client vacuum

Above all, do not partner with your internal customers to identify business needs or develop a plan to support them through formal or informal learning.  One of the best predictors of success is working collaboratively with your client through the planning, design and development of your program.  Avoid at all costs.  Failure Guaranteed. You’re welcome.


Epic Learning Fail

If you’ve been in the learning business for a while, you’ve likely seen a few examples where learning initiatives have simply missed the mark. They didn’t produce the anticipated return on investment. Planned performance improvement did not materialize. Learners didn’t develop the skills targeted by the program. Or if they did, those skills certainly aren’t being applied on the job. Maybe all of the above.

Failures like these are more common than we like to think. Sometimes they’re not as visible as failures in other, more tangible areas of the business, like manufacturing defects (Toyota’s sticky brakes) or financial misadventures (take your pick). When substantial resources have been committed to well intentioned, hard-to-measure initiatives like training, sometimes success is declared when real evidence suggests otherwise (somehow I’m seeing George Bush on the deck of an aircraft carrier here). The phenomenon is not limited to formal training. Informal learning and, I suspect, a few recent social learning/social media efforts have met with limited success.

If you’re honest, some of those programs might have even been your own (I know a few of mine have had less than stellar impact). Or you may have had a role in a larger effort. Perhaps you designed it, or identified the original need, or delivered it, or managed the implementation effort. Maybe you created an impeccable design that was poorly implemented, or vice versa. Learning can derail in many ways.

If we follow our own advice, there is a lot of learning to be had in these partial successes and outright failures. I’m doing a presentation soon titled When Learning Fails: Classic Mistakes and How to Avoid Them. I’m interested in your anecdotes and stories. Have you been involved in a learning program that did not go so well? Have you observed any train wrecks or near misses? What caused the issues? What was learned that helped improve your next initiative?

Please post any input in the comments section below, or enter a link to a blog post or tweet to #epiclearningfail. I’ll be reviewing the responses for the most common causes and learnings, and I’ll summarize in later posts. Looking forward to your input.

Update: here is the presentation from the National CSTD conference, Nov 12, 2012

Four Learning Traps to Avoid

The Learning End Game Trap

Perhaps you’ve re-committed to improve learning as the mission of your department (or next big initiative, or…). It’s well meaning but can be self-defeating (or worse, self-fulfilling). The term leaves the impression that learning is the end game, your raison d’être. The real end game is performance, individual and organizational, defined in terms the business uses to measure itself. Sure, you don’t have control over all the factors that influence performance, but that doesn’t mean your solutions can’t be intimately connected to them. Thinking performance first is liberating and opens up whole new perspectives on the types of solutions you can and should be bringing to the table.

Antidote to the end game trap:  Performance Thinking (Cathy Moore and Carl Binder have nice methods for deriving learning from performance needs)

The Planning Trap

I used to believe in the perfect annual plan all wrapped in MBO goodness, aligned and linked to organizational objectives. But over time I’ve come to two conclusions. First, the plans are rarely fully realized. The more interesting innovations and strategies emerged from responses to opportunities throughout the year. Second, senior teams rarely have their act together enough to create strategies and business plans that are meaningful enough to wrap a good training plan around. Highly analytic planning processes can deceive us into thinking we are planning strategically and improving future organizational performance.

To borrow an argument from Henry Mintzberg, strategy is actually a pattern embodied in day to day work more than an annual plan. Strategy is consistency in behaviour, whether or not intended. Formal plans may go unrealized, while patterns may emerge from daily work. In this way strategy can emerge from informal learning. I’ve always liked this image of planning from Mintzberg:

from Henry Mintzberg “The Rise and Fall of Strategic Planning” (1994)

Antidote to the planning trap:  Beware the best laid plans. Go ahead and work with your business units to create a simple training plan linked to whatever business plans they may have in place. But have a rock solid process in place to respond to the requests that will inevitably come and that are not in line with the plan. Be ready to develop solutions and adapt quickly to whatever white water your company or industry happens to be swimming in. Be nimble and flexible in response to business changes. Watch for patterns and successes in that work and incorporate them in your training strategy.

The Measurement and Evaluation trap

Evaluation is a hot button that causes more wringing of hands and professional guilt than it should. Evaluation is meant to inform decisions. Some types of training are simply easier to measure than others. Everything can be measured, but not everything is worth measuring. When you do evaluate, use business metrics already in use and explore methods focused more on collecting evidence of success than on definitive proof. Myopic and overly rigorous measurement drives out judgment and causes us to start measuring trees and forget about the forest. Our attempts at evaluation are often disproportionate to evaluation elsewhere in the organization (we only think everyone else knows their ROI).

Antidote to the measurement trap: Don’t emphasize short-term ROI or cost reduction measures at the expense of true investments in the future that do not have immediate and calculable ROI. When you do evaluate, use existing measures that the business uses to judge success.

The Technology Trap

We seem to be hard wired to line up enthusiastically behind each new wave of technology. Each wave has left behind tools and innovations that changed learning for the better (and also, embarrassingly, for the worse). It offers increasingly wondrous ways to improve access to learning, immerse employees in true learning experiences, share learning in collaborative spaces and extend the tools we use to do our work. And it offers equally wondrous ways to create truly disastrous learning experiences.

Antidote for the technology trap: Understand and embrace technology, especially game changing social media, but protect yourself from panacea thinking and keep your eye on the prize of improved performance. Success lies in the design, not the technology.

Everyday Experience is Not Enough

A core tenet of informal and social learning is that we learn through experience. It’s the elephant in the 70-20-10 room. It’s often used as an admonishment of formal learning. Advocates of the most laissez-faire approaches to informal learning suggest that given the right tools (social, anyone?) employees will do just fine without all the interference from the learning department, thank you very much.

No one in their right mind would argue that experience is not a powerful teacher, or deny that our most valuable learning occurs while working. But it’s a pretty broad generalization, don’t you think? Some experiences must be more valuable than others for achieving learning and performance goals. And if so, what makes those experiences more valuable, and how do we know them when we see them? Or, from the perspective of the learning professional, how can we help create the right experiences to help people develop their skills? These seem to be important questions if we are to get beyond loose approaches to informal learning.

Indeed, research on developing expertise has shown that not all experience is created equal. Years of experience in a domain do not invariably lead to expert levels of performance. Most people, after initial training and a few years of work, reach a stable, acceptable level of performance and maintain that level for much of the rest of their careers. Contrast that with those who continue to improve and eventually achieve the highest levels of expertise. It seems that where high performers may have 20 years’ experience, average performers seem to have 1 year of experience 20 times!

The following chart, from the body of research on developing expertise, illustrates the results of different types of “experience” on workplace performance.

Ericsson, K.A., “The Influence of Experience and Deliberate Practice on the Development of Expert Performance,” in The Cambridge Handbook of Expertise and Expert Performance (2006)

Average performers learn just enough from their environment (experience) to perform everyday skills with a minimal amount of effort. In contrast, experts continually challenge their current performance and seek feedback from their environment, staying in a more or less permanent learning state: mastering everyday skills but continuously raising their personal bar. This deliberate approach to learning from experience is what separates top performers from the norm. Continuously challenging current skills is hard work, and it takes its toll. Some decrease or stop their focus on deliberate practice and never achieve the excellence of the expert (arrested development).

Designing experience

So, performance does not improve simply through cumulative everyday experience, gained face to face, through social media or otherwise. It requires targeted, effortful practice in an environment rich in accurate and timely feedback. That does not mean formal training. It does mean experience designed and targeted to develop skills and expertise. This is a very different thing from routine, everyday work experience.

Some of the best learning approaches that work well in helping people challenge their current skill levels fall into that fuzzy middle ground between formal and informal learning (see this post for a continuum of learning experiences) and can include the following:

Designing, fostering and supporting work experiences that develop expertise is an emerging role for the learning professional. That role is to assure that people are working in a setting where they can challenge and develop their knowledge and skills. You can’t make them learn but you can help surround them with the resources they need to learn. This approach to learning is truly a partnership between the individual, their managers and you as a learning professional. In doing that work you are practicing and developing your own expertise.

Moving Practice to Centre Stage

As digital content becomes more prevalent (free and otherwise), there’s much talk about the new role of the learning professional as content curator or content strategy developer.

I agree this will be an important role, but worry it yet again puts the focus on structuring and controlling all that information (another round of knowledge management anyone?) while minimizing the critical role of practice and application of the “content”. We all know the importance of practice and feedback in the progression of knowledge to performance (we do all know that, don’t we?). If we truly believe it, then we need to put the design of practice and feedback at the centre of our work, and content (information) in a supporting role. This simple change in vantage point has the potential to radically change the way we approach learning and performance.

Organizing the learning function around practice (vs. courses and content)

What if the learning function was structured around the design and management of practice centres (virtual and physical), rather than the design and delivery of formal training events? It could once and for all move us away from formal, event-based learning to process-oriented learning. The activities in each practice centre would vary by the type of skill being developed. Practice centres supporting management and knowledge work, for example (simulations, problem solving, cognitive apprenticeships), would look much different than those supporting procedural and task-oriented work (performance demonstrations, skill development). I explored this approach applied to management development in a previous post.

What would be different?

Designing practice centres would require us to establish standards (ideally in collaboration with the people doing the work), derive authentic problems and tasks that help people achieve those standards, scaffold practice exercises in a progression towards expertise in the job or role, and source and manage the “content” that will help employees make their way through the practice exercises. The framework puts practice in the centre and moves content to a supporting (but critical) role. You might think of the approach as Cathy Moore’s Action Mapping applied at the organizational level (rather than at the course level).
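
As a thought experiment, here’s a minimal sketch of that framing (all names, such as PracticeCentre and PracticeTask, and the example skill area are hypothetical and purely illustrative): practice tasks sit at the centre, ordered from easy to difficult with fading support, and content plays the supporting role.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PracticeTask:
    description: str                # an authentic problem drawn from real work
    difficulty: int                 # position in the easy-to-difficult progression
    support: str                    # scaffolding level: "high", "medium", "low"
    content: List[str] = field(default_factory=list)  # curated supporting resources

@dataclass
class PracticeCentre:
    skill_area: str
    standards: List[str]            # agreed with the people doing the work
    tasks: List[PracticeTask] = field(default_factory=list)

    def progression(self) -> List[PracticeTask]:
        """Tasks ordered from easy to difficult, with scaffolding fading as they go."""
        return sorted(self.tasks, key=lambda t: t.difficulty)

# Usage: a centre organized around practising a skill, not delivering a course.
centre = PracticeCentre(
    skill_area="coaching conversations",
    standards=["conducts a feedback conversation to the agreed standard"],
    tasks=[
        PracticeTask("role-play from a worked example", 1, "high", ["model conversation video"]),
        PracticeTask("live conversation with peer feedback", 2, "medium"),
        PracticeTask("on-the-job conversation with manager debrief", 3, "low"),
    ],
)
for task in centre.progression():
    print(task.difficulty, task.support, task.description)
```

The design choice the sketch is meant to surface: the organizing unit is the standard plus its practice progression, and “content” is just an attribute hanging off each task.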

Separating content from practice

Traditional instructional design tightly connects information presentation (electronic or otherwise) with practice in structured learning events.  However, separating content from practice is positive and liberating (no matter what your ID traditionalists tell you) as long as practice does not get lost. Learning functions centred around the design of progressive practice would ensure that.

In the right context, Web 2.0 and social learning can beautifully separate content and application. Other times they result in little more than information dissemination. Knowledge is an inert thing without application, and consuming information is no substitute for true learning. Much of that awesome user generated content out there focuses on informing and much less on doing (thus the calls for content curation). When social learning encourages sharing, thinking, collaborating, and real-world application, as it does in an excellent community of practice, it fits well into the definition of practice I’m suggesting.

The role of Deliberate Practice in the development of expertise

In preparation for an upcoming presentation on designing practice to improve performance, I’ve been reading much of the excellent source research on the role of deliberate practice in developing expert performance (popularized recently in well known business books). It’s sparked some ideas on how we might manage the shift I’m suggesting above.

If the research on deliberate practice has taught us anything, it’s that developing expertise is a long-term proposition (about 10,000 hours, depending on who you believe). One-off practice exercises built into formal training events only introduce employees to the “feel” of a skill and in no way produce the expertise needed in the modern workplace. If work performance is important and effective practice is a proven way of getting there, we should take it seriously enough to get it right.

I’ll explore the application of deliberate practice to various types of learning in my next few posts. In the meantime here are 10 ideas from a previous post that just scratch the surface on how Learning Professionals can use “deliberate practice” to improve workplace skill and performance.

  1. Move from “mastery learning” to designing practice with feedback over longer periods of time (from learning events to a learning process). Deliberate practice differs from the concept of “mastery learning” at the heart of much instructional design. Mastery learning assumes a skill is perfected (or at least brought to a defined standard) in a fairly short period of time, often within the scope of a single course. The complex professional skills of modern knowledge workers and managers demand a stronger focus on long-term practice and feedback and on building learning around long-term objectives.
  2. Develop the person. Time, practice and individualized feedback imply a long term focus on individuals rather than on jobs or roles.
  3. Informal learning efforts like action learning, coaching and cognitive apprenticeships are critical, but they must be focused on practice and immediate feedback and extend over long periods of time.
  4. Relevant, frequent and varied practice must be the dominant and most important element in all formal training programs.
  5. Practice opportunities must extend far beyond initial training programs, to allow people to hone their skills through experimentation with immediate feedback.
  6. Create practice sandboxes and simulation centres for key organizational skills, where people can practice their skills and experience immediate feedback in a safe environment.
  7. Design visual feedback directly into jobs so professionals can immediately see the results of their work. In this way working IS deliberate practice.
  8. Turn training events into the first step of a learning journey that will continue to provide opportunities to practice and refine skills throughout a career.
  9. Identify the interests and strengths of people and nurture them through opportunities for deliberate practice. Provide resources and support that encourage early effort and achievement.
  10. Ensure social media environments provide opportunities for coaching and mindful reflection on performance.

Instructional Design: Science, Art and Craft

I’ve been reading some Henry Mintzberg recently. His books Managing and Managers Not MBAs both question prevailing thinking on management and leadership and present alternatives for effective management practice and development. Both books include a model of management as a balancing act between science, art and craft. His argument is that effective management requires all three, and an overemphasis on any one results in dysfunction.

I think it also offers some insight into effective Instructional Design. Much of the recent debate regarding Instructional Design models and practice (see my own view here) seems to revolve around the prescriptive, process-based models of ADDIE (and like models) versus more open constructivist approaches, presumably more relevant for our networked and collaborative work environments. The arguments tend to get unnecessarily polarized. The following table is adapted from a similar one Mintzberg created for management styles. I believe it works equally well for Instructional Design practice.

Most graduate programs in Instructional Design and Educational Technology are squarely in the science column (psychology, human learning, and systems design). New graduates emerge with a scientific bent, seeking order, precise applications and predictable results from models and approaches refined in the scientific tradition. We quickly (or perhaps not so quickly) learn from experience (craft) what really works and what doesn’t, and also that often unexpected creative ideas and insights improve our solutions (art). Clearly, effective design of learning experiences requires all three.

The diagram below, again adapted from Mintzberg, shows how these three approaches to learning design might interact and the potential consequences of relying on any one dominant style. We have all seen examples at the extreme end of each style. Bringing only an artistic design style to a project may result in a truly novel, creative or visually stunning result that wows and inspires but does not teach. Relying only on proven learning science often results in dry, uninspired or demotivating instruction that may produce learning, but can be mind-numbing. Craft, uninformed by art or science and often practiced by untrained instructional designers working from common sense, rarely ventures beyond personal experience, with hit and miss results at best.


Combinations of the approaches can also be less than optimal for producing effective learning experiences. Art and craft together, without the systematic analysis of science, can lead to disorganized learning designs. Craft and science without the creative vision of art can lead to dispirited design, careful and connected but lacking flair. Learning design based on art with science is creative and systematic, but without the experience of craft it can produce impersonal and disconnected learning.

Effective learning designs, then, happen most often when that elusive combination of art, science and craft comes together. Where the three approaches coexist through a skillfully assembled learning team, the result is usually effective, motivational learning grounded in the realities of the organization. I suppose a tilt toward one or two would make sense for certain subjects, skills or audiences. For management, Mintzberg says too much balance of the three may also be dysfunctional, since it lacks any distinctive style at all! Perhaps that’s a good lesson for instructional design as well.

ADDIE is dead! Long live ADDIE!

I’m at risk of flogging a very dead horse here, but some recent posts from Ellen Wagner (What is it about ADDIE that makes people so cranky?) and Donald Clark (The evolving dynamics of ISD and Extending ISD through Plug and Play) got me thinking about instructional design process and ADDIE in particular (please don’t run away!).

 

Ellen’s post focused on how Learning Designers in a Twitter discussion got “cranky” at the first mention of the ADDIE process (Analysis, Design, Development, Implementation and Evaluation). On the Twitter #Lrnchat session, participants had a gag response to the mere mention of ADDIE (sound familiar?). Don responded with some great comments on how ISD (ADDIE) has evolved and adapted.

Much of my career has involved applying ADDIE in some form or other, and I’ve landed on a conflicted LOVE/HATE relationship with it that you, lucky reader, will now be subjected to.

(The ADDIE model)

HATE (Phase A, Step 3.2.6)

Throughout the 90s many Instructional Designers and e-Learning Developers (me included) grew disgruntled with ADDIE (and its parent process, Instructional Systems Design, or ISD) as training struggled to keep up with business demands for speed and quality and as we observed process innovations in the software and product development fields (Rapid Application Development, iterative prototyping, etc.).

In 2001 that frustration was given voice in the seminal article “The Attack on ISD” by Jack Gordon and Ron Zemke in Training Magazine (see here for a follow-up).

The article cited four main concerns:

  • ISD is too slow and clumsy to meet today’s training challenges
  • There’s no “there” there. (It aspires to be a science but fails on many fronts)
  • Used as directed, it produces bad solutions
  • It clings to the wrong world view

I have memories of early projects, driven by mindless adherence to ISD, where I learned the hard way the truth in each of these assertions. As an example of what not to do, and a guard against blowing my brains out in future projects, for years I have kept an old Gagné-style “instructional objective” from an early military project that would make your eyes burn.

Early ISD/ADDIE aspired to be an engineering model. Follow it precisely and you would produce repeatable outcomes. The engineering model assumes a “one best way,” and the one best way of the time was grounded in the science of behavioural psychology and general systems theory. The “one best way” thinking appealed to the bureaucratic style of the times, but it couldn’t be more anathema to the current crop of learning designers, especially those focused on more social and constructivist approaches to learning. And they are right.

Another criticism of ADDIE I have parallels Ellen’s comments.  Adherents and crankites alike view ADDIE as an “instructional design” methodology when in fact it should be viewed more as a project management process for learning projects.  Viewing  Instructional Design as synonymous with ADDIE does both a disservice.  There is loads of ID going on inside ADDIE but it is primarily in the Design phase of the process, and it can be much more creative than the original model prescribes.

In the end, the Achilles heel of formal ISD/ADDIE rests in its prescriptive posture and its foundation in behavioural psychology. Behavioural psychology and performance technology (its extension in the workplace) have added greatly to our understanding of how to improve human learning at work, but we have learned much since then, and technology has provided tools to both designers and learners that profoundly change the need for a process like ADDIE.

Of course the ADDIE process was (and is) not unique to the learning design profession. For many years the five broad phases of ADDIE were the foundation for the design of most systems. Software engineering, product development, and interactive/multimedia development are all based on some variation of the model. Most, however, have evolved from the linear “waterfall” approach of early models (you can’t start the next phase until the previous one has been done and approved) to iterative design cycles based on rapid prototyping, customer participation in the process, and loads of feedback loops built into the process. And learning/e-learning is no different. It has evolved and continues to evolve to meet the needs of the marketplace. Much of the current gag reaction to ADDIE, like that experienced by Ellen, is based on the old waterfall-linear approach and the assumed instructivist nature of the model. And again, the gag is entirely valid.

However, if you can break free from the history, preconceptions and robotic application of ADDIE, you may find room for something approaching…

 

LOVE (Phase B, Step 2.3.7)

I can’t say I ever use ADDIE in its purest form any longer. For e-learning and performance applications, I prefer processes with iterative design and development cycles, usually a variation of a rapid application development process like this one from DSDM.

(DSDM lifecycle diagram)

Or for an example specific to e-learning,  this process from Cyber Media Creations nicely visualizes the iterative approach:

Or for the Michael Allen fans out there, his Rapid Development approach described in Creating Successful e-Learning is very good. There is a respectful chapter in the book on ADDIE’s limitations and how his system evolved from it.

But at the heart of all these processes are the familiar phases of analysis, design, development, implementation and evaluation,  albeit cycling through them many times along the way.

For me ADDIE has become a useful heuristic, not even a process really, but a framework for thinking, coaching instructional designers, and managing learning and e-learning projects. Many e-learning designers these days are not formally trained in Instructional Design and initially think of it as instructional “writing” more than the holistic and systemic approach at the heart of ADDIE. Likewise, customers and subject matter experts are much easier to work with once they understand the broad project process that ADDIE represents. For these two purposes alone I am thankful for ADDIE as a framework. ADDIE has staying power because of its simplicity. Purists will say it has been watered down too much, but in many ways that’s what keeps it alive.

ADDIE phases are also a useful way to think about the organization design and structure of a learning function. They are the major processes that need to be managed and measured by most learning functions. Just think of the functionality most LMS systems have added since their inception.

In the end, ADDIE (and its more current modifications) is probably most valuable because it makes the work of learning design visible. This is an essential feature of productive knowledge work of all kinds. Almost every learning/training group uses ADDIE as a starting point to design a customized process that can be communicated, executed, measured and repeated with some level of consistency. Equally important in knowledge work is the discipline of continually improving processes and breaking through to better ways of working. This has resulted in the many innovations and improvements to the ADDIE process since its inception.

SUMMATIVE EVALUATION (Phase E, Step 5.2.3)

I’ve come to believe that the power of ADDIE/ISD lies in the mind and artful hands of the user.  In my experience Rapid Application Development processes can become just as rigid and prescriptive under the watch of inflexible and bureaucratic leaders as ADDIE did.

There’s an intellectual fashion and political correctness at work in some of the outright rejection of ADDIE.  It’s just not cool to associate with the stodgy old process.  Add Web 2.0, informal and social learning to the mix and some will argue we shouldn’t be designing anything.

For the organizations I work with, there is no end on the horizon to formal learning (adjustments in volume and quality would be nice!). Formal learning will always require intelligent, authentic learning design, and a process to make it happen as quickly and effectively as possible. If, instead of the irrelevant old geezer in the corner waving a disapproving finger, we think of ADDIE more like a character from Mad Men, maybe we can refresh the image a bit.

Designing Authentic Learning Tasks

The traditional approach to instructional design has been bruised and battered for some years now. Sometimes the criticism is legitimate and thoughtful, and other times it is shallow and faddish. I think one of the genuine concerns is its deconstruction of learning into small learning tasks, which are categorized into learning domains using a learning taxonomy, often based on the broad categories of cognitive, affective and psychomotor skills. Instructional strategies are selected based on their match to learning domain.

“Learning methods that are embedded in authentic situations are not merely useful; they are essential.” (Brown, Collins & Duguid, 1989)

While this approach can be effective for learning discrete tasks, it struggles when trying to teach more complex skills, which almost always contain elements from all domains. Modern approaches are based more on designing learning in its full social, cognitive and skill-based context. This implies a more holistic approach rather than the deconstruction approaches of the past (although the vast majority of instructional design continues to use traditional approaches).

This new wave of learning design models comes in many variations, and each has slightly differing methods, philosophies and approaches. Here are a few:

How to design authentic learning tasks

Authentic learning tasks are whole-task experiences based on real-life (work) tasks that integrate skills, knowledge, attitude and social context. Instruction is organized around the whole task, usually in an easy-to-difficult progression, which “scaffolds” learning support from “lots to little” as learners progress.

Identifying an authentic learning task can be challenging. The term is often used without any real guidance on how to identify whole tasks and then transfer them to a training context. I stumbled on the following framework from Authentic Task Design, a research project of the University of Wollongong in Australia. They suggest 10 research-based elements for the design of authentic tasks in web-based learning environments. I thought it was a useful guide (a simple checklist sketch follows the ten elements below). Hope you do too.

1. Authentic tasks have real-world relevance

Activities match as nearly as possible the real-world tasks of professionals in practice rather than de-contextualised or classroom-based tasks.

2. Authentic tasks are ill-defined, requiring students to define the tasks and sub-tasks needed to complete the activity

Problems inherent in the tasks are ill-defined and open to multiple interpretations rather than easily solved by the application of existing algorithms. Learners must identify their own unique tasks and sub-tasks in order to complete the major task.

3. Authentic tasks comprise complex tasks to be investigated by students over a sustained period of time
Tasks are completed in days, weeks and months rather than minutes or hours, requiring significant investment of time and intellectual resources.

4. Authentic tasks provide the opportunity for students to examine the task from different perspectives, using a variety of resources
The task affords learners the opportunity to examine the problem from a variety of theoretical and practical perspectives, rather than a single perspective that learners must imitate to be successful. The use of a variety of resources rather than a limited number of preselected references requires students to distinguish relevant from irrelevant information.

5. Authentic tasks provide the opportunity to collaborate
Collaboration is integral to the task, both within the course and the real world, rather than achievable by an individual learner.

6. Authentic tasks provide the opportunity to reflect
Tasks need to enable learners to make choices and reflect on their learning both individually and socially.

7. Authentic tasks can be integrated and applied across different subject areas and lead beyond domain-specific outcomes
Tasks encourage interdisciplinary perspectives and enable diverse roles and expertise rather than a single well-defined field or domain.

8. Authentic tasks are seamlessly integrated with assessment
Assessment of tasks is seamlessly integrated with the major task in a manner that reflects real world assessment, rather than separate artificial assessment removed from the nature of the task.

9. Authentic tasks create polished products valuable in their own right rather than as preparation for something else
Tasks culminate in the creation of a whole product rather than an exercise or sub-step in preparation for something else.

10. Authentic tasks allow competing solutions and diversity of outcome
Tasks allow a range and diversity of outcomes open to multiple solutions of an original nature, rather than a single correct response obtained by the application of rules and procedures.
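
As promised above, here’s a minimal sketch (the helper function and its name are my own, not part of the Wollongong framework) that condenses the ten elements into a quick design-review checklist:

```python
from typing import Set

# The ten elements above, condensed into short labels.
AUTHENTIC_TASK_ELEMENTS = [
    "Real-world relevance",
    "Ill-defined; learners define the tasks and sub-tasks",
    "Complex; investigated over a sustained period of time",
    "Examined from different perspectives, using a variety of resources",
    "Opportunity to collaborate",
    "Opportunity to reflect",
    "Integrated across subject areas, beyond domain-specific outcomes",
    "Seamlessly integrated with assessment",
    "Culminates in a polished product valuable in its own right",
    "Allows competing solutions and diversity of outcome",
]

def review_task_design(elements_met: Set[int]) -> None:
    """Print which of the ten elements a draft task design satisfies."""
    for number, element in enumerate(AUTHENTIC_TASK_ELEMENTS, start=1):
        status = "yes" if number in elements_met else "needs work"
        print(f"{number:>2}. {element}: {status}")

# Usage: a draft task that so far addresses elements 1, 3, 5 and 8.
review_task_design({1, 3, 5, 8})
```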