The Myth of e-Learning Levels of Interaction

For years the e-learning industry has categorized custom solutions into three or more levels of interactivity, from basic to complex, simple to sophisticated. The implication is that learning effectiveness increases with each higher level of interactivity.

You don’t have to look hard to find them:

Levels of Interactivity in eLearning: Which one do you need?

CBT Levels

How Long Does it Take to Create Learning?

These kinds of categorizations surely originated in the vendor community as it sought ways to productize custom development and make it easier for customers to buy standard types of e-learning. I won’t quibble that “levels of interactivity” have helped simplify the selling and buying process in the past, but the concept is starting to outlive its usefulness, and it does a disservice to intelligent buyers of e-learning. Here’s why:

1. The real purpose is price standardization

Levels of interactivity are usually presented as a way to match the e-learning level to learning goals. You’ve seen it: Level 1 basic/rapid is best for information broadcast, Level 2 for knowledge development, and Level 3 and beyond for behaviour change, or something to that effect. In reality, however, very simple, well-designed, low-end e-learning can change behaviour, and high-end e-learning programs can wow while providing no learning impact whatsoever.

If vendors were honest, they would admit that the real purpose of “levels of interactivity” is to standardize pricing into convenient blocks that make e-learning easier to sell and purchase. “How much is an hour of Level 2 e-learning again? OK, I’ll take three of those, please.” Each level of e-learning comes with a pre-defined scope that vendors can put a ready price tag on.

It’s a perfectly acceptable product positioning strategy, but it’s not going to get you the best solution to your skill or knowledge issue.

2. Interactivity levels artificially cluster e-learning features and in doing so reduce choice

Most definitions of what features are included in each of the “levels” are vague at best. Try this widely used definition from Brandon Hall:

[Brandon Hall’s definitions of e-learning interactivity levels]

It’s hard to imagine looser definitions of each level. In fact, a variety of factors drive scope and effort (and therefore price) in a custom e-learning program, and they go well beyond “interactivity”. They include interface choices, the type and volume of graphics and media, instructional strategies, the existence and quality of current content and, yes, the type and complexity of user interactions.

Each of these factors offers a range of choices, but the levels-of-interactivity approach tends to cluster everything into one bucket and call it a level. It might look something like the following:

[Chart: typical clustering of features into levels of e-learning]

A Level 1 program is essentially a template-based page turner with a few relevant (or maybe not so relevant) stock images and interactivity limited to some standard multiple-choice self-checks. In contrast, a Level 3 program is loaded with a custom-designed interface, user controls, media and graphics, along with the complex interactions assumed to be required for simulations and scenarios. Level 2 is the middle choice most buyers and vendors alike are happy to land on. None of these choices, by the way, has anything to do with accomplishing a desired learning outcome, but that’s another discussion.

If this artificial clustering of features was ever accurate, it no longer is. Advanced simulations and scenarios can be created with very basic media and user interface features, while advanced custom interfaces and controls with rich custom media are often used to support simple information presentation with very little interactivity. Powerful scenario-based learning can be built with simple levels of interactivity. Rapid e-learning tools, once relegated to the Level 1 ghetto, can create quite advanced programs, and custom HTML/Flash can just as easily churn out page turners. Out-of-the-box avatars can be created in minutes.

This clustering of features into three groups gives you less choice than you would receive at your local Starbucks. If I’m a customer with a learning objective best served by well-produced video and animation followed by an on-the-job application exercise, I’m not sure which level I would be choosing. A good e-learning vendor will discuss these options and incorporate them into a price model that matches the customer’s requirements.

3. It reduces effectiveness and creativity

Forcing solutions into one of three or four options stunts creative thinking and pushes the discussion towards media and interactivity rather than towards closing a skill gap, which is where it should be.

4. It hurts informed decision making

It appears to create a common language but actually reinforces the myth that there are only three or four types of e-learning.  The combinations of media, interactivity and instructional approaches are as varied as the skill and knowledge gaps they are meant to address.

5. It encourages a narrow vision of e-learning

e-Learning has morphed into new forms. Pure e-learning is already in decline, cannibalized by mobile and social learning and by the glorious return of performance support. These approaches are much more flexible and nimble at addressing knowledge and skill gaps.

Everyday Experience is Not Enough

A core tenet of informal and social learning is that we learn through experience. It’s the elephant in the 70-20-10 room, and it’s often used as an admonishment of formal learning. Advocates of the most laissez-faire approaches to informal learning suggest that, given the right tools (social, anyone?), employees will do just fine without all the interference from the learning department, thank you very much.

No one in their right mind would argue that experience is not a powerful teacher, or deny that our most valuable learning occurs while working. But it’s a pretty broad generalization, don’t you think? Some experiences must be more valuable than others for achieving learning and performance goals. And if so, what makes those experiences more valuable, and how do we know them when we see them? Or, from the perspective of the learning professional, how can we help create the right experiences to help people develop their skills? These seem to be important questions if we are to get beyond loose approaches to informal learning.

Indeed, research on developing expertise has shown that not all experience is created equal. Years of experience in a domain do not invariably lead to expert levels of performance. Most people, after initial training and a few years of work, reach a stable, acceptable level of performance and maintain it for much of the rest of their careers. Contrast that with those who continue to improve and eventually achieve the highest levels of expertise. It seems that where high performers may have 20 years of experience, average performers have one year of experience repeated 20 times!

The following chart, drawn from the research on developing expertise, illustrates the effect of different types of “experience” on workplace performance.

Ericsson, K.A., “The Influence of Experience and Deliberate Practice on the Development of Expert Performance,” in The Cambridge Handbook of Expertise and Expert Performance (2006)

Average performers learn just enough from their environment (experience) to perform everyday skills with a minimal amount of effort. In contrast, experts continually challenge their current performance and seek feedback from their environment to stay in a more or less permanent learning state, mastering everyday skills but continuously raising their personal bar. This deliberate approach to learning from experience is what separates top performers from the norm. Continuously challenging current skills is hard work, and it takes its toll. Some people decrease or stop their focus on deliberate practice and never achieve the excellence of the expert (arrested development).

Designing experience

So, performance does not improve simply through cumulative everyday experience, whether gained face to face, through social media or otherwise. It requires targeted, effortful practice in an environment rich in accurate and timely feedback. That does not mean formal training. It does mean experience designed and targeted to develop skills and expertise, which is a very different thing from routine, everyday work experience.

Some of the best learning approaches for helping people challenge their current skill levels fall into that fuzzy middle ground between formal and informal learning (see this post for a continuum of learning experiences) and can include the following:

Designing, fostering and supporting work experiences that develop expertise is an emerging role for the learning professional. That role is to ensure that people are working in a setting where they can challenge and develop their knowledge and skills. You can’t make them learn, but you can help surround them with the resources they need to learn. This approach to learning is truly a partnership between the individual, their managers and you as a learning professional. And in doing that work, you are practicing and developing your own expertise.

Practice Makes Perfect Revisited

Last Thursday (November 17), I presented a session on the use of deliberate practice in learning and performance at the CSTD national conference in Toronto. I promised participants that I would post the slide set on this blog. I’m a little slow getting to it, but here it is. The slides are not fully self-explanatory, so if you would like to discuss any aspect of the presentation, or how you might use the principles in your organization, please contact me and I’d be happy to talk. If you are one of the participants who came to the presentation, thank you for contributing to a lively session!

Here’s the session description from the CSTD conference web site:

I think I managed to touch on most of the objectives listed. When I read the popular books mentioned in the description, I was intrigued that they all drew on the same source research, which I have posted on in the past. Over the course of the last couple of years I dove into that source research, and what I learned was the focus of the presentation, along with the connections I made to current approaches to practice in the workplace (primarily informal). Most of the practice approaches described stress authentic tasks and problems, the development of tacit knowledge and practical intelligence, and the critical role of feedback in the learning process. I’ll try to post on some of the key concepts from the research in the future. Cheers!

Evaluating with the Success Case Method

In my last post I mentioned that I prefer the Success Case Method to the Kirkpatrick approach for evaluating learning (and other) interventions. A few readers contacted me asking for information on the method and why I prefer it. Here’s a bit of both.

About the Success Case Method

The method was developed by Robert Brinkerhoff as an alternative (or supplement) to the Kirkpatrick approach and its derivatives. It is very simple and fast (which is part of its appeal) and goes something like this:

Step 1. Identify targeted business goals and impact expectations

Step 2. Survey a large representative sample of all participants in a program to identify high impact and low impact cases

Step 3. Analyze the survey data to identify (see the sketch after the step list):

  • a small group of successful participants
  • a small group of unsuccessful participants

Step 4. Conduct in-depth interviews with the two selected groups to:

  • document the nature and business value of their application of learning
  • identify the performance factors that supported learning application and obstacles that prevented it.

Step 5. Document and disseminate the story

  • report impact
  • applaud successes
  • use data to educate managers and organization
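To make Steps 2 and 3 a little more concrete, here is a minimal sketch in Python of how the extreme groups might be pulled out of the survey data. The field names, the 1-to-5 impact scale and the group size are assumptions for illustration only; they are not part of Brinkerhoff’s published method, and in practice the survey instrument and cut-offs would be tailored to the business goals identified in Step 1.

```python
# Hypothetical illustration of Step 3: selecting extreme cases from survey results.
# Field names, scoring scale and group size are assumptions, not part of the
# published Success Case Method.

def select_extreme_cases(responses, group_size=5):
    """Return the highest- and lowest-impact respondents for follow-up interviews.

    `responses` is a list of dicts such as:
        {"participant": "P1", "impact_score": 4}  # e.g. 1 = no use, 5 = clear business result
    """
    ranked = sorted(responses, key=lambda r: r["impact_score"], reverse=True)
    success_cases = ranked[:group_size]        # candidates for "what worked" interviews
    nonsuccess_cases = ranked[-group_size:]    # candidates for "what got in the way" interviews
    return success_cases, nonsuccess_cases


if __name__ == "__main__":
    sample = [
        {"participant": "P1", "impact_score": 5},
        {"participant": "P2", "impact_score": 2},
        {"participant": "P3", "impact_score": 4},
        {"participant": "P4", "impact_score": 1},
    ]
    high, low = select_extreme_cases(sample, group_size=2)
    print("Interview as success cases:", [p["participant"] for p in high])
    print("Interview as non-success cases:", [p["participant"] for p in low])
```

The point of the sketch is simply that the analysis step is deliberately lightweight: rank the respondents by reported impact and take the extremes forward to the interview stage, rather than trying to analyze the whole sample statistically.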

The process produces two key outputs:

  • In-depth stories of documented business effect that can be disseminated to a variety of audiences
  • Knowledge of the factors that enhance or impede the effect of training on business results. Factors associated with successful application of new skills are compared and contrasted with those that impede it.

It answers practical and common questions we have about training and other initiatives:

  • What is really happening? Who’s using what, and how well? Who’s not using things as planned? What’s getting used, and what isn’t? Which people and how many are having success? Which people and how many are not?
  • What results are being achieved? What value, if any, is being realized? What goals are being met? What goals are not? Is the intervention delivering the promised and hoped for results? What unintended results are happening?
  • What is the value of the results? What sort of dollar or other value can be placed on the results? Does the program appear to be worthwhile? Is it producing results worth more than its costs? What is its return on investment? How much more value could it produce if it were working better?
  • How can it be improved? What’s helping? What’s getting in the way? What could be done to get more people to use it? How can everyone be more like those few who are most successful?

Here’s a good Brinkerhoff article on the method from a 2005 issue of Advances in Developing Human Resources. The Success Case Method: A Strategic Evaluation Approach to Increasing the Value and Effect of Training

There are some important differences between Kirkpatrick-based methods and the Success Case Method. The following table, developed by Brinkerhoff, differentiates the two approaches.

Why I like it

Here are five reasons:

1. Where Kirkpatrick (and Phillips and others) focus on gathering proof of learning effectiveness and performance impact using primarily quantitative and statistical measures, the Success Case Method focuses on gathering compelling evidence of effectiveness and impact through qualitative methods and naturalistic data gathering. Some organizational decisions require hard proof and statistical evidence; in my experience, training is not one of them. At best, training decisions are usually judgment calls made with the best available information at the time. Statistical proof is often overkill and causes managers to look at each other in amusement. All they really need is some good evidence: some examples of where things are going well and where they aren’t. They are happy to trade statistical significance for authentic verification from real employees.

2. We spend a lot of time twisting ourselves in knots trying to isolate the effects of training from the other variables that mix with skills to impact performance. Factors such as the opportunity to use the skills, how the skills are supported, the consequences of using the skills and others all combine to produce performance impact. Only we are hell-bent on separating these factors. Our clients (internal and external) are interested only in the performance improvement. In the end it is irrelevant to them whether it was precisely the training that produced the improvement. They simply would like some confirmation that an intervention improved performance, and when it didn’t, how we can modify it and the other variables to make it work. The Success Case Method accepts that other factors are at work when it comes to impact on performance and concentrates on the impact of the overall intervention.

3. The approach can be used for any type of intervention designed to improve performance, including training, performance support systems, information solutions, communities of practice, improved feedback systems, informal and semi-structured learning initiatives and social learning initiatives.

4. Success Case Method results are documented and presented as “stories”. We have learned the power of stories for sharing knowledge in recent years; why not use the same approach to share our evaluation results instead of the dry and weighty tomes of analysis we often produce?

5. It’s fast, it’s simple, and it has a growing track record.

To learn more:

The Success Case Method: Find Out Quickly What’s Working and What’s Not

Telling Training’s Story: Evaluation Made Simple, Credible, and Effective

High Impact Learning: Strategies For Leveraging Performance And Business Results From Training Investments

CSTD/IFTDO Conference Presentations

This year the Canadian Society for Training and Development (CSTD) and the International Federation of Training and Development Organisations (IFTDO) are combining for a single conference event in Toronto that I’m looking forward to, both as a participant and presenter. Here are some highlights and the dates for my own presentations. I hope some of you can make it!

Tuesday (Oct 19) is dedicated to “Research into Practice”, a topic near and dear to me, and all presentations on Tuesday are based on that theme. Allison Rossett will discuss the importance of research in guiding instructional practice, Harold Stolovitch will speak on performance improvement research, Traci Sitzmann on e-learning research, and Christine Wihak on what the research tells us about informal learning.

I will be presenting a Trading Post session on Tuesday at 2:00 pm titled Getting Informal: Merging Learning and Work through Informal Learning (here is the handout). It’s based on many of the concepts I have presented in this blog, particularly Leveraging the Full Learning Continuum and the 10 Strategies for Integrating Learning and Work series.

A Thought Leaders series begins on Wednesday and will include sessions by Marc Rosenberg (on Learning 2.0), Patti Shank (on common errors in learning design) and Bob Morton (on change management). My new employer (Nexient Learning) is also presenting a case study with Deloitte on Managerial Effectiveness that I’m looking forward to.

I will be presenting on a Learning Technology Thought Leaders panel session on Thursday (20th) titled Enterprise Solutions: Managing the Training Function. I’ll be on the panel with Harold Jarche, Sheryl Herle, Sheri Philips and Gary Woodill (from the Brandon Hall team). We will thrash around the pros and cons of Learning Management Systems. The session is moderated by Saul Carliner from Concordia University. No lack of opinion in that group! Should be interesting.

Thursday also includes a keynote by Peter Senge, whose work I admire and have posted on in the past.

If you happen to be there please stop by one of my sessions and say hello.

Moving on…but the blog stays.

You’ll notice I’ve stripped my web site down to only the blog entries this week. This is because I’ve decided to take a full-time position and close out Gram Consulting as a business. I have enjoyed my work over the last few years with clients and associates, so the decision was not an easy one. I hope to work with many of them again in the future.

The new position, however, offers broader opportunity and challenge, including working for an industry leader in corporate learning and development and with a great team of people. My new role is Vice President, Leadership and Business Solutions with Global Knowledge. I’ll be managing the consulting, design and development of custom e-learning and performance improvement projects, as well as a portfolio of excellent leadership and business skills development programs.

I’m planning to keep the blog alive.  In the next few weeks I plan to move  it to a new home and adjust the format and branding a bit.   It will cover similar ground,  so if you like what you’ve read so far, I hope you follow the new blog after the transition.   I’ll be bringing all the existing posts to the new site.   In the meantime I’ll likely post a few more items to this location.

I’ve truly enjoyed your comments, both through the blog and through personal connections. The blog has seen a steady increase in readership since I started it a little less than a year ago. Thank you for that. I’ve enjoyed the conversation, and I hope we can keep it going.

Cheers,

Tom