Performance X Design content moving to new home

Notice:

I will be closing Performance X Design in the coming months. I have started a new website and blog to support my new business, Gram Consulting. All the posts from Performance X Design have been moved to the new website: gramconsulting.ca.


If you’d like to subscribe to the new blog, here’s a direct link to the blog landing page:

Gram Consulting Blog


The Myth of e-Learning Levels of Interaction

For years the e-learning industry has categorized custom solutions into three or more levels of interactivity, from basic to complex, simple to sophisticated. The implication is that learning effectiveness increases with each higher level of interactivity.

You don’t have to look hard to find them:

Levels of Interactivity in eLearning: Which one do you need?

CBT Levels

How Long Does it Take to Create Learning?

These kinds of categorizations surely originated in the vendor community as it sought ways to productize custom development and make it easier for customers to buy standard types of e-learning. I won’t quibble that “levels of interactivity” have helped simplify the selling and buying process in the past, but they are starting to outlive their usefulness, and they are a disservice to intelligent buyers of e-learning. Here’s why:

1. The real purpose is price standardization

Levels of interactivity are usually presented as a way to match the e-learning level to learning goals. You’ve seen it: Level 1 (basic/rapid) is best for information broadcast, Level 2 for knowledge development, and Level 3 and beyond for behaviour change, or something to that effect. In reality, however, very simple, well-designed low-end e-learning can change behaviour, and high-end e-learning programs can wow while providing no learning impact whatsoever.

If vendors were honest, they would admit that the real purpose of “levels of interactivity” is to standardize pricing into convenient blocks that make e-learning easier to sell and purchase. “How much is an hour of Level 2 e-learning again? OK, I’ll take three of those please.” Each level of e-learning comes with a pre-defined scope that vendors can put a ready price tag on.

It’s a perfectly acceptable product positioning strategy, but it’s not going to get you the best solution to your skill or knowledge issue.

2. Interactivity levels artificially cluster e-learning features and in doing so reduce choice

Most definitions of what features are included in each of the “levels” are vague at best. Take this widely used definition from Brandon Hall:

Brandon Hall’s levels of interactivity definitions

It’s hard to imagine looser definitions of each level. In fact, a variety of factors drive scope and effort (and therefore price) in a custom e-learning program, and they go well beyond “interactivity”. They include interface choices, the type and volume of graphics and media, instructional strategies, the existence and quality of current content, and, yes, the type and complexity of user interactions.

Each of these features has a range of choices within it, but the levels of interactivity approach tends to cluster everything into one bucket and call it a level. It might look something like the following:

Levels of e-Learning

A Level 1 program is essentially a template-based page turner with a few relevant (or maybe not so relevant) stock images and interactivity limited to some standard multiple-choice self-checks. In contrast, a Level 3 program is loaded with a custom-designed interface, user controls, media and graphics, along with the complex interactions assumed to be required for simulations and scenarios. Level 2 is the middle choice most buyers and vendors alike are happy to land on. None of these choices, by the way, has anything to do with accomplishing a desired learning outcome, but that’s another discussion.

If this artificial clustering of features was ever accurate, it isn’t any longer. Advanced simulations and scenarios can be created with very basic media and user interface features. Advanced custom interfaces and controls with rich custom media are often used to support simple information presentation with very little interactivity. Powerful scenario-based learning can be created with simple levels of interactivity. Rapid e-learning tools, once relegated to the Level 1 ghetto, can create quite advanced programs, and custom HTML/Flash can just as easily churn out page turners. Out-of-the-box avatars can be created in minutes.

This clustering of features into three groups gives you less choice than you would receive at your local Starbucks. If I’m a customer with a learning objective that is best served by well-produced video and animations followed by an on-the-job application exercise, I’m not sure what level I would be choosing. A good e-learning vendor will discuss these options and incorporate them into a price model that matches the customer’s requirements.

3. It reduces effectiveness and creativity

Forcing solutions into one of three or four options stunts creative thinking and pushes the discussion towards media and interactivity when it should be focused on closing a skill gap.

4. It hurts informed decision making

It appears to create a common language but actually reinforces the myth that there are only three or four types of e-learning.  The combinations of media, interactivity and instructional approaches are as varied as the skill and knowledge gaps they are meant to address.

5. It encourages a narrow vision of e-learning

e-Learning has morphed into new forms. Pure e-learning is already in decline, being cannibalized by mobile and social learning and the glorious return of performance support. These approaches are much more flexible and nimble at addressing knowledge and skill gaps.

Flipped Classroom – A Student Perspective

My son is in a graduate program in Medical Physics at the University of Toronto. I sent him this recent article from The Atlantic on the concept of the flipped classroom (in higher education).

The Post-Lecture Classroom: How Will Students Fare?


He sent a thoughtful email response, which was interesting from a student perspective (and it was nice to see more words from him than “send money”!). He had just finished reading Quiet by Susan Cain, and I like the connection he makes here.

Good article dad. I’d be interested to actually read the research article  myself, to look at sample sizes and statistical analysis showing their confidence intervals for the data and how statistically significant it is. I think a push towards new and improved teaching techniques is awesome, but you have to be really cautious with it. They mentioned that one year there were student presentations with discussions led by those students, which I have had several courses implement, and it’s a pretty mixed bag. At their worst, those presentations were students presenting the material in just as much of a dry, “Powerpoint poisoning” kind of way as possible. In one class, they were a bit more successful because the prof really helped structure the discussions so we all got a lot out of it.
 
I also have experience with the technique of having us read the book before class and answer clicker questions during lecture, to root out common misconceptions. There were online quizzes before lecture (very simple) on the readings to make sure we did them. I think that this was a VERY effective technique, and made me feel much more engaged in class, and definitely succeeded at giving the professor the information they needed in order to teach the class well, addressing problems that were common in the class.
 
I’m also uncertain about the whole discussion and collaboration in class. That CAN be a good way of engaging the class, but having read “Quiet” by Susan Cain, it definitely seems to fall squarely into this push towards an “extrovert ideal” in education, where it is really designed to most benefit those with more extroverted personalities, and can actually lead to less creativity and innovation, since the consensus in groups will be built by those with the loudest opinion, not necessarily the most informed one. That doesn’t mean this type of teaching isn’t without merit, but I think it should be used with caution, and not be made the centre of the curriculum.
 
Lastly, I think that it’s always important to consider what is actually being taught in these kinds of studies, and not apply it to other teaching subjects without due consideration. Graduate Pharmacology seems like a perfectly suited topic for this kind of approach, since everyone is aware of the foundations of the subject, and can focus discussion on “higher-level” stuff like clinical trials. I can tell you that, in my experience, not all attempts to deviate from standard lectures are successful, and there are times, especially at a foundational level, where I personally think that it’s much more beneficial to acquire the knowledge in a more traditional lecture setting, even if it’s less engaging.

New Paths to Learning at Work

I did a three-part interview series recently with HRM Online introducing emerging approaches to learning. You can probably guess what they are:

  • Informal Learning
  • Social Learning
  • Mobile Learning
  • Virtual Learning
  • Gaming and Simulation
  • Performance Support

Part one introduces the new paths to learning at work. I’ll post each part in the series here as it is released. I’ll be doing a webinar on the same topic on May 15. Information on how to register is listed with the video, or you can register here.

Here’s the preamble for the video:

“As technology and workplaces change, the way we learn needs to adapt to meet the needs of your employees. Classroom learning and traditional online courses aren’t sufficient on their own anymore, but what replaces them? From social media and mobile technology to gaming and virtual tools, there is a lot to learn and there are lots of tools available.”


Click here to play Embrace new paths to learning

Update May 1/2013:

Here is part 2 of the interview. The focus is on social and mobile learning, with some discussion of gaming and simulation.

Click here to play Social, mobile and gaming: the new training tools

Update May 14:

Here is the third and final instalment of the interview.  The focus is on virtual learning, performance support and implications for the learning and development professional.

Click here to play What HR needs to know: keep up with new tech for learning

Terms of Engagement

The language of employee engagement is growing in HR and training circles. Engagement is being used as both an explicit goal and a measure of successful interventions. But what is the relationship between engagement and performance? Can we assume that more engaged employees perform better? Taking it further, can we assume that engagement causes improved performance?

Edward Lawler of the University of Southern California’s Marshall School of Business recently wrote an insightful piece for Forbes (Engagement and Performance: Old Wine in a New Bottle) targeting the assumptions behind engagement and its connection to performance. He writes:

 Let me start by making a fundamental point about behavior at work. People’s attitudes are caused by how they perform, and they determine their performance. In short, they are both a cause and a consequence of behavior.

For some in training and HR (and most managers), engagement is becoming the defining factor for improved performance: fix the attitude (the person) and you fix the performance. Lawler’s research, among much else, has repeatedly found the reverse relationship: performance actually determines attitude and engagement. If we truly believe this, it should shape our interventions. In the article he also lists what we know, and have known for a long time, about the relationship between work attitudes and performance.

1. People who perform well tend to be rewarded better and feel better about themselves and their jobs. As a result of the impact of performance on attitudes, there often is a relationship, although a weak one, between satisfaction and performance.

2. Dissatisfaction causes turnover and absenteeism… Quitting, not showing up for work…are viable methods for improving their work-life…It is wrong to assume that by making employees happy, organizations can improve their performance. It may reduce turnover, absenteeism and as a result lower some costs, but it will not cause employees to be more productive.

3. Motivation is caused by the beliefs and attitudes employees have about what the consequences of good performance will be. When employees feel that they will receive rewards that they value as a result of their performance, they are motivated to perform well. This is true whether the rewards are…“intrinsic rewards”…or “extrinsic rewards” such as promotions, pay increases, and praise from others.

His general conclusion is that engagement as a focus is doing more to confuse than to clarify. Organizations need to create supportive work environments that reward individuals for performance. If they do this, they will have motivated and satisfied employees. As Lawler says, it is as “simple” as that. This rings true in my experience. Strategies targeted at diagnosing the work situation and providing tools and structures that improve performance will also improve employee engagement. And we know a lot about how to do that!

What do you think? Does job satisfaction improve performance or does excellent performance produce improved job satisfaction and engagement?

Learning Failure in 7 Easy Steps

We all make mistakes. We know better, but we follow old ways or accept cultural practices that don’t work. There are patterns that produce successful projects and patterns that lead to failure (see the project death march). I did a recent presentation on the classic mistakes we make in the planning, design and implementation of learning (and how to avoid them). I finished the session with a tongue-in-cheek 7-step prescription for a failed learning initiative. Follow the steps carefully for guaranteed failure.

Step 1: Ensure your program is not connected to a business or performance need

A critical first step. If your learning initiative in any way contributes to individual or organizational performance, you’re risking success. Market your program as urgent and essential to convince your audience they need it (while you’re at it, it’s best if you can convince yourself too). You then have the option to bury the program somewhere deep in your corporate training catalog or roll it out aggressively as a mandatory program. Both tactics are equally effective at failing dismally.

Step 2: Rely on your fave management training vendor for program content 

Some say learning programs should be driven by real job or role requirements–that the context in which people work should be the source for learning content. Don’t believe it. Instead, close your door, pick up the phone and call your favourite training vendor. Ask them what training you should be providing this year. They will have the answer you need and a program with all sorts of great content ready to go. Another option would be to simply gather as much content as you can from the internet. Have you seen how much information is out there?

Step 3: Choose a solution that suits you rather than your learners 

There are so many ways to deliver and support learning now. Gone are the days when your only option was a classroom training program. You probably have your favourite. Trust your gut. Go with that favourite. It will be more interesting for you. Just be sure your choice is detached from the preferences of your audience.

Step 4: Load up on information. Make no provision for practice

Information-driven learning is one of the best strategies for failure we know of. Designing practice is hard. It’s even harder to design practice that works: on-the-job activities that develop skill in context. So don’t sweat it. There are great examples out there of PowerPoint-driven classroom learning, “click next” e-learning, and social learning that’s all about sharing information without actually using any of it. Mimic those examples and you’ll get closer to putting the nail in the coffin of your failed project. But you’re not quite there yet.

Step 5: Let technology drive your solution

Technology is hip. And they tell us it’s needed to capture the attention of the digital natives entering your organization. So keep it cool. Your program must incorporate the most current devices and tech tools: tablets, smartphones and Web 2.0 apps. Don’t think about how they support the objectives of your initiative.

Step 6: Boldly assume the learning will be used on the job

Your program is that good!  It will be more powerful than the current barriers, lack of support and reinforcement that learners will encounter when they are back at work.  Mastery was your goal and no refinement of those skills on the job will be necessary.  Really.

Step 7: Develop your program in a client vacuum

Above all, do not partner with your internal customers to identify business needs or develop a plan to support them through formal or informal learning.  One of the best predictors of success is working collaboratively with your client through the planning, design and development of your program.  Avoid at all costs.  Failure Guaranteed. You’re welcome.


Practice and the Development of Expertise (Part 3)


This is a variation of an article I prepared for CSTD and HRVoice, summarizing research on practice and expertise.  Part 1 introduced the signature skills demonstrated by experts that separate them from novices. Part 2 presented the type of practice that develops experts.  This post gets to the implications I see for Learning and Development and makes the connection to existing approaches that embody the principles of deliberate practice.

It would be easy to position deliberate practice in the formal learning camp. Indeed, for some physical and routine skills, elements of deliberate practice can be built into formal training programs until a learner reaches mastery. However, in the modern workplace jobs are more complex and demand greater cognitive (versus physical) skill. The research findings challenge us to consider how we can better support the full novice-to-expert journey, embed learning and practice in the job, design experience to include practice and reflection, build tacit knowledge, and design rich feedback. In a past post I listed some general principles for using deliberate practice in learning.

Fortunately, we have a number of approaches available to us that align well with the conditions of deliberate practice. Most of these approaches are not training in the traditional sense. They do, however, have a structure to them and require significant support. Consider them more non-formal learning than pure informal learning. Here are some well-defined but under-used learning methods that match well to deliberate practice.

  • Action Learning. Small teams create a plan of action to solve a real business problem. Impacts of these actions are observed, analyzed, lessons extracted and new actions prepared. This cycle of plan, act, observe, reflect embodies the key elements for deliberate practice. The approach has a significant and growing following. Used frequently for management development, it would be great to see it expanded to other types of professional work. See this  earlier post on action learning. 
  • Cognitive Apprenticeship. The standard apprenticeship model updated for modern knowledge work. Instead of demonstrating a manual skill, experts model and describe their thinking to “apprentices” who then work on the same problem while they articulate and verbalize their own reasoning. The expert provides coaching and feedback to encourage reflection. Support is “scaffolded”–gradually released as skills build and confidence is gained.
  • Communities of Practice. Groups with common professional or project goals work together sharing and discussing best practices. In doing so they develop rich tacit knowledge and the hidden “how to’s” that are often missed in formal learning programs. New knowledge is created in the process of collaborating with others. Social media environments can provide a home for the conversations and knowledge that is created.
  • Simulation and Games. Great simulations are a surrogate for real experience and incorporate authentic  work tasks. This allows the learner to attempt challenging tasks, experience failure and learn from errors–all critical elements of deliberate practice.  I like games that model real work and allow for fun, repeatable practice, but worry about “gamification” that uses game mechanics to motivate employees to use the same old ineffective training.
  • Feedback in the Workflow. Wonderful natural feedback exists in the form of business results and performance data. We don’t tend to think of it as a learning tool, but in the context of deliberate practice, it is one of the most powerful. It requires connecting the data to individual or team behavior. It is the cornerstone of approaches to team learning found in improvement methods like Lean, Six Sigma and performance technology. Here’s a post with some ideas on implementing a learning feedback system.
  • Stretch Assignments with Coaching. One of the most powerful approaches to “practice” is challenging work assignments that push current capabilities. Already a staple of executive development, we need to see much more of it for other types of professional development.
  • Open Practice Centres. Instead of tired corporate universities and course catalogs populated with learning programs, practice centres could provide progressively challenging practice, simulations and work assignments matched to key job roles. Individualized practice is designed to support the full novice-to-expert journey using the principles of deliberate practice. Learning “content” is considered only in a support role to accomplish practice goals. Here’s an idea for organizing the learning function around practice instead of content and courses, and here is the core idea applied to Management Development.

These approaches, and others like them, occupy that fuzzy middle ground between informal and formal learning. Each can be aided significantly by social media/social learning and learning technologies. Most importantly, however, they are approaches that allow us to apply the research on “deliberate practice” to help improve our organizations and, in doing so, improve our own professional performance.

Practice and the Development of Expertise (Part 2)

This is a variation of an article I prepared for CSTD and HRVoice, summarizing important research on how practice develops expertise and the implications for the learning function. Part 1 introduced the signature skills demonstrated by experts that separate them from novices.

It seems these signatures of expertise are the result of years of effortful, progressive practice on authentic tasks, accompanied by relevant feedback and support, with self-reflection and correction. The research team has labeled this activity “Deliberate Practice”. Others have called it deep practice or intentional practice. It entails considerable, specific, and sustained effort to do something you can’t do well, or at all. Six elements are necessary for practice (or on-the-job experience) to be “deliberate” practice:

  • It must be designed to improve performance. Opportunities for practice must have a goal and evaluation criteria. The goals must be job/role based and authentic. General experience is not sufficient, which is where deliberate practice varies from more laissez-faire approaches to informal learning. Years of everyday experience does not necessarily create an expert. Years of deliberate practice does.  See my post Everyday Experience is not Enough for a deeper discussion on this.
  • It must be based on authentic tasks. The practice must use real work and be performed in context, on the job. The goal is to compile an experience bank, not a vast list of completed formal training programs.
  • The practice must be challenging. The tasks selected for practice must be slightly outside the learner’s comfort zone, but not so far out as to produce anxiety or panic. Deliberate practice is hard work and stretches a person beyond their current abilities. The experience must involve targeted effort, focus and concentration.
  • Immediate feedback on results. Accurate and diagnostic feedback must be continuously available, both from people (coaches) and from the business results produced by the activity. Delayed feedback is also important for actions and decisions with longer-term impact, as is often the case in knowledge-based work.
  • Reflection and adjustment. Self-regulatory and metacognitive skills are essential. This includes self-observation, monitoring, and awareness of knowledge and skill gaps. Feedback requires reflection and analysis to inform behaviour change. Experts make mindful choices of their practice activities.
  • 10,000 hours. For complex work, ten years of deliberate practice (roughly 10,000 hours, or about 20 hours a week) seems to be the necessary investment to achieve expertise. Malcolm Gladwell drew attention to the 10,000-hour rule in his book Outliers, and it is one of the most robust findings in this research. It poses a real challenge for our event-based training culture. Of course, the less complex the work, the less time is required to develop expertise.

If we aspire to evidence-based approaches to learning, it’s hard to ignore this body of research. Among other things, it challenges us to consider how we can better support the novice-to-expert journey, embed learning and practice in the job, design experience to include deliberate practice, build tacit knowledge, and build rich feedback into our organizations.

Fortunately we have a number of approaches available to us that align well to the conditions of deliberate practice. I’ll outline those approaches in Part 3.

Here is a nice illustration of the key process and concepts of deliberate practice, courtesy of Michael Sahota.


Practice and the Development of Expertise (Part 1)

I’ve been presenting recently on the research and application of Deliberate Practice for developing expertise in the workplace. I’ll be doing so again at the upcoming British Columbia Human Resources Management Association (BCHRMA) conference on May 2. A version of the following article on the topic appeared in the CSTD Learning Journal. A modified version will also appear in an upcoming issue of HR Voice. It is posted here in three parts.

******

We all know well-designed practice is critical for effective training. It’s what differentiates meaningful learning from passive information presentation. But as work becomes more complex and knowledge-based, are the practice activities we design for our formal learning programs (both classroom and e-learning) enough to meet the need for expertise in the modern workplace? A comprehensive body of research on how professional expertise is developed suggests they may not be.

This research, led largely by Anders Ericsson at Florida State University and popularized in recent books such as Malcolm Gladwell’s Outliers and Geoff Colvin’s Talent is Overrated, indicates that the type of practice needed to develop true expertise is more intensive and “deliberate” than we thought, and that it must be embedded in the context of real work. It must also occur on a regular basis over much longer periods of time. The research has implications for us as learning and performance professionals. It argues for a profound shift away from event-based formal learning to approaches that could be categorized as informal learning or learning from experience. However, it turns out not all experience is created equal when developing expertise, so simplistic notions of informal learning won’t work either. So how should we rethink the design of practice if it is to truly develop the complex skills of the knowledge workplace? To answer that question, it helps to first understand what expertise looks like.

Characteristics of expert performance

Ericsson’s research has found that top-performing individuals at work, besides being very good at what they do, consistently demonstrate the following differences compared to novices and lower-performing individuals:

  • They perceive more. Experts see patterns, make finer discriminations, interpret situations more quickly and, as a result, make faster, more accurate decisions. Novices slowly review all information and don’t have the contextual experience to recognize patterns.
  • They know more. Not only do experts have more facts and details available to them, they have more tacit knowledge, that all-important unconscious “know how” that only comes with experience. Novices rely on limited explicit knowledge.
  • They have superior mental models. Experience gives experts rich internal representations of how things work and how knowledge is connected. They use these to learn and understand situations more rapidly. Novices rely on simple, sometimes inaccurate, rules of thumb and loosely connected knowledge.
  • They use personal networks more effectively. Experts know who to go to for help and answers. Novices are not able to identify and access critical information and people as quickly.
  • They have superior “meta-cognition”. Experts are better self-monitors than novices. They set goals, self-evaluate against a standard, and make corrections and adjustments more quickly from feedback.


These are skills we want in all employees. At times they seem to come from an innate ability or deep “competency” unachievable by others. However, the research shows that while natural ability may play a small role, practice and experience are far more significant. The nature of this experience is critical. “Practice makes perfect” is only true for practice of a certain type, which I describe in Part 2.

Epic Learning Fail

If you’ve been in the learning business for a while, you’ve likely seen a few examples where learning initiatives simply missed the mark. They didn’t produce the anticipated return on investment. Planned performance improvement did not materialize. Learners didn’t develop the skills targeted by the program, or if they did, the skills certainly aren’t being applied on the job. Maybe all of the above.

Failures like these are more common than we like to think. Sometimes they’re not as visible as failures in other, more tangible areas of the business, like manufacturing defects (Toyota’s sticky brakes) or financial misadventures (take your pick). When substantial resources have been committed to well-intentioned, hard-to-measure initiatives like training, sometimes success is declared when real evidence suggests otherwise (somehow I’m seeing George Bush on the deck of an aircraft carrier here). The phenomenon is not limited to formal training. Informal learning and, I suspect, a few recent social learning/social media efforts have also met with limited success.

If you’re honest, some of those programs might even have been your own (I know a few of mine have had less than stellar impact). Or you may have had a role in a larger effort. Perhaps you designed it, identified the original need, delivered it, or managed the implementation effort. Maybe you created an impeccable design that was poorly implemented, or vice versa. Learning can derail in many ways.

If we follow our own advice, there is a lot of learning to be had in these partial successes and outright failures. I’m doing a presentation soon titled When Learning Fails: Classic Mistakes and How to Avoid Them. I’m interested in your anecdotes and stories. Have you been involved in a learning program that did not go so well? Have you observed any train wrecks or near misses? What caused the issues? What was learned that helped improve your next initiative?

Please post any input in the comments section below, enter a link to a blog post, or tweet to #epiclearningfail. I’ll review the responses for the most common causes and lessons, and I’ll summarize them in later posts. Looking forward to your responses.

Update: here is the presentation from the National CSTD conference, Nov 12, 2012.