Terms of Engagement

The language of employee engagement is growing in HR and training circles. Engagement is being used both as an explicit goal and as a measure of successful interventions. But what is the relationship between engagement and performance? Can we assume that more engaged employees perform better? Taking it further, can we assume that engagement causes improved performance?

Edward Lawler from the University of Southern California, Marshall School of Business recently wrote an insightful piece (Engagement and Performance: Old Wine in a New Bottle) for Forbes targeting the assumptions behind engagement and its connection to performance. He writes:

 Let me start by making a fundamental point about behavior at work. People’s attitudes are caused by how they perform, and they determine their performance. In short, they are both a cause and a consequence of behavior.

For some in training and HR (and most managers) engagement is becoming the defining factor for improved performance. Fix the attitude (the person) and you fix the performance. Lawler’s research (among many others) has repeatedly found the reverse relationship: performance actually determines attitude and engagement. If we truly believe this, it should shape our interventions. In the article he also lists what we know and have known for a long time about the relationship between work attitudes and performance.

1. People who perform well tend to be rewarded better and feel better about themselves and their jobs. As a result of the impact of performance on attitudes, there often is a relationship, although a weak one, between satisfaction and performance.

2. Dissatisfaction causes turnover and absenteeism… Quitting, not showing up for work…are viable methods for improving their work-life…It is wrong to assume that by making employees happy, organizations can improve their performance. It may reduce turnover, absenteeism and as a result lower some costs, but it will not cause employees to be more productive.

3. Motivation is caused by the beliefs and attitudes employees have about what the consequences of good performance will be. When employees feel that they will receive rewards that they value as a result of their performance they are motivated to perform well. This is true whether the rewards are…“intrinsic rewards”…or “extrinsic rewards” such as promotions, pay increases, and praise from others.

His general conclusion is that engagement as a focus is doing more to confuse than to clarify. Organizations need to create supportive work environments that reward individuals for performance. If they do this, they will have motivated and satisfied employees. As Lawler says, it is as “simple” as that. This rings true in my experience. Strategies targeted at diagnosing the work situation and providing tools and structures that improve performance will also improve employee engagement. And we know a lot about how to do that!

What do you think? Does job satisfaction improve performance or does excellent performance produce improved job satisfaction and engagement?

Learning Failure in 7 Easy Steps

We all make mistakes. We know better, but we follow old ways or accept cultural practices that don’t work. There are patterns that produce successful projects and those that lead to failure (see the project death march). I did a recent presentation on the classic mistakes we make in the planning, design and implementation of learning (and how to avoid them). I finished the session with a tongue-in-cheek 7-step prescription for a failed learning initiative. Follow the steps carefully for guaranteed failure.

Step 1: Ensure your program is not connected to a business or performance need

A critical first step. If your learning initiative in any way contributes to individual or organizational performance, you’re risking success. Market your program as urgent and essential to convince your audience they need it (while you’re at it, it’s best if you can convince yourself too). You then have the option to bury the program somewhere deep in your corporate training catalog or roll it out aggressively as a mandatory program. Both tactics are equally effective at failing dismally.

Step 2: Rely on your fave management training vendor for program content 

Some say learning programs should be driven by real job or role requirements–that the context in which people work should be the source for learning content.  Don’t believe it.  Instead, close your door, pick up the phone and call your favourite training vendor.  Ask them what training you should be providing this year.  They will have the answer you need and a program with all sorts of great content ready to go.  Another option would be to simply gather as much content as you can from the internet.  Have you seen how much information is out there!

Step 3: Choose a solution that suits you rather than your learners 

There are so many ways to deliver and support learning now. Gone are the days when your only option was a classroom training program. You probably have your favourite. Trust your gut. Go with that favourite. It will be more interesting for you. Just be sure your choice is detached from the preferences of your audience.

Step 4: Load up on information. Make no provision for practice

Information-driven learning is one of the best strategies for failure we know of. Designing practice is hard. It’s even harder to design practice that works: on-the-job activities that develop skill in context. So don’t sweat it. There are great examples out there of PowerPoint-driven classroom learning, “click next” e-learning, and social learning that’s all about sharing information without actually using any of it. Mimic those examples and you’ll get closer to putting the nail in the coffin of your failed project. But you’re not quite there yet.

Step 5: Let technology drive your solution

Technology is hip. And they tell us it’s needed to capture the attention of the digital natives entering your organization. So keep it cool. Your program must incorporate the most current devices and tech tools: tablets, smartphones and Web 2.0 apps. Don’t think about how they support the objectives of your initiative.

Step 6: Boldly assume the learning will be used on the job

Your program is that good!  It will be more powerful than the current barriers, lack of support and reinforcement that learners will encounter when they are back at work.  Mastery was your goal and no refinement of those skills on the job will be necessary.  Really.

Step 7: Develop your program in a client vacuum

Above all, do not partner with your internal customers to identify business needs or develop a plan to support them through formal or informal learning.  One of the best predictors of success is working collaboratively with your client through the planning, design and development of your program.  Avoid at all costs.  Failure Guaranteed. You’re welcome.


Epic Learning Fail

If you’ve been in the learning business for a while you’ve likely seen a few examples where learning initiatives have simply missed the mark. They didn’t produce the anticipated return on investment. Planned performance improvement did not materialize. Learners didn’t develop the skills targeted by the program. Or if they did, the skills certainly aren’t being applied on the job. Maybe all of the above.

Failures like these are more common than we like to think. Sometimes they’re not as visible as failures in other, more tangible areas of the business like manufacturing defects (Toyota’s sticky brakes) or financial misadventures (take your pick). When substantial resources have been committed to well-intentioned, hard-to-measure initiatives like training, sometimes success is declared when real evidence suggests otherwise (somehow I’m seeing George Bush on the deck of an aircraft carrier here). The phenomenon is not limited to formal training. Informal learning efforts, and I suspect a few recent social learning/social media efforts, have met with similarly limited success.

If you’re honest, some of those programs might have even been your own (I know a few of mine have had less than stellar impact). Or you may have had a role in a larger effort. Perhaps you designed it, or identified the original need, or delivered it, or managed the implementation effort. Maybe you created an impeccable design that was poorly implemented, or vice versa. Learning can derail in many ways.

If we follow our own advice, there is a lot of learning to be had in these partial successes and outright failures. I’m doing a presentation soon titled When Learning Fails: Classic Mistakes and How to Avoid Them. I’m interested in your anecdotes and stories. Have you been involved in a learning program that did not go so well? Have you observed any train wrecks or near misses? What caused the issues? What was learned that helped improve your next initiative?

Please post any input in the comments section below or enter a link to a blog post or Tweet to #epiclearningfail. I’ll be reviewing the responses for the most common causes and lessons learned, and I’ll summarize them in later posts. Looking forward to your responses.

Update: here is the presentation from the national CSTD conference, Nov. 12, 2012.

Four Learning Traps to Avoid

The Learning End Game Trap

Perhaps you’ve re-committed to improving learning as the mission of your department (or next big initiative, or…). It’s well-meaning but can be self-defeating (or worse, self-fulfilling). The term leaves the impression that learning is the end game, your raison d’être. The real end game is performance: individual and organizational, defined in terms the business uses to measure itself. Sure, you don’t have control over all the factors that influence performance, but that doesn’t mean your solutions can’t be intimately connected to them. Thinking performance first is liberating and opens up whole new perspectives on the types of solutions you can and should be bringing to the table.

Antidote to the end game trap:  Performance Thinking (Cathy Moore and Carl Binder have nice methods for deriving learning from performance needs)

The Planning Trap

I used to believe in the perfect annual plan all wrapped in MBO goodness, aligned and linked to organizational objectives. But over time I’ve come to two conclusions. First, the plans are rarely fully realized. The more interesting innovations and strategies emerge from responses to opportunities throughout the year. Second, senior teams rarely have their act together enough to create strategies and business plans that are meaningful enough to wrap a good training plan around. Highly analytic planning processes can deceive us into thinking we are planning strategically and improving future organizational performance.

To borrow an argument from Henry Mintzberg, strategy is actually a pattern embodied in day to day work more than an annual plan. Strategy is consistency in behaviour, whether or not intended. Formal plans may go unrealized, while patterns may emerge from daily work. In this way strategy can emerge from informal learning. I’ve always liked this image of planning from Mintzberg:

from Henry Mintzberg “The Rise and Fall of Strategic Planning” (1994)

Antidote to the planning trap: Beware the best-laid plans. Go ahead and work with your business units to create a simple training plan linked to whatever business plans they may have in place. But have a rock-solid process in place to respond to the requests that will inevitably come that are not in line with the plan. Be ready to develop solutions to adapt quickly to whatever white water your company or industry happens to be swimming in. Be nimble and flexible in response to business changes. Watch for patterns and successes in that work and incorporate them in your training strategy.

The Measurement and Evaluation Trap

Evaluation is a hot button that causes more wringing of hands and professional guilt than it should. Evaluation is meant to inform decisions. Some types of training are simply easier to measure than others. Everything can be measured, but not everything is worth measuring. When you do evaluate, use business metrics already in use and explore methods focused more on collecting evidence of success than on definitive proof. Myopic and overly rigorous measurement drives out judgment and causes us to start measuring trees and forget about the forest. Our attempts at evaluation are often disproportionate to evaluation elsewhere in the organization (we only think everyone else knows their ROI).

Antidote to the measurement trap: Don’t emphasize short-term ROI or cost-reduction measures at the expense of true investments in the future that lack immediate and calculable ROI. When you do evaluate, use existing measures that the business uses to judge success.

The Technology Trap

We seem to be hard wired to line up enthusiastically behind each new wave of technology. Each wave has left behind tools and innovations that changed learning for the better (and also, embarrassingly, for the worse). It offers increasingly wondrous ways to improve access to learning, immerse employees in true learning experiences, share learning in collaborative spaces and extend the tools we use to do our work. And it offers equally wondrous ways to create truly disastrous learning experiences.

Antidote for the technology trap: Understand and embrace technology, especially game changing social media, but protect yourself from panacea thinking and keep your eye on the prize of improved performance.  Success lies in the design not the technology.

Everyday Experience is Not Enough

The core belief of informal and social learning advocates is that we learn best through everyday experience. Advocates of the most laissez-faire approaches to informal learning suggest that, given the right tools, employees will do just fine without all the interference from the learning department, thank you very much.

No one can argue that experience is not a powerful teacher, or that much of our most valuable learning doesn’t occur while working. But it’s a pretty broad generalization, don’t you think? Some experiences must be more valuable than others for achieving learning and performance goals. And if so, what makes those experiences more valuable and how do we know them when we see them? Or, from the perspective of the learning professional, how can we help create the right experiences to help people develop their skills? These seem to be important questions if we are to get beyond loose approaches to informal learning.

Indeed, research on developing expertise has shown that not all experience is created equal. Years of experience in a domain does not invariably lead to expert levels of performance. Most people, after initial training and a few years of work, reach a stable, acceptable level of performance and maintain this level for much of the rest of their careers. Contrast that with those who continue to improve and eventually achieve the highest levels of expertise. It seems that where high performers may have 20 years of experience, average performers seem to have 1 year of experience 20 times!

The following chart, from the body of research on developing expertise, illustrates the results of different types of “experience” on workplace performance.

Ericsson, K.A., “The Influence of Experience and Deliberate Practice on the Development of Expert Performance,” in The Cambridge Handbook of Expertise and Expert Performance (2006)

Average performers learn just enough from their environment (experience) to perform everyday skills with a minimal amount of effort. In contrast, experts continually challenge their current performance and seek feedback from their environment to stay in a more or less permanent learning state, mastering everyday skills but continuously raising their personal bar. This deliberate approach to learning from experience is what separates top performers from the norm. Continuously challenging current skills is hard work and it takes its toll. Some decrease or stop their focus on deliberate practice and never achieve the excellence of the expert (arrested development).

Designing experience

So, performance does not improve simply through cumulative everyday experience, gained face to face, using social media or otherwise. It requires targeted, effortful practice in an environment rich in accurate and timely feedback. That does not mean formal training. It does mean experience designed and targeted to develop skills and expertise. This is a very different thing than routine, everyday work experience.

Some of the best learning approaches for helping people challenge their current skill levels fall into that fuzzy middle ground between formal and informal learning (see this post for a continuum of learning experiences).

Designing, fostering and supporting work experiences that develop expertise is an emerging role for the learning professional. That role is to assure that people are working in a setting where they can challenge and develop their knowledge and skills. You can’t make them learn but you can help surround them with the resources they need to learn. This approach to learning is truly a partnership between the individual, their managers and you as a learning professional. In doing that work you are practicing and developing your own expertise.

I Want it Now!

The Learning Circuits Blog big question this month is:

How do you respond to the “I want it now!” request from a demanding executive?

They provide the scenario of a Type A executive with a website open on rapid instructional design, prompting the “I want it now” request. (Hard to imagine, I know, and if true it presents an excellent “teachable moment” with that executive!)

While “I want it now!” is a common demand on training functions, it’s certainly not unique to us.  Ask the IT, Marketing, or Administration function and you’ll hear the same groans of recognition.  The strategies for dealing with the situation are the same and they have more to do with relationship building and consulting approaches than anything related to how quickly you can throw together a training program to meet the request.

Here are a few strategies that might help.

1. Prevention

The best strategy is a preventative one.   Training functions should have annual plans in place with their internal clients identifying skill and capability development priorities based on the business and functional needs.  The plan is ideally part of the planning cycle of the organization so business needs are “in the air” and being cascaded through the organization on a number of fronts.  For each group you support, the plan could include:

  • strategic and operating goals for the year
  • pivotal roles involved in achieving them
  • skill, knowledge and capabilities required for each role
  • training and learning approaches and programs to be developed/acquired
  • agreement on responsibilities of client and learning function
  • review plan

The joint planning process itself helps build mutual understanding of the requirements for meaningful learning solutions, of the role of informal learning, and of those situations when knee-jerk training is not a solution at all. You may be lucky enough never to get the “I want it now” request at all. When requests do come (and you know they will) that deviate from the jointly developed plan, you can legitimately ask what other priorities need to be dropped, and what resources need to be added, in order to fulfill the new request.

2. Rapid performance analysis

Along with your heart rate, the “I want it now” demand should raise your performance antenna.  This is your opportunity to apply the performance analysis process you know well in theory if not in practice.  There may be other root causes at work and you have a responsibility to suss them out.

But that’s not what your Type A executive asked you to do is it?  So this is not the time for a lengthy root cause analysis, but it does justify a rapid performance analysis.   This is why relationship building and consulting skills are as important as process skills for the learning professional.  You will need to muster your knowledge of the organization, your business acumen and the factors that impact performance to quickly ask the right questions to get to the bottom of the “issue”.  Responding to the “I want it now” demand with an analysis is tricky.  Do it well however and you may earn the respect that will avoid future “I want it now” demands.

There are a number of rapid analysis tools available that are based on “performance thinking” approaches.  For years I’ve used my own adaptation of Thomas Gilbert’s PROBE questioning heuristic.  There are many others.

3. Provide the learning “artifact” but fix the real problem

Your rapid performance analysis may indeed point to other root causes.  If your executive is blind to this despite your best efforts, you may need to provide the learning artifact but sneak in the real solution while doing so.

The “I want it now” executive is usually not as specific as you may be on what a learning program actually is. This frees you to design a “program” that is really a performance solution in the guise of a training program. And if your Type A exec wants to call it a training program, why not? If your program includes an improved feedback system, better information resources and a process fix, all communicated through a small training session that helps employees use these simple performance support tools, you have fixed the problem and provided a “training program”. And again, often all of this can be done faster than slogging through the development cycle of a full-blown training program loaded up with rigorously defined learning objectives and practice activities.

4. Do it!

Your executive may be dead on. Don’t discount this possibility. Things change pretty quickly in business these days. Roles and skills can take unexpected turns to meet emerging business requirements. At least you have an executive who considers the importance of skill and knowledge needs and cares enough to make the call to you, abrupt as it may be. If that is the case, then perhaps a rapid solution is exactly what is called for. A lean program built with basic job aids, performance support tools, and creative information design can often be done even faster than “rapid e-learning”. Rapid tools are made for this kind of scenario and they can be very useful. But creative thinking with a laser focus on actual performance requirements might be even faster.

5. Don’t do it!

If your professional judgment tells you that this project would be folly and waste important resources, or if you believe it has less than a 50% chance of success, you have the responsibility to say no. Be prepared to back up your response and provide alternatives if appropriate. This is a risk not many take with the Type A executive demand. The upside is intact professional integrity and the knowledge that you have saved the company some wasted effort. The downside is…well, that’s what keeps life interesting isn’t it 🙂

Each of these strategies requires strong professional judgment, authentic relationships and sound consulting approaches with your “client”.  These skills include:

  • Contracting
  • Understanding and dealing with resistance
  • Building relationships
  • Providing feedback from analysis
  • Authentic interactions
  • Managing feedback meetings
  • Internal negotiation

In my experience, the best source of guidance for these skills in our profession remains Peter Block’s Flawless Consulting.

So you better get on that.  I want it now!

Extending Action Mapping for Performance Design

Through her Action Mapping process Cathy Moore has demystified, simplified and put a friendly face on an analysis process that produces lean and effective learning programs with an emphasis on practice and application. The four-step analysis process of identifying business goals (1), desired actions/behaviours (2) and practice activities (3) before identifying content (4) is much advocated but rarely practiced in instructional design. She also uses a helpful visual mapping method to work through this four-step process.

Extending the process to performance design

I used the process (and visual mapping approach) to facilitate a learning requirements session a while back. Worked like a charm. I thought then that the process might be taken a little further and be used to identify gaps in the immediate performance environment known to impede optimal performance and then specify solutions for improvement. Here’s what I’m getting at…

Performance Consulting thought leaders (and hard-won experience) tell us that newly developed skills alone, without a supporting environment, rarely produce the performance impact we need. If you accept this view, you understand that skills and knowledge are only one factor among many that are needed for performance and that, in fact, it’s often the performance environment and not the skills that needs adjustment. Geary Rummler organized these critical performance factors within a systems framework and labeled it the Human Performance System (HPS). Thomas Gilbert categorized the factors in his seminal Performance Engineering Matrix, which Carl Binder has distilled into his Six Boxes Model. The Robinsons summarized the factors in their Performance Consulting process. Mihaly Csikszentmihalyi has found similar factors in his work on optimal performance and flow states. These authors have developed diagnostic tools based on the performance factors that can be used by teams, managers and performance consultants to identify barriers in the work environment and to design tools, processes, and systems that improve performance.

Borrowing from the above models, the critical performance factors might be summarized as follows.

  • Clear Expectations and goals (E)
    Do employees understand the behavior and results expected of them and their team?
  • Supportive Tools, resources and business processes (T)
    Are employees supported by helpful performance aids, process guides and knowledge tools?
  • Timely and meaningful Feedback on results of action (F)
    Is immediate feedback provided to employees and their team (system generated or human) on the quality and accuracy of their actions and output?
  • No Interfering or competing demands (I)
    Is the role free of demands on time and task that interfere with accomplishment of individual and team goals?
  • Consequences aligned to expectations and goals (C)
    Do good things happen when employees accomplish goals and meet expectations or do they happen more for undesired performance?

So how might we extend Cathy’s Action Mapping method to design an optimal performance environment in addition to a learning solution? The first two steps remain the same: (1) identify the business goal; (2) identify what people need to do to reach the goal. At this point, however, the process shifts to the key performance support questions defined above. For each behaviour (or behaviour cluster) the following performance design actions can be taken:

  1. Create a vehicle to continuously communicate the key goals, expectations and standards of performance
  2. Design performance aids, automated tools, social learning environments, Communities of practice, and business process adjustments. The appropriate tools and supports will, of course, depend on the type of work.
  3. Create a mechanism for providing continuous information (feedback) to individuals or teams on how they are performing against the desired actions. (I have posted some ideas on this here and here).
  4. Define specific actions for reducing interfering tasks and multitasking, and increasing opportunities for focus on task without competing demands.
  5. Revise the balance of consequences in favor of the desired performance.

Using the labels I listed above, the extended Action Map might look something like this (common support actions could support more than one behaviour):
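To make the idea concrete, here is a minimal sketch in Python of how such an extended map could be captured as simple data. The goal, behaviour, practice activity and support actions below are invented examples of my own, not part of Cathy Moore’s method or the models cited above; only the E/T/F/I/C labels come from the factors summarized earlier.

```python
# Hypothetical sketch of an "extended action map" as plain data.
# E = Expectations, T = Tools, F = Feedback, I = Interference, C = Consequences.

from dataclasses import dataclass, field


@dataclass
class SupportAction:
    factor: str        # one of "E", "T", "F", "I", "C"
    description: str


@dataclass
class Behaviour:
    action: str
    practice: list = field(default_factory=list)   # practice activities (step 3)
    supports: list = field(default_factory=list)   # performance design actions


@dataclass
class ExtendedActionMap:
    business_goal: str
    behaviours: list = field(default_factory=list)


# Invented example: an order-entry accuracy goal.
example = ExtendedActionMap(
    business_goal="Reduce order-entry errors by 20% this quarter",
    behaviours=[
        Behaviour(
            action="Verify customer details before submitting an order",
            practice=["Scenario drills using the most common error cases"],
            supports=[
                SupportAction("E", "Post the error-rate target and verification standard"),
                SupportAction("T", "One-page verification checklist embedded in the order screen"),
                SupportAction("F", "Weekly error report fed back to each team"),
                SupportAction("I", "Suspend the competing 'calls per hour' target during rollout"),
                SupportAction("C", "Recognize teams that hit the accuracy target"),
            ],
        ),
    ],
)

if __name__ == "__main__":
    # Print the map: each behaviour with its labelled support actions.
    for b in example.behaviours:
        print(b.action)
        for s in b.supports:
            print(f"  [{s.factor}] {s.description}")
```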

Adding Outputs and accomplishments

The approach could be further enhanced by identifying desired work outputs before behaviours/actions (a revised step 2). This would be especially useful when starting with the analysis of a job rather than a specific business objective. This is important for knowledge work where there may be multiple behavioural paths to the same work output. Carl Binder has labeled this approach the performance chain. The same performance thinking is at the root of both Action Mapping and the Performance Chain approach. You can learn more about performance thinking and the performance chain approach at the Six Boxes web site here.

Implementation

Performance Consulting gets legitimate criticism at times for being too prescriptive and for relying on external “experts” to implement processes like those above. But there is no reason empowered self-managing teams or process improvement groups cannot use the same tools to diagnose and design or influence their own performance environment. A good performance consultant can facilitate teams through this process. I learned a while ago from Geary Rummler that good performance consultants can provide both the training artifact requested by the organization and an improved performance environment. The extended Action Mapping method may be a great way to sneak some performance improvement into your training projects.

Evaluating with the Success Case Method

In my last post I mentioned that I prefer the Success Case Method for evaluating learning (and other) interventions to the Kirkpatrick approach. A few readers contacted me asking for information on the method and why I prefer it. Here’s a bit of both.

About the Success Case Method

The method was developed by Robert Brinkerhoff as an alternative (or supplement) to the Kirkpatrick approach and its derivatives. It is very simple and fast (which is part of its appeal) and goes something like this:

Step 1. Identify targeted business goals and impact expectations

Step 2. Survey a large representative sample of all participants in a program to identify high impact and low impact cases

Step 3. Analyze the survey data to identify:

  • a small group of successful participants
  • a small group of unsuccessful participants

Step 4. Conduct in-depth interviews with the two selected groups to:

  • document the nature and business value of their application of learning
  • identify the performance factors that supported learning application and obstacles that prevented it.

Step 5. Document and disseminate the story

  • report impact
  • applaud successes
  • use data to educate managers and organization
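At its core, the screening in Steps 2 and 3 is a sort-and-select exercise: score a short survey for reported application and results, then pull a small group from each extreme for in-depth interviews. Here is a rough sketch in Python of that screening step; the survey items, scoring and group size are my own hypothetical illustration, not Brinkerhoff’s instrument.

```python
# Hypothetical sketch of Success Case Method Steps 2-3: screen survey
# responses to find candidate "success" and "non-success" cases to interview.

def screen_success_cases(responses, group_size=5):
    """responses: list of dicts such as
    {"name": ..., "applied": 0-4, "results": 0-4}   # made-up survey items
    Returns (high_impact, low_impact) candidate lists for interviews."""
    scored = sorted(responses, key=lambda r: r["applied"] + r["results"])
    low_impact = scored[:group_size]        # likely non-users / no results
    high_impact = scored[-group_size:]      # likely success cases
    return high_impact, low_impact


# Example use with made-up data:
survey = [
    {"name": "A", "applied": 4, "results": 3},
    {"name": "B", "applied": 1, "results": 0},
    {"name": "C", "applied": 3, "results": 4},
    {"name": "D", "applied": 0, "results": 1},
]
high, low = screen_success_cases(survey, group_size=2)
print("Interview for success stories:", [r["name"] for r in high])
print("Interview for barriers:", [r["name"] for r in low])
```

The real work of the method happens in the interviews that follow; the point of the sketch is only that the screening itself is deliberately quick and light.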

The process produces two key outputs:

  • In-depth stories of documented business effect that can be disseminated to a variety of audiences
  • Knowledge of the factors that enhance or impede the effect of training on business results. Factors associated with successful application of new skills are compared and contrasted with those that impede application.

It answers practical and common questions we have about training and other initiatives:

  • What is really happening? Who’s using what, and how well? Who’s not using things as planned? What’s getting used, and what isn’t? Which people and how many are having success? Which people and how many are not?
  • What results are being achieved? What value, if any, is being realized? What goals are being met? What goals are not? Is the intervention delivering the promised and hoped for results? What unintended results are happening?
  • What is the value of the results? What sort of dollar or other value can be placed on the results? Does the program appear to be worthwhile? Is it producing results worth more than its costs? What is its return on investment? How much more value could it produce if it were working better?
  • How can it be improved? What’s helping? What’s getting in the way? What could be done to get more people to use it? How can everyone be more like those few who are most successful?

Here’s a good Brinkerhoff article from a 2005 issue of Advances in Developing Human Resources on the method. The Success Case Method: A Strategic Evaluation Approach to Increasing the Value and Effect of Training

There are some important differences between Kirkpatrick-based methods and the Success Case Method. The following table, developed by Brinkerhoff, differentiates the two approaches.

Why I like it

Here are five reasons:

1. Where Kirkpatrick (and Phillips and others) focus on gathering proof of learning effectiveness and performance impact using primarily quantitative and statistical measures, the Success Case Method focuses on gathering compelling evidence of effectiveness and impact through qualitative methods and naturalistic data gathering. Some organizational decisions require hard proof and statistical evidence. In my experience training is not one of them. At best, training decisions are usually judgment calls using the best available information at the time. Statistical proof is often overkill and causes managers to look at each other in amusement. All they really need is some good evidence, some examples of where things are going well and where they aren’t. They are happy to trade statistical significance for authentic verification from real employees.

2. We spend a lot of time twisting ourselves in knots trying to isolate the effects of training from the other variables that mix with skills to impact performance. Factors such as opportunity to use the skills, how the skills are supported, the consequences of using the skills and others all combine to produce performance impact. Only we are hell-bent on separating these factors. Our clients (internal and external) are interested only in the performance improvement. In the end it is irrelevant to them whether it was precisely the training that produced the improvement. They simply would like some confirmation that an intervention improved performance, and when it didn’t, how we can modify it and other variables to make it work. The Success Case Method accepts that other factors are at work when it comes to impact on performance and concentrates on the impact of the overall intervention.

3. The approach can be used for any type of intervention designed to improve performance, including training, performance support systems, information solutions, communities of practice, improved feedback systems, informal and semi-structured learning initiatives and social learning initiatives.

4. Success Case Method results are documented and presented as “stories”. We have learned the power of stories for sharing knowledge in recent years. Why not use the same approach to share our evaluation results instead of the dry and weighty tomes of analysis we often produce?

5. It’s fast and it’s simple and has a growing track record.

To learn more:

The Success Case Method: Find Out Quickly What’s Working and What’s Not

Telling Training’s Story: Evaluation Made Simple, Credible, and Effective

High Impact Learning: Strategies For Leveraging Performance And Business Results From Training Investments

Instructional Design: Science, Art and Craft

I’ve been reading some Henry Mintzberg recently. His books, Managing and Managers Not MBAs, both question prevailing thinking on management and leadership and present alternatives for effective management practice and development. Both books include a model of management as a balancing act between science, art and craft. His argument is that effective management requires all three and that an overemphasis on any one results in dysfunction.

I think it also offers some insight into effective Instructional Design. Much of the recent debate regarding Instructional Design models and practice (see my own view here) seems to revolve around the prescriptive, process-based models like ADDIE versus more open constructivist approaches, presumably more relevant for our networked and collaborative work environments. The arguments tend to get unnecessarily polarized. The following table is adapted from a similar one Mintzberg created to define management styles. I believe it works equally well for Instructional Design practice.

Most graduate programs in Instructional Design and Educational Technology are squarely in the science column (psychology, human learning, and systems design). New graduates emerge with a scientific bent, seeking order, precise applications and predictable results from their models and approaches refined in the scientific tradition. We quickly (or perhaps not so quickly) learn from experience (craft) what really works and what doesn’t, and also that often unexpected creative ideas and insights improve our solutions (art). Clearly, effective design of learning experiences requires all three.

The diagram below, again adapted from Mintzberg, shows how these three approaches to learning design might interact and the potential consequences of relying on any one dominant style. We have all seen examples at the extreme end of each style. Bringing only an artistic design style to a project may result in a truly novel, creative or visually stunning result that wows and inspires but does not teach. Relying only on proven learning science often results in dry, uninspired or demotivating instruction that may produce learning, but can be mind-numbing. Craft, uninformed by art or science, and often practiced by untrained instructional designers working from common sense, rarely ventures beyond personal experience, with hit-and-miss results at best.


Combinations of two approaches can also be less than optimal for producing effective learning experiences. Art and craft together, without the systematic analysis of science, can lead to disorganized learning designs. Craft and science without the creative vision of art can lead to dispirited design, careful and connected but lacking flair. Learning design based on art with science is creative and systematic, but without the experience of craft it can produce impersonal and disconnected learning.

Effective learning designs, then, happen most often when that elusive combination of art, science and craft comes together. Where the three approaches coexist, perhaps through a skillfully assembled learning team, the result is usually effective, motivational learning grounded in the realities of the organization. I suppose a tilt toward one or two would make sense for certain subjects, skills or audiences. For management, Mintzberg says too much balance of the three may also be dysfunctional since it lacks any distinctive style at all! Perhaps a good lesson for instructional design as well.

Deliberate Practice, Learning and Expertise

I’m back from some vacation where I read Malcolm Gladwell’s Outliers on the beach at our cottage (along with some very funny David Sedaris).

Even if you haven’t read Outliers yet you probably know that it sets out to dispel myths that intelligence or innate ability are the primary predictors of success.  Instead,  Gladwell summarizes research and provides examples to show that it is hours and hours of practice (10,000 to be exact) and a “practical intelligence” (similar in concept to emotional intelligence) acquired through experience that are the real determinants of success.

Gladwell covers similar territory (and draws on the same research) as Geoff Colvin’s Talent is Overrated: What Really Separates World-Class Performers from Everybody Else, another excellent book that elaborates on an article Colvin wrote for Fortune magazine a few years ago: “What it Takes To Be Great”.

Both books debunk the assumption that “gifted” skill and great performance come from innate talent, personal traits or hard-wired competencies and ability. The research Gladwell and Colvin draw on is impressive. Both point to the extensive work of K. Anders Ericsson at Florida State University. Ericsson has conducted years of rock-solid research on the role of “deliberate practice” in the acquisition of expert performance. If you like to seek out source research as I do, then you’ll enjoy Ericsson’s (and others’) impressive work that has been collected in the Cambridge Handbook of Expertise and Expert Performance. Here is an earlier (and less hefty) review of some of the same research: “Deliberate practice” in the acquisition of expert performance.

At the core of these works is the concept of “deliberate practice” over long periods of time (up to ten years). While it’s impossible to boil the theory down into a few points, here it is…uh…boiled down into a few points. Highly skilled performance in all aspects of life and work can be developed by the rough equivalent of 10,000 hours (10 years or so) of increasingly specific, targeted and mindful practice in a domain of expertise. The practice must be:

  • Specific and technique-oriented
  • Self-regulated
  • Highly repetitive
  • Paired with immediate feedback on results
  • Not necessarily “fun” (in fact, it can be grueling hard work)

“Deliberate practice is activity designed specifically to improve performance, often with a teacher’s help; it can be repeated a lot; feedback on results is continuously available; it’s highly demanding mentally, whether the activity is purely intellectual, such as chess or business-related activities, or heavily physical, such as sports; and it isn’t much fun.”
From: Talent Is Overrated: What Really Separates World-Class Performers from Everybody Else.

Where Gladwell and Colvin focus on how an individual (you!) can use deliberate practice to improve and achieve the success you want,  Learning Professionals should be thinking about how to use the ideas to help others develop and grow the expertise needed by the organizations we support.  Ericsson has something to say here as well, having recently published a new book on how to design learning environments to develop and measure expertise– Development of Professional Expertise: Toward Measurement of Expert Performance and Design of Optimal Learning Environments.  In a time when learning/instructional design has become generalized and de-professionalized to the point of non-existence, it’s refreshing to see a serious treatment that moves the profession forward.

Using “Deliberate Practice” to Improve Workplace Performance

Here are 10 ideas that just scratch the surface on how Learning Professionals can use “deliberate practice” to improve workplace skill and performance.

  1. Move from “mastery learning” to designing practice with feedback over longer periods of time (from learning events to a learning process). Deliberate practice differs from the concept of “mastery learning” at the heart of much instructional design. Mastery learning assumes a skill is perfected (or at least brought to a defined standard) in a fairly short period of time, often within the scope of a single course. The complex professional skills of modern knowledge workers and managers demand a stronger focus on long-term practice and feedback and on building learning around long-term objectives.
  2. Develop the person. Time, practice and individualized feedback imply a long term focus on individuals rather than on jobs or roles.
  3. Informal learning efforts like action learning, coaching and cognitive apprenticeships are critical, but they must be focused on practice and immediate feedback and extend over long periods of time.
  4. Relevant, frequent and varied practice must be the dominant and most important element in all formal training programs.
  5. Practice opportunities must extend far beyond initial training programs, to allow people to hone their skills through experimentation with immediate feedback.
  6. Create practice sandboxes and simulation centres for key organizational skills where people can practice their skills and experience immediate feedback in safe environment.
  7. Design visual feedback directly into jobs so professionals can immediately see the results of their work.  In this way working IS deliberate practice.
  8. Turn training events into the first step of a learning journey that will continue to provide opportunities to practice and refine skills throughout a career.
  9. Identify the interests and strengths of people and nurture them through opportunities for deliberate practice. Provide resources and support that encourage early effort and achievement.
  10. Ensure social media environments provide opportunities for coaching and mindful reflection on performance.