Mapping Informal and Formal Learning Strategies to Real Work

During the Q&A at a recent conference session on Social Learning, a retail industry attendee asked “I have to train 300 store level associates in new product knowledge in the next three months.  Is social learning really what I want?”  What would your answer be?

I advocate informal and social learning when appropriate and get as excited about them as you likely do, but they’re not a panacea for all our learning woes.  The current zeal around social learning solutions can distract from real performance needs (we’ve been distracted before).  Social learning gets positioned as the enlightened and “correct” solution for the modern workplace, while formal learning is cast as old, tired, and reluctantly tolerated as a vestige of the traditional, mechanistic workplace.

But set aside your biases one way or the other for the moment and simply think of the roles and functions you support in your organization.  The mix will vary by industry, of course, but your list is going to be some subset of the following:

  • Marketing
  • Sales
  • Product Development
  • Manufacturing
  • Operations
  • Administration
  • Service Delivery
  • Order Fulfillment
  • Information Technology
  • Procurement
  • Management and Leadership

Now think of the jobs or roles within those functions…the engineers, technicians, account executives, managers, IT specialists, health care workers, service specialists, and operations staff you support.  Do they all demand the same approach to developing the skills and capabilities their jobs require?  Is social learning, or traditional skills training, the most appropriate choice for every job type?  I hope your answer is no.

Consultants have been telling us for years that traditional, mechanistic organizations are disappearing, and with them linear and routine work.  There is no doubt that is the economic direction, but look around you…at the auto assembly lines, big box retail, supermarkets, call centres, healthcare technicians, administrative clerks in government, insurance, finance and elsewhere. Think about the jobs you support and you’ll see many examples of traditional work where social media-based learning is simply not a feasible way to develop skills quickly.

Task variety and standardization: Routine vs. knowledge work

Instead of overgeneralizing the value of any solution, it’s best to truly understand the skill and knowledge requirements of the jobs, roles or initiatives you support.  I’m not talking about task or needs analysis (though both are valuable tools).  Instead, go up one notch and categorize the types of “work” you support in your organization.  Almost all work, indeed entire organizations and industries, varies along a continuum of two broad factors: task variety and task standardization.

An approach for categorizing jobs, roles and work environments

Between the extremes of routine work and knowledge work lies work that combines standardization and task variety to different degrees. The following framework provides a classification tool for placing work types, jobs and roles. It’s an adaptation of the work of Yale organizational sociologist Charles Perrow.  Jeffrey Liker and David Meier used a different variation of this model in Toyota Talent.

Work Types, Task Standardization and Task Variety

Routine work

Routine work is highly standardized with little task variety. Job fundamentals need to be learned quickly and performed to company- or industry-defined standards. There is little room for variation, and the skills that need to be learned are narrow and focused. Progressive workplaces will also involve workers in problem solving and continuous process improvement, where experience builds tacit knowledge in problem recognition and problem solving that can be shared with others through informal vehicles.

Sample jobs and roles:

  • Assembly line workers
  • Bank teller
  • Data entry clerk
  • Oil and gas well drillers
  • Machine operators
  • Fast food server

Learning approach:

  • Formal, structured, job-specific skills training; performance support tools that enable standardized procedures

Technician work

The work of the technician is less standardized and includes more variety in the tasks and skills required by the role. The work still has many defined procedures and processes; however, they are more complex and often based on sophisticated systems and technology. The sequence can vary depending on the situation, so employees have more autonomy in selecting appropriate procedures. There’s also greater variety in the procedures and tasks to be completed, and as a result learning programs need to consider problem solving, decision making and continuous improvement. Tacit knowledge will be needed to solve real technical problems as they arise, and there is often a service element in technician work that can benefit from informal approaches to learning. Performance support systems are a natural fit for technician-oriented work, as is mobile learning for customer support technicians who often work at customer locations.

Sample jobs and roles:

  • Lab analyst
  • Quality Control Specialists
  • Radiation technologist
  • Maintenance workers
  • Technical support specialists
  • Most “trade” occupations

Learning approach:

  • Formal, structured learning for required procedures; performance support systems. Informal learning and apprenticeship approaches for building “know-how” and problem solving

Craft work

Craft-oriented work introduces even greater variety in tasks, skills and knowledge, but retains significant standardization for optimal performance. While there is a definable number of tasks, each situation employees face is somewhat different, and each requires a creative, slightly different solution. Over time, patterns in problems and solutions emerge for individual employees, and this becomes valuable experience (tacit knowledge) that they can pass on to novice employees through informal approaches. Basic skills and procedures are most efficiently taught through formal methods, but the most critical parts of the job are learned through years of experience across many situations. Management is more flexible, with fewer rules and formal policies. Teamwork and communication are paramount.

Sample jobs and roles:

  • Nurse
  • Sales professional
  • Call centre agent
  • Graphic designer
  • Air traffic controller
  • First level supervisor
  • Insurance administrator

Learning approach:

  • Formal learning for foundational procedures and skills. Informal learning, deep work experience and mentoring models for tacit knowledge.

Knowledge work

Finally, knowledge work involves little task standardization (although there is always some) and a great amount of task variety, requiring a wide range of skills, knowledge and collaboration. Professionals move from task to task, and each situation is unique, calling for spontaneous thinking, reasoning and decision making. Knowledge workers must adapt to new situations, assess complex data and make complex decisions.  They also need refined people skills.  The most critical aspects of what experts and knowledge workers do (after formal education) can only be learned on the job over time, through experience, mentors and knowledge sharing with other professionals.

Sample jobs and roles:

  • Professional engineers
  • Middle and senior management
  • Professions: law, medicine, architecture, science, academia, etc.
  • Software developers
  • Creative director

Learning approach:

  • Professional education, extensive job experience across a variety of situations and work assignments, action learning, mentorship, communities of practice.
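
If it helps to make the framework concrete, here is a minimal sketch of how the four work types and their dominant learning emphases might be encoded and looked up. The numeric ratings, the example job and the nearest-match heuristic are my own illustration, not part of Perrow’s model or Liker and Meier’s adaptation.

    # Illustrative sketch only: encode the four work types described above on the
    # two dimensions (task standardization and task variety) and look up the one
    # closest to a rough rating of a job. Ratings and the heuristic are examples.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class WorkType:
        name: str
        standardization: int  # 1 = low, 5 = high
        variety: int          # 1 = low, 5 = high
        learning_emphasis: str

    WORK_TYPES = [
        WorkType("Routine", 5, 1,
                 "Formal, job-specific skills training; performance support for standard procedures"),
        WorkType("Technician", 4, 3,
                 "Formal training for procedures; informal/apprenticeship learning for know-how"),
        WorkType("Craft", 3, 4,
                 "Formal foundations; mentoring and deep experience for tacit knowledge"),
        WorkType("Knowledge", 1, 5,
                 "Professional education, varied assignments, action learning, communities of practice"),
    ]

    def closest_work_type(standardization: int, variety: int) -> WorkType:
        """Return the work type whose profile best matches the given ratings."""
        return min(WORK_TYPES,
                   key=lambda wt: abs(wt.standardization - standardization)
                                  + abs(wt.variety - variety))

    # Example: a call centre agent -- fairly standardized, but every call differs.
    print(closest_work_type(standardization=3, variety=4).name)  # Craft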

Balanced approaches

Of course most work requires a combination of knowledge work and routine work. These characteristics of jobs and work environments call for different approaches to training and development. There is a continuum of learning solutions that range from formal to non-formal to informal. I’ve posted my view on this continuum in the past (Leveraging the full learning continuum).

Avoid overgeneralizing your solutions at all costs. Participate in social media and learn its value for developing tacit knowledge and creating new knowledge in your organization, but don’t assume it is the “correct” solution for all your audiences. Start with the functions you serve. Truly understand the work done by the jobs and roles within those groups and the skills necessary for them to be successful. Only then create a solution that will meet those needs.

Making Informal Learning Assets Work

Seeking ways to leverage new social media environments, learning departments are discovering ways to sneak a little formal learning through the informal learning back door. Some of our clients, for example, are looking to load up their social learning environments with small bits of learning content related to business goals, the notion being that these informal learning assets will live or die on the strength of their connection to employee performance needs. Informal learning assets (or perhaps more accurately, formal learning assets designed for informal consumption) are small segments of learning media such as videos, podcasts, documents, animations, short interactive pieces, images, performance guides, job aids, process descriptions, anything with a learning intention that can be posted to a social media environment. They can be created by anyone, from learning designers to managers, employees and team members.

Survival of the fittest

The strategy creates a kind of Darwinian free-for-all of digital learning resources. Those that best fit real learning and performance needs will get viewed, liked, shared, discussed and commented on more than those that don’t quite measure up. The best become internal learning memes that do their viral tour of duty; those that don’t hit the mark fall off the social radar, never to see another day or produce learning offspring. Or so the theory goes. It’s an interesting strategy with loads of implications for designers, suppliers and users of learning content. The idea (I hesitate to call it a trend) is leading some organizations and training suppliers to deconstruct their existing learning programs into bits and pieces for populating internal social media environments, such as they are.

Making informal learning assets work

I like the idea of infusing communities with digital learning assets, but there are a few cautions to watch as we head down this new path. Foremost is the profusion of “information”-oriented learning assets at the expense of the practice, application and reflection that we know are at the heart of real learning and improvement. Information-based assets, no matter how novel or entertaining we make them, are not enough. To bastardize an old Magerism, if telling alone resulted in learning we’d all be so smart we could hardly stand it.

There are ways to structure and design informal learning assets that preserve the best of what we’ve learned from formal design and bring it into the informal learning world. A model we’ve been experimenting with connects formal, informal and social learning, based on five learning essentials (you’ll recognize them if you are familiar with David Merrill’s First Principles or Bernice McCarthy’s 4MAT). Effective learning solves authentic problems and tasks, connects new knowledge with existing mental models, uses powerful ways of presenting and demonstrating new knowledge, provides many and varied opportunities to practice new skills with coaching and reflection, and finally guides application to new situations on the job.

Too many informal learning assets target only the “key knowledge” essential (#3), without any connection to the remaining four learning essentials. Well-designed learning programs will account for each of the essentials, but there is no reason they all have to be bundled up together in a tidy formal learning bow. In fact, the essence of good informal learning is that the practice-with-coaching essential (#4) takes place on the job, with feedback and coaching from colleagues or mentors, inside a social media environment or face to face. Forums and discussions are excellent ways to gently guide application. Job aids and performance support systems are effective vehicles for building skills into the workflow (#5). Real business problems and tasks (#1) can be used instead of artificial cases. My point is that, with care, each of these other essentials can be developed as informal learning assets as effectively as a good information-driven asset.

This view can also serve as a guide when deconstructing classroom programs for  conversion to social media environments. Instead of retaining only the key knowledge from your programs, look for effective ways to create assets that support the other learning essentials as well.

Learning assets associated with a specific knowledge domain, role or learning objective can be connected through tagging, linking or even a good old fashioned learning path.
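
As a simple illustration of that kind of connection, here is a minimal sketch of learning assets tagged by topic and by the learning essential they support, with a helper that assembles a rough learning path. The asset names, tags and ordering rule are hypothetical, not a prescription.

    # Illustrative sketch only: tag each asset with a topic and the learning
    # essential it supports (1 = authentic problem, 2 = connect to mental models,
    # 3 = present key knowledge, 4 = practice with coaching, 5 = apply on the job),
    # then pull a simple learning path for a topic, ordered by essential.
    from dataclasses import dataclass

    @dataclass
    class LearningAsset:
        title: str
        media: str      # "video", "podcast", "job aid", "forum", ...
        topic: str
        essential: int  # 1-5, as above

    assets = [
        LearningAsset("Pricing objection scenarios from last quarter", "forum", "negotiation", 1),
        LearningAsset("Five-minute overview video", "video", "negotiation", 3),
        LearningAsset("Role-play practice guide with coaching checklist", "document", "negotiation", 4),
        LearningAsset("Call-planning job aid", "job aid", "negotiation", 5),
    ]

    def learning_path(topic: str) -> list:
        """Assets for one topic, ordered by the essential they support."""
        return sorted((a for a in assets if a.topic == topic), key=lambda a: a.essential)

    for asset in learning_path("negotiation"):
        print(asset.essential, asset.media, "-", asset.title)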

Once loaded into social media environments, users and community members will begin using them to improve their performance and manage their own knowledge. Not only will they consume the learning assets, they will create their own, and in doing so create new and emergent knowledge. As new ideas emerge they will evolve into standard practice and can feed the development of new or revised formal learning programs.

This connection between formal, informal and social learning might look something like the following:

Extending Action Mapping for Performance Design

Through her Action Mapping process, Cathy Moore has demystified, simplified and put a friendly face on an analysis process that produces lean and effective learning programs with an emphasis on practice and application.  The four-step analysis process of identifying business goals (1), desired actions/behaviours (2) and practice activities (3) before identifying content (4) is much advocated but rarely practiced in instructional design.  She also uses a helpful visual mapping method to work through this four-step process.

Extending the process to performance design

I used the process (and visual mapping approach) to facilitate a learning requirements session a while back.  Worked like a charm.  I thought then that the process might be taken a little further and used to identify gaps in the immediate performance environment that impede optimal performance, and then to specify solutions for improvement.  Here’s what I’m getting at…

Performance Consulting thought leaders (and hard-won experience) tell us that newly developed skills alone, without a supporting environment, rarely produce the performance impact we need.  If you accept this view, you understand that skills and knowledge are only one factor among many needed for performance and that, in fact, it’s often the performance environment and not the skills that need adjustment.  Geary Rummler organized these critical performance factors within a systems framework and labeled it the Human Performance System (HPS). Thomas Gilbert categorized the factors in his seminal Performance Engineering Matrix, which Carl Binder has distilled into his Six Boxes Model. The Robinsons summarized the factors in their Performance Consulting process.  Mihaly Csikszentmihalyi has found similar factors in his work on optimal performance and flow states.  These authors have developed diagnostic tools based on the performance factors that can be used by teams, managers and performance consultants to identify barriers in the work environment and to design tools, processes, and systems that improve performance.

Borrowing from the above models, the critical performance factors might be summarized as follows:

  • Clear Expectations and goals (E)
    Do employees understand the behavior and results expected of them and their team?
  • Supportive Tools, resources and business processes (T)
    Are employees supported by helpful performance aids, process guides and knowledge tools?
  • Timely and meaningful Feedback on results of action (F)
    Is immediate feedback provided to employees and their team (system generated or human) on the quality and accuracy of their actions and output?
  • No Interfering or competing demands (I)
    Is the role free of demands on time and task that interfere with accomplishment of individual and team goals?
  • Consequences aligned to expectations and goals (C)
    Do good things happen when employees accomplish goals and meet expectations, or do they happen more for undesired performance?

So how might we extend Cathy’s Action Mapping method to design an optimal performance environment in addition to a learning solution?  The first two steps remain the same: 1. Identify the business goal. 2. Identify what people need to do to reach the goal.  At this point, however, the process would shift to the key performance support questions defined above.  For each behaviour (or behaviour cluster) the following performance design actions can be taken:

  1. Create a vehicle to continuously communicate the key goals, expectations and standards of performance
  2. Design performance aids, automated tools, social learning environments, Communities of practice, and business process adjustments. The appropriate tools and supports will, of course, depend on the type of work.
  3. Create a mechanism for providing continuous information (feedback) to individuals or teams on how they are performing against the desired actions. (I have posted some ideas on this here and here).
  4. Define specific actions for reducing interfering tasks and multitasking and increasing opportunities to focus on the task without competing demands.
  5. Revise the balance of consequences in favor of the desired performance.

Using the labels I listed above, the extended Action Map might look something like this (common support actions could support more than one behaviour):
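
The original post illustrated this with a diagram; as a rough stand-in, here is a minimal sketch (my own encoding, not Cathy Moore’s notation) of one behaviour from step 2 extended with support actions labelled by the factors above. The goal, behaviour and actions are hypothetical examples.

    # Illustrative sketch only: one desired behaviour from an action map, extended
    # with performance-environment actions tagged by factor:
    # E(xpectations), T(ools), F(eedback), I(nterference), C(onsequences).
    from dataclasses import dataclass, field

    @dataclass
    class SupportAction:
        factor: str       # "E", "T", "F", "I" or "C"
        description: str

    @dataclass
    class MappedBehaviour:
        behaviour: str
        practice_activities: list = field(default_factory=list)
        support_actions: list = field(default_factory=list)

    business_goal = "Reduce order-entry errors by 20% this quarter"

    action_map = [
        MappedBehaviour(
            behaviour="Verify order details against the customer record before submitting",
            practice_activities=["Scenario practice using the most common error cases"],
            support_actions=[
                SupportAction("E", "Post the error-rate target and verification standard on the team dashboard"),
                SupportAction("T", "Add a one-page verification checklist to the order-entry screen"),
                SupportAction("F", "Send weekly error reports to each agent and team"),
                SupportAction("I", "Hold incoming calls during the final verification step"),
                SupportAction("C", "Recognize teams that hit the accuracy target in monthly reviews"),
            ],
        ),
    ]

    for item in action_map:
        print(business_goal)
        print("  Behaviour:", item.behaviour)
        for act in item.support_actions:
            print(f"    [{act.factor}] {act.description}")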

Adding Outputs and accomplishments

The approach could be further enhanced by identifying desired work outputs before behaviours/actions (a revised step 2).  This would be especially useful when starting with the analysis of a job rather than a specific business objective, and it is important for knowledge work, where there may be multiple behavioural paths to the same work output. Carl Binder has labeled this approach the performance chain. The same performance thinking is at the root of both Action Mapping and the Performance Chain approach. You can learn more about performance thinking and the performance chain approach at the Six Boxes web site here.

Implementation

Performance Consulting gets legitimate criticism at times for being too prescriptive and for relying on external “experts” to implement processes like those above. But there is no reason empowered self-managing teams or process improvement groups cannot use the same tools to diagnose and design or influence their own performance environment. A good performance consultant can facilitate teams through this process.  I learned a while ago from Geary Rummler that good performance consultants can provide both the training artifact requested by the organization and an improved performance environment.  The extended Action Mapping method may be a great way to sneak some performance improvement into your training projects.

The 30 Second MBA

I came across an interesting resource recently: The 30-Second MBA.

It’s a venture of Fast Company Magazine.  Leaders and entrepreneurs from a variety of industries are asked to describe their approach to various leadership problems and topics in 30 seconds or less (ticking clock and all).  The site describes its mission like this:

The great lament of any reporter is what to do with the jewels that routinely get left on the cutting room floor after a really great interview. Enter the 30-Second MBA, an ongoing video “curriculum” of really good advice from the trenches, directly from people who are making business happen.

The “professors” providing this curriculum include the likes of Mark Zuckerberg (CEO, Facebook), Alan Mulally (CEO, Ford), Padmasree Warrior (CTO, Cisco), Vivian Schiller (President, NPR) and a host of others.  They each take on topics and business problems such as leadership, teamwork, decision making, customer relationships, growth, communication, crisis management and more, all within the 30-second video format.

The site is interesting from a few perspectives.  You can take or leave the messages provided, but the site demonstrates a model that can easily be mimicked inside organizations for capturing and sharing knowledge.  The site also includes community features to discuss the perspectives from the business leaders and to post your own.  Knowledge captured like this can be used as informal learning assets to support a management development approach like the one I described in my recent post Management Development Redux.  Learning assets like this are not the complete picture, however, and should never be used as a replacement for a targeted management development program.  They should be used as learning resources that support solving unique business challenges through discussion and reflection in action learning teams, communities of practice or other collaborative approaches.

Sites and management development services like this are popping up more frequently.  They’ll likely start to change the shape of management development inside organizations, both as suppliers of content for internal communities and simply as models for the direction internal management development might take.

Evaluating Training and Learning Circa 2011

In a recent article in CLO magazine, Dan Pontefract questioned the value of traditional training evaluation and the Kirkpatrick approach in particular (article re-posted here).  The article raised the ire of the Kirkpatrick organization, and Dan responded in a follow-up post.  Others had observations on the post (see Don Clark and Harold Jarche).  I’ve been involved in many evaluation efforts over the years, both useful and ill-advised, and have some thoughts to impose on you.

To summarize the positions, I’ll paraphrase Dan and Wendy Kirkpatrick (probably incorrectly, but this debate happens so often that I’m using Dan and Wendy more as archetypal voices for the two sides of the argument):

Dan: Learning is a continuous, connected and collaborative process.  It is part formal, part informal and part social.  Current evaluation methods are dated, focused only on formal learning events, and need to be tossed.  (He doesn’t say it, but I think he would place less importance on evaluation in the growing world of social learning.)

Wendy (Kirkpatrick): Formal training is the foundation of performance and results.  It must be evaluated in measurable terms. Clearly defined results will increase the likelihood that resources will be most effectively and efficiently used to accomplish the mission.  (She doesn’t say it but I think she would suggest social learning, when considered at all, is simply in a supporting role to formal training.)

On the surface it sounds like they couldn’t be more polarized, like much of the current debate regarding formal vs. informal learning. Here are some thoughts that might help find some common ground (which, I’ll admit, isn’t as much fun as continuing to polarize the issue).

Confusing Training and Learning muddies the purpose of evaluation

In the last 10 years or so we’ve moved away from the language of training and instruction, with its prescriptive and objectivist underpinnings (boo!), to the softer language of learning, most recently of the social variety (yea!).  Most “training” departments changed their moniker to “learning” departments to imply all the good stuff, but offer essentially the same set of (mostly formal) learning services.  Learning is the new training, and this has confused our views of evaluation.

Learning (as I’m sure both Dan and Wendy would agree) truly is something we do every day, consciously, unconsciously, forever and ever, amen.  We are hard-wired to learn by adopting a goal, taking actions to accomplish the goal (making a decision, executing a task, etc.) and then making adjustments based on the results of our actions.  We refine these actions over time with further feedback until we are skilled or expert in a domain. This is learning.

Training is our invention to speed up this learning process by taking advantage of what has already been learned and freeing people from repeating the errors of others.   In business fast is good.  Training, at least in theory, is the fast route to skilled performance versus the slow route of personal trial and error.  It works very well for some tasks (routine) and less well for others (knowledge work and management development).   Ironically, by stealing training from the hands of managers and from early mentor/apprenticeship approaches we may have stolen its soul (but I digress).

In any case, like it or not, in an organizational setting training and learning are both means to an end: individual and organizational performance.  And performance provides a better filter for making decisions about evaluation than a focus on training/learning does.

Should we evaluate training?

If it’s worth the considerable cost to create and deliver training programs, it’s worth knowing whether they are working, even (maybe especially) when the answer is no.  With the growing emphasis on accountability, it’s hard to justify anything else.  Any business unit, Training/Learning included, needs to be accountable for effective and efficient delivery of its services.

The Kirkpatrick Framework (among others) provides a rational process for doing that, but we get overzealous in the application of the four levels.  In the end, it’s only the last level that really matters (performance impact), and that is the level we least pursue.  And I don’t know about you, but I’ve rarely been asked for proof that a program is working.  Senior management operates on judgment and the best available data for decision making far more than on any rigorous analysis.  When we can point to evidence and linkages, in performance terms, that our training programs are working, that’s usually all we need.  I prefer Robert Brinkerhoff’s Success Case Method for identifying evidence of training success (vs. statistical proof) and for using the results of the evaluation for continuous improvement.

Unlike Dan, I’m happy to hear the Kirkpatrick crew has updated their approach to be used in reverse as a planning tool.  It’s not new, however; it’s been a foundation of good training planning for years.  It puts the emphasis on proactively forecasting the effectiveness of a training initiative rather than evaluating it in the rear-view mirror.

Should we evaluate social learning?

It gets slippery here, but stay with me.  If we define learning as I did above, and as many people do when discussing social learning, then I think it’s folly to even attempt Kirkpatrick-style evaluation.  When learning is integrated with work, lubricated by the conversations and collaboration in social media environments, evaluation should simply be based on standard business measurements.  Learning in the broadest sense is simply the human activity carried out in the achievement of performance goals.  Improved performance is the best evidence of team learning.  This chart from Marvin Weisbord’s Productive Workplaces: Organizing and Managing for Dignity, Meaning and Community illustrates the idea nicely:


In his post Dan suggests some measures for social learning:

“Learning professionals would be well advised to build social learning metrics into the new RPE model through qualitative and quantitative measures addressing traits including total time duration on sites, accesses, contributions, network depth and breadth, ratings, rankings and other social community adjudication opportunities. Other informal and formal learning metrics can also be added to the model including a perpetual 360 degree, open feedback mechanism”

Interesting as it may be to collect this information, these are all measures of activity, reminiscent of the detailed activity data gathered by Learning Management Systems.  Better, I think, to implement social learning interventions and observe how they impact standard business results.  Social learning is simply natural human behavior that we happen to have a very intense microscope on at the moment.  To evaluate and measure it would suck dry its very human elements.

Evaluation should inform decision-making

Evaluation is meant to inform decisions. We should measure what we can and use it in ways that don’t bias what we can’t.  The Kirkpatrick approach (and the others that have expanded on it over the years) provides a decent framework for thinking about what we should expect from training and other learning interventions.

However, myopic and overly rigorous measurement can drive out judgment and cause us to start measuring trees and forget about the forest.  Thinking about organizational learning as a continuum of possible interventions, rather than an abstract dichotomy between formal and informal learning, will help us decide on appropriate evaluation strategies matched to the situation.  Whew! Maybe we need to evaluate the effectiveness of evaluation 🙂

Leadership Development in a Learning 2.0 World

Last week  I presented a session titled Leadership Development in a Learning 2.0 World at the CSTD 2010 National Symposium. Here is the description of the session from the conference program:

Leadership Development in a Learning 2.0 World

Developing effective leaders and managers is an increasingly important task for the learning function. Leadership development has been slow to adopt eLearning strategies, but recent developments in web 2.0 technologies, along with changing perspectives on workplace learning, are changing that. The social learning drivers behind learning 2.0 are a natural fit for the learning needs of managers and leaders and provide the learning function with an opportunity for real innovation in leadership development practices. This session will provide an overview of the key concepts, strategies and tools to help transform leadership development practices for the emerging learning 2.0 world.
Learning Outcomes:
  • Contrast current leadership development practices with learning 2.0 driven practices
  • Describe benefits of learning 2.0 for transforming leadership and management development
  • Describe a model of leadership development driven by learning 2.0 principles
  • Envision a future leadership development program for your organization built on a learning 2.0 foundation
  • Define strategies for integrating learning 2.0 concepts into current leadership development programs

I promised the participants in my session that I would post the slides  on this blog.  Thank you all for attending!  You were a great audience.  Please leave a comment to say hello or post any thoughts you had on the session.

You can view the presentation below or download it directly by clicking this  link:  Leadership Development in Learning 2.0 World

Dan Pontefract was originally scheduled to present with me but was not able to make it.  For those of you interested in Dan’s very active and always interesting blog, Training Wreck, you can find it here.

Conference attendees braved the snow (yes, snow!) in Calgary to participate in some very interesting sessions.   As always, it was a pleasure to connect with old colleagues and meet many new people with interesting perspectives on the profession.  Thanks to the CSTD organizing team!

Web 2.0 Helping to Generate Measurable Business Value

In an earlier post (For Web 2.0 What’s in the Workflow is What Gets Used), I referred to some ongoing research McKinsey & Company is doing on web 2.0 adoption in the workplace: how and where it is being used and the impact it is having on business.

The research is based on an annual survey of 1,700 companies from across the globe in a range of industries and functional areas, and has been running for about three years.  The McKinsey Quarterly recently summarized the results in an interactive visual chart and in a full article titled How companies are benefiting from Web 2.0: McKinsey Global Survey Results (the article is free, but you have to sign up for the free membership to see it in full).

The following chart from the interactive feature summarizes how web 2.0 technologies are being used for some internal purposes, including managing knowledge and training.  Internal blogs and wikis are being used significantly for Managing Knowledge. For Training uses, the highest categories are Podcasts and Video Sharing (unfortunately the most presentation-oriented technologies of the bunch).  Social Networking is being used extensively for fostering collaboration and for identifying and recruiting talent.


Click to access the McKinsey interactive chart

If you go to the interactive feature be sure to listen to the “about this research” audio snippet.  It provides a brief summary of the research and findings across three years.   Some conclusions McKinsey draws:

  • an increasing number of companies are adopting web 2.0 technologies
  • more companies will start to use them for a wider range of purposes involving customers, internal employees and suppliers
  • uses will continue to evolve, and companies will get better at deriving business value

The striking result is that two-thirds of the companies are deriving measurable business value.

McKinsey summarizes:

“This year’s survey turned up strong evidence that these advantages are translating into measurable business gains.  When we asked respondents about the business benefits their companies have gained as a result of using Web 2.0 technologies, they most often report greater ability to share ideas; improved access to knowledge experts; and reduced costs of communications, travel, and operations.  Many respondents also say Web 2.0 tools have decreased the time to market for products and have had the effect of improving employee satisfaction”.

ADDIE is dead! Long live ADDIE!

I’m at risk of flogging a very dead horse here, but some recent posts from Ellen Wagner (What is it about ADDIE that makes people so cranky?) and Donald Clark (The evolving dynamics of ISD and Extending ISD through Plug and Play) got me thinking about instructional design process and ADDIE in particular (please  don’t run away!).

Ellen’s post focused on how learning designers in a Twitter discussion got “cranky” at the first mention of the ADDIE process (Analysis, Design, Development, Implementation and Evaluation).  In the Twitter #Lrnchat session, participants had a gag response to the mere mention of ADDIE (sound familiar?).  Don responded with some great comments on how ISD (ADDIE) has evolved and adapted.

Much of my career has been spent applying ADDIE in some form or other, and I’ve landed on a conflicted LOVE/HATE relationship with it, to which you, lucky reader, will now be subjected.

HATE (Phase A, Step 3.2.6)

Throughout the ’90s, many instructional designers and e-learning developers (me included) grew disgruntled with ADDIE (and its parent process, Instructional Systems Design, or ISD) as training struggled to keep up with business demands for speed and quality, and as we observed process innovations in the software and product development fields (rapid application development, iterative prototyping, etc.).

In 2001 that frustration was given voice in the seminal article “The Attack on ISD” by Jack Gordon and Ron Zemke in Training Magazine (see here for a follow-up).

The article cited four main concerns:

  • ISD is too slow and clumsy to meet today’s training challenges
  • There’s no “there” there. (It aspires to be a science but fails on many fronts)
  • Used as directed, it produces bad solutions
  • It clings to the wrong world view

I have memories of early projects, driven by mindless adherence to ISD, where I learned the hard way the truth in each of these assertions.  As an example of what not to do, and a guard against blowing my brains out in future projects, for years I have kept an old Gagne-style “instructional objective” from an early military project that would make your eyes burn.

Early ISD/ADDIE aspired to be an engineering model: follow it precisely and you would produce repeatable outcomes.  The engineering model assumes a “one best way,” and the one best way of the time was grounded in the science of behavioral psychology and general systems theory.  The “one best way” thinking appealed to the bureaucratic style of the times, but it couldn’t be more anathema to the current crop of learning designers, especially those focused on more social and constructivist approaches to learning.  And they are right.

Another criticism I have of ADDIE parallels Ellen’s comments.  Adherents and crankites alike view ADDIE as an “instructional design” methodology when in fact it should be viewed more as a project management process for learning projects.  Viewing instructional design as synonymous with ADDIE does both a disservice.  There is plenty of ID going on inside ADDIE, but it lives primarily in the Design phase of the process, and it can be much more creative than the original model prescribes.

In the end, the Achilles heel of formal ISD/ADDIE rests in its prescriptive posture and its foundation in behavioural psychology.  Behavioural psychology, and performance technology as its extension in the workplace, have added greatly to our understanding of how to improve human learning at work, but we have learned much since then, and technology has provided tools to both designers and learners that profoundly change the need for a process like ADDIE.

Of course, the ADDIE process was (and is) not unique to the learning design profession.  For many years the five broad phases of ADDIE were the foundation for the design of most systems.  Software engineering, product development and interactive/multimedia development are all based on some variation of the model.  Most, however, have evolved from the linear “waterfall” approach of early models (you can’t start the next phase until the previous one has been done and approved) to iterative design cycles based on rapid prototyping, customer participation and loads of feedback loops built into the process.  And learning/e-learning is no different.  It has evolved and continues to evolve to meet the needs of the marketplace. Much of the current gag reaction to ADDIE, like that experienced by Ellen, is based on the old waterfall-linear approach and the assumed instructivist nature of the model.  And again, the gag is entirely valid.

However, if you can break free from the history, preconceptions and robotic application of ADDIE, you may find room for something approaching…

LOVE (Phase B, Step 2.3.7)

I can’t say I ever use ADDIE in its purest form any longer.  For e-learning and performance applications, I prefer processes with iterative design and development cycles, usually a variation of a rapid application development process like this one from DSDM.


Or for an example specific to e-learning,  this process from Cyber Media Creations nicely visualizes the iterative approach:

Or for the Michael Allen fans out there, his Rapid Development approach described in Creating Successful e-Learning is very good.  There is a respectful chapter in the book on the ADDIE limitations and how his system evolved from it.

But at the heart of all these processes are the familiar phases of analysis, design, development, implementation, and evaluation,  albeit cycling through them many times along the way.

For me, ADDIE has become a useful heuristic,  not even a process really, but a framework for thinking,  coaching instructional designers,  and managing learning and e-learning projects.  Many e-learning designers these days are not formally trained in Instructional Design and initially think of it as instructional “writing” more than the holistic and systemic approach at the heart of ADDIE.   Likewise, customers and subject matter experts are much easier to work with once they understand the broad project process that ADDIE represents.  For these two purposes alone I am thankful for ADDIE as a framework.  ADDIE has staying power because of its simplicity.  Purists will say it has been watered down too much but in many ways that’s what keeps it alive.

ADDIE phases are also a useful way to think about the organization design and structure of a learning function.  They are the major processes that need to be managed and measured by most learning functions.  Just think of the functionality most LMS systems have added since their inception.

In the end, ADDIE (and its more current modifications) is probably most valuable because it makes the work of learning design visible. This is an essential feature of productive knowledge work of all kinds.  Almost every learning/training group uses ADDIE as a starting point to design a customized process that can be communicated, executed, measured and repeated with some level of consistency.  Equally important in knowledge work is the discipline of continually improving processes and breaking through to better ways of working.  This has resulted in the many innovations and improvements to the ADDIE process since its inception.

SUMMATIVE EVALUATION (Phase E, Step 5.2.3)

I’ve come to believe that the power of ADDIE/ISD lies in the mind and artful hands of the user.  In my experience, Rapid Application Development processes can become just as rigid and prescriptive under the watch of inflexible and bureaucratic leaders as ADDIE did.

There’s an intellectual fashion and political correctness at work in some of the outright rejection of ADDIE.  It’s just not cool to associate with the stodgy old process.  Add Web 2.0, informal and social learning to the mix and some will argue we shouldn’t be designing anything.

For the organizations I work with, there is no end on the horizon to formal learning (adjustments in volume and quality would be nice!).  Formal learning will always require intelligent authentic learning design, and a process to make it happen as quickly and effectively as possible.

Simulation and Immersive Learning

Here’s a nice example I stumbled on this week that illustrates the transition that training needs to make.

A few years ago the UPS driver training unit had a mini-revolt on its hands from younger drivers who were unhappy with the long, traditional classroom-based training program required for new drivers.  The program was experiencing increasingly high failure rates, and the number of tasks that had to be learned was becoming too much for classroom delivery.  Peggy Emmart, corporate schools coordinator in UPS’s corporate training and development department, commented, “While in the early ’90s our DSPs (drivers) may have needed to concentrate on eight key tasks each day, they now routinely perform 30 to 40 major tasks within the same time frame.”

UPS responded by completely overhauling the driver training program into a simulation- and immersion-based experience called UPS Integrad.  It included a training facility that incorporated a mix of e-learning, simulations, virtual learning, and immersive learning by doing.

Here is a video feature from ABC News on the program. Click the image to go to the video. There is a short ad first, so be patient (sorry, I couldn’t embed it).

UPS Integrad ABC News Video profile (click to link)


Results

The Integrad program has “exceeded expectations” in all three of the program’s primary goal areas, which include enhanced DSP safety, decreased new driver turnover, and accelerated time to proficiency.

“It wasn’t about video games, it was about providing hands-on application and allowing trainees to learn by doing in a way that connects unambiguously with their jobs”.

When UPS originally started the redesign effort, they thought the answer to training younger workers was going to be video game-type training.  Through additional research, they learned it wasn’t about video games; it was about “providing hands-on application and allowing trainees to learn by doing in a way that connects unambiguously with their jobs”.  I think this is a useful caution to e-learning designers moving down the path of video game-style instruction.

Here’s an article that describes the program in more detail:  UPS Moves Driver Training From the Classroom to the Simulator

But is it appropriate for knowledge workers?

The UPS program is an example of mostly physical or psychomotor learning, but the lessons hold true for knowledge work as well.  For managers to learn “problem solving and decision making” they need to make decisions and solve real work problems, first in a simulated setting and then in a real work context, with feedback and coaching.  New consultants need to consult, learning designers need to design learning, and engineers need to design and test solutions, all within safe, feedback-rich, immersive work contexts.

As UPS summarized so simply, “The point of all this hands-on instruction is to simulate, as closely as possible, exactly what it’s like to be a…” (fill in the blank).


Just say no 🙂

Deliberate Practice, Learning and Expertise

I’m back from some vacation where I read Malcolm Gladwell’s Outliers on the beach at our cottage (along with some very funny David Sedaris).

Even if you haven’t read Outliers yet, you probably know that it sets out to dispel the myth that intelligence or innate ability is the primary predictor of success.  Instead, Gladwell summarizes research and provides examples to show that it is hours and hours of practice (10,000 to be exact) and a “practical intelligence” (similar in concept to emotional intelligence) acquired through experience that are the real determinants of success.

Gladwell covers similar territory (and draws on the same research) as Geoff Colvin’s Talent is Overrated: What Really Separates World-Class Performers from Everybody Else, another excellent book that elaborates on an article Colvin wrote for Fortune magazine a few years ago: “What it Takes To Be Great”.

Both books debunk the assumption that “gifted” skill and great performance come from innate talent, personal traits or hard-wired competencies and abilities.  The research Gladwell and Colvin draw on is impressive.  Both point to the extensive work of K. Anders Ericsson at Florida State University.  Ericsson has conducted years of rock-solid research on the role of “deliberate practice” in the acquisition of expert performance.  If you like to seek out source research as I do, then you’ll enjoy Ericsson’s (and others’) impressive work collected in the Cambridge Handbook of Expertise and Expert Performance. Here is an earlier (and less hefty) review of some of the same research: “Deliberate practice” in the acquisition of expert performance.

At the core of these works is the concept of “deliberate practice” over long periods of time (up to ten years).  While it’s impossible to boil the theory down into a few points, here it is…uh…boiled down into a few points.  Highly skilled performance in all aspects of life and work can be developed by the rough equivalent of 10,000 hours (10 years or so) of increasingly specific, targeted and mindful practice in a domain of expertise. The practice must be:

  • Specific & technique-oriented
  • Self regulated
  • Involve high-repetition
  • Paired with immediate feedback on results
  • Isn’t necessarily “fun”, (in fact can be grueling hard work)

“Deliberate practice is activity designed specifically to improve performance, often with a teacher’s help; it can be repeated a lot; feedback on results is continuously available; it’s highly demanding mentally, whether the activity is purely intellectual, such as chess or business-related activities, or heavily physical, such as sports; and it isn’t much fun.”
From: Talent Is Overrated: What Really Separates World-Class Performers from Everybody Else .

Where Gladwell and Colvin focus on how an individual (you!) can use deliberate practice to improve and achieve the success you want, learning professionals should be thinking about how to use these ideas to help others develop and grow the expertise needed by the organizations we support.  Ericsson has something to say here as well, having recently published a new book on how to design learning environments to develop and measure expertise: Development of Professional Expertise: Toward Measurement of Expert Performance and Design of Optimal Learning Environments.  In a time when learning/instructional design has become generalized and de-professionalized to the point of non-existence, it’s refreshing to see a serious treatment that moves the profession forward.

Using “Deliberate Practice” to Improve Workplace Performance

Here are 10 ideas that just scratch the surface on how Learning Professionals can use “deliberate practice” to improve workplace skill and performance.

  1. Move from “mastery learning” to designing practice with feedback over longer periods of time (from learning events to a learning process). Deliberate practice differs from the concept of “mastery learning” at the heart of much instructional design.  Mastery learning assumes a skill is perfected (or at least brought to a defined standard) in a fairly short period of time, often within the scope of a single course. The complex professional skills of modern knowledge workers and managers demand a stronger focus on long-term practice and feedback, and on building learning around long-term objectives.
  2. Develop the person. Time, practice and individualized feedback imply a long-term focus on individuals rather than on jobs or roles.
  3. Informal learning efforts like action learning, coaching and cognitive apprenticeships are critical, but they must be focused on practice and immediate feedback and extend over long periods of time.
  4. Relevant, frequent and varied practice must be the dominant and most important element in all formal training programs.
  5. Practice opportunities must extend far beyond initial training programs, to allow people to hone their skills through experimentation with immediate feedback.
  6. Create practice sandboxes and simulation centres for key organizational skills where people can practice and experience immediate feedback in a safe environment.
  7. Design visual feedback directly into jobs so professionals can immediately see the results of their work.  In this way, working IS deliberate practice.
  8. Turn training events into the first step of a learning journey that will continue to provide opportunities to practice and refine skills throughout a career.
  9. Identify the interests and strengths of people and nurture them through opportunities for deliberate practice. Provide resources and support that encourage early effort and achievement.
  10. Ensure social media environments provide opportunities for coaching and mindful reflection on performance.