For the past 30 years, we have been conducting projects that produce measurable business and professional results by improving human performance.

Training is one of the many interventions that we design, develop, and deliver. Although we do not limit ourselves to training projects, we are best known for our work in this field.

The ADDIE Model (Analysis, Design, Development, Implementation, Evaluation) is one of the most recognized approaches to training development. Frequently, our colleagues and clients ask us, “Do you use the ADDIE model or some other model?”

Our answer is, “Yes.”

That is because we never assume it is a case of either A or B; it is always a case of A and B. When asked to explain clearly how our approach resembles the ADDIE model and how it differs from it, we say, “We use all the steps of ADDIE, all at once.”

Ours is a concurrent approach.

We scramble the steps of the ADDIE model. The result: performance improvement interventions that are developed faster, cost less, and produce better results. We have identified 16 ways in which our concurrent approach differs from ADDIE. Here are the top six.


The Four Components

Although ADDIE has five letters, those letters represent only four components in our approach. This is because we can never draw a clean boundary line between design and development, so we treat design/development as a single component.

Before we explain how we mix up the four components, let’s explore each of them in terms of its inputs, activities, and outputs:

Analysis: Just in Time, Just Enough

Current State versus Ideal State (the “what-could-be” state)

We have a major phobia about analysis paralysis. To get rid of this dysfunctional fear, we conduct the minimum amount of analysis (sometimes lasting only a matter of minutes) and jump into the other activities as soon as possible.

We focus our initial analysis on identifying the gap: In terms of a performance problem, this gap is the difference between the actual state and the ideal state. In terms of a performance opportunity, it is the distance between the actual state and the “what-could-be” state. Once we have determined the gap, we identify the root causes that prevent us from closing it. Then we select potential interventions.

Design/Development. Most of our additional analyses are conducted on a just-in-time basis, alongside our design/development activities. The types of analysis depend on the type of intervention selected. For example, if training is the intervention of choice, we analyze the entry competencies of the participants. If we choose a motivational system as the intervention, we analyze the participants’ preferred menu of reinforcing activities and their current levels of autonomy.

Implementation. We combine analysis activities with implementation activities. For example, we conduct a systems analysis to identify resources and constraints that will influence the impact of our intervention. We also analyze the multiple perspectives and hidden agendas of various stakeholders.

Evaluation. We treat evaluation as the other side of the analysis coin. We use the same types of data collection techniques in both of these components. Before we design suitable evaluation strategies and instruments, we once again analyze the characteristics of the participants, this time from an assessment point of view.


Combining Design/Development with Other Components

We have already discussed the connection between analysis and the design/development component.

Implementation. One of our mantras is to build airplanes while flying them. For example, we deliver training while designing it. We strive to get things done as close to the implementation context as possible. We involve typical participants and typical facilitators throughout our process as members of our design/development team.

Evaluation. All our evaluation activities immediately lead to re-design and re-delivery. We do not wait until the intervention package is completed before evaluating it. In the true spirit of continuous improvement, we test small chunks of developed materials and methods with our colleagues and resident guinea pigs and make on-the-spot adjustments. We use a variety of evaluation approaches to continuously enhance our interventions.

Implementation Is the Key

We have already explored the linkages between implementation and the analysis and design/development components.

Evaluation. Implementation is tightly integrated with final evaluation activities. We evaluate the intervention package in its actual context during implementation. The data from this authentic evaluation lead us to revisit our analysis and re-design materials and methods so that we achieve our goals more effectively.

Evaluation Permeates Everything

In our performance intervention process, evaluation is not a separate set of activities. We seamlessly integrate evaluation with every other component. Evaluation begins during analysis, plays a key role in design/development, and validates our activities during implementation.

Multiple Personality Syndrome?

In our performance improvement projects, we consistently make a single individual accountable for all the activities in order to provide a reassuring point of contact for clients and other stakeholders. We select (and train) our project leaders on the basis of their proven competency in all four components. This is a tough task, but we want this leader to be able to perform all the activities—all at once.