Matthew Richter posts comments on LinkedIn almost daily. You can follow him and join the conversation at http://linkedin.com/in/matthew-richter-0738b84. For the benefit of our readers, we decided to compile and reprint some of his provocative pieces from the past. Let us know what you think.
Challenge the Validity
We trainers profess a lot. We teach models and theories via finely tuned presentations. But do we challenge the validity and efficacy of those models? Where do they come from, and did that source present them accurately? The best trainers are skilled in statistical methods, research, and critical thinking (and they have many other abilities, too). The best trainers know the differences between types of validity. The best trainers do not adhere to models and practices that lack academic rigor. As trainers, it is our job to translate useful academic ideas and make them practical for our participants. We should never forget to test and challenge the validity of the concepts we teach.
Models and Theories
Trainers, coaches, and instructional designers are a weird bunch. We like models. We like theories. We like ideas and content. We like details, and of course, we love the process of what we do. We forget that our participants want to develop skills that solve problems. They want to do something better. They want tools that are simple to apply and readily help them do their jobs. We should use our models as background, informing our designs toward our goals. But when in front of participants, we should keep the theory to a minimum unless asked otherwise.
The more a client tells me they want the exact same experience for all participants during a training program, the more I question what they want to achieve. I am all in favor of consistent outcomes, but the journey toward those expectations should vary based on the people in the room (virtual, live, or otherwise). For hardcore, technical procedural training, the processes are equally important. But even then, the variance comes from how participants learn. No two workshops should ever be delivered the same way if we want the learning to be effective.