Heard of Blended Learning? What About Blended Evaluation?
As a training professional, you’ve likely heard of blended learning. While traditional training was one-dimensional, blended learning revolutionized our industry.
In much the same way, traditional evaluation is one-dimensional: it provides only a partial truth and is generally ineffective. It typically comes only in written form, through smile sheets for Level 1, testing for Level 2 and 90-day surveys for Level 3. No evaluation typically occurs at Level 4. Sadly, the methodology behind these evaluation strategies is based more on tradition than on purpose.
These deficiencies have created a need for a Blended Evaluation approach. We define blended evaluation as a methodology in which data are collected from multiple sources using multiple methods, in a hybrid fashion that considers two or more Kirkpatrick levels, for the purpose of monitoring, adjusting and reporting findings to maximize program participant performance and subsequent organizational results.
First, blended evaluation is purposeful: it is based on the information required to make good training decisions (usefulness) and to provide the data stakeholders require in order to fund the program (credibility). Blended evaluation is also deliberate, in that every data point is collected for a reason. This methodology is primarily for, but not exclusive to, training.
You can plan for blended, multi-purpose evaluation to occur during program execution. Begin with the question, “What evaluation needs exist?” Maybe you want to improve training programs and post-training performance support, enhance learning and performance, or demonstrate the value of the program to stakeholders. Consider these and other possibilities, and then be deliberate in choosing how to collect, analyze, use and report relevant quantitative and qualitative data. Be sure that you’re not using up all of your training evaluation resources on Levels 1 and 2.
Here are some examples to compare and contrast traditional and blended evaluation approaches.
Outdated Traditional Evaluation Example
Charlene works for healthcare company ABC, where she is in charge of administering program evaluations. Her team uses a "tried and true" smile sheet for attendees immediately after all programs, knowledge and competency testing at the end of each program, and 60- and 90-day post-course surveys sent to training graduates and their managers for all programs. Charlene gathers these electronic data and sends monthly reports to her team and stakeholders.
New Kirkpatrick Blended Evaluation Plan Example
Christine works for healthcare company XYZ and has conducted targeted interviews with stakeholders and line managers. She has determined that some of the training is dated. Worse yet, some of the patient-critical on-the-job behaviors are not being performed to standard. She is now working with her L&D colleagues and key subject matter experts to make the formal training more relevant to current patient needs. She is also working with the same group to develop and implement a stronger performance support package to increase on-the-job application, including defined roles and responsibilities for trainers, supervisors and the training graduates themselves.