Integrating Technology with the Kirkpatrick Model for Enhanced Training Evaluation

Author: Wendy Kayser Kirkpatrick 

 

Are you still wondering whether additional technology is the right solution for evaluating your training programs? The answer is less clear-cut than we might like. But the fact is this: whether you’re a tech enthusiast or have a more complicated relationship with technology, it’s a reality of today’s work environment. This guide will show you ways to boost your training evaluation through smart tech tools. Plus, we’ll highlight when it’s better to rely on the personal touch of human interaction.

The Kirkpatrick Model is the most widely used method for evaluating training effectiveness. Its elegantly simple four levels can measure any training, program, mission, or goal. When planning a program, start with Level 4 Results and work your way down. Make sure to design your evaluation plan alongside the program itself for the best results.

The Kirkpatrick Model

Level 4 (Results): The degree to which targeted outcomes occur as a result of the training and the support and accountability package

Level 3 (Behavior): The degree to which participants apply what they learned during training when they are back on the job

Level 2 (Learning): The degree to which participants acquire the intended knowledge, skills, attitude, confidence, and commitment based on their participation in the training

Level 1 (Reaction): The degree to which participants find the training favorable, engaging, and relevant to their jobs

Evaluating Program Results

Focus on defining and evaluating training program results only for your most important initiatives. For instance, develop a four-level evaluation plan for your onboarding program, leadership development initiative, and the launch of a key new product. Short, stand-alone programs and modules don’t need this level of detail unless they’re part of a bigger initiative.

Most organizations have a good idea of the desired outcomes and usually have systems to measure them. For example, companies have accounting reports that show sales by product line, costs by categories, and overall profitability. Typically, it’s not the training department’s job to gather this data. Instead, you’ll probably need to ask for permission to access it or get added to a distribution list.

Take a look at how many of these metrics relate to the program you want to evaluate and use them to your advantage. In larger organizations, technology has probably done most of the work for you. If you have a list of other outcomes, think about whether they might fit better at a lower Kirkpatrick level.

To complement the numeric data from company systems, consider setting up a way to gather stories and anecdotes that bring the data to life. This can be as simple as sending individual or group emails to training graduates asking them how they achieved their results. You can make it more personal with phone calls.
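If you want to automate that first round of outreach, a few lines of code will do it. Here is a minimal sketch using Python’s standard smtplib and email modules; the mail server, sender address, and graduate list are hypothetical placeholders to replace with your own.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical values -- substitute your own mail server and graduate roster.
SMTP_HOST = "smtp.example.com"
SENDER = "training@example.com"
GRADUATES = ["pat@example.com", "sam@example.com"]

with smtplib.SMTP(SMTP_HOST) as server:
    for graduate in GRADUATES:
        msg = EmailMessage()
        msg["Subject"] = "How did you achieve your results?"
        msg["From"] = SENDER
        msg["To"] = graduate
        msg.set_content(
            "Congratulations on your recent results! We'd love to hear, in a "
            "few sentences, how the training helped you get there."
        )
        server.send_message(msg)
```

Keep the message short and conversational; for your most important graduates, a personal phone call will still draw out the richest stories.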

Evaluating Level 3 Behavior

Level 3 is the most important Kirkpatrick level. If you focus on doing well at this level, your program will be successful. This is where you should invest your resources, both human and technological. Start by clearly defining what needs to be done on the job to achieve the desired outcomes. For example, after a sales training program, you may want the salespeople to create and follow a weekly call schedule, use a successful sales call outline, and follow up on all incoming leads within one business day.

 

Use technology to create a support and accountability system for the salespeople. This support system could include:

➡️ Automated reminders to help them create a weekly call schedule (see the sketch after this list)

➡️ Occasional microlearning modules covering parts of the successful sales call outline

➡️ Weekly video conference calls with the entire sales team to share success stories and brainstorm solutions to challenges
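To make the first item concrete, here is a minimal sketch of an automated weekly reminder. It assumes the third-party schedule package and a hypothetical send_message helper that you would wire to your email or chat platform.

```python
import time

import schedule  # third-party: pip install schedule

SALES_TEAM = ["alex@example.com", "jordan@example.com"]  # hypothetical roster

def send_message(recipient: str, text: str) -> None:
    """Placeholder: connect this to your email or chat platform."""
    print(f"To {recipient}: {text}")

def weekly_schedule_reminder() -> None:
    for rep in SALES_TEAM:
        send_message(rep, "Reminder: build and submit your call schedule for this week.")

# Nudge every rep first thing Monday morning.
schedule.every().monday.at("08:00").do(weekly_schedule_reminder)

while True:
    schedule.run_pending()
    time.sleep(60)  # check for due jobs once a minute
```

The same loop could just as easily deliver links to the microlearning modules or the weekly video-call invitation; the point is that the nudges arrive without anyone having to remember to send them.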

The support and accountability systems can work together seamlessly, so don’t worry about keeping them separate if one system can naturally do both. Just make sure you’ve sufficiently addressed both support and accountability in your key initiatives. Accountability might feel daunting for some, but it’s important not to shy away from it.

Here are some tech tools to foster a culture of accountability:

➡️ A portal where salespeople can submit their weekly call schedules and call reports after each call

➡️ Location tracking systems for outside sales reps

➡️ A customer relationship management (CRM) platform to manage incoming leads and response times (see the sketch after this list)

➡️ Weekly video meetings between each salesperson and the sales manager to review all call reports and sales stats
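As one example of how CRM data can drive accountability, here is a minimal sketch that flags leads whose first response took more than one business day. The lead records shown are hypothetical; in practice you would pull them from your CRM’s export or API.

```python
from datetime import datetime, timedelta

def business_days_between(start: datetime, end: datetime) -> int:
    """Count weekdays (Mon-Fri) after `start`, up to and including `end`."""
    days = 0
    current = start.date()
    while current < end.date():
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            days += 1
    return days

# Hypothetical lead records pulled from the CRM.
leads = [
    {"lead_id": "L-1001", "received": datetime(2024, 5, 3, 16, 0),
     "first_response": datetime(2024, 5, 6, 9, 30)},   # Friday -> Monday: 1 business day
    {"lead_id": "L-1002", "received": datetime(2024, 5, 6, 10, 0),
     "first_response": datetime(2024, 5, 8, 11, 0)},   # Monday -> Wednesday: 2 business days
]

# Flag any lead that was not followed up within one business day.
for lead in leads:
    elapsed = business_days_between(lead["received"], lead["first_response"])
    if elapsed > 1:
        print(f"{lead['lead_id']}: first response took {elapsed} business days")
```

A report like this hands the sales manager a short, specific list to act on in the weekly review, rather than a pile of raw call logs.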

Creating a support and accountability system is crucial. Simply training people and hoping for the best typically results in success less than one-third of the time. Active support systems are essential on the job to help individuals, and holding people accountable demonstrates what truly matters.

Technology plays a vital role in your Level 3 strategy, but this is where it’s equally important to invest in human resources. The sales manager should monitor sales reps regularly to make sure they’ve created a weekly plan, are making their sales calls, and are responding to leads promptly. Technology provides visibility, enabling managers to spot and address issues quickly. This is why successful implementation at Level 3 leads to positive outcomes at Level 4. Instead of just measuring and reporting what happened, effective systems empower managers to keep performance on track, identifying and correcting issues that might impact initiative outcomes early on.

Evaluating Level 2 Learning and Level 1 Reaction

Levels 2 and 1 are places to save resources, not spend them. During and after training, confirm that individuals understand and can perform the critical behaviors needed for success on the job. Gather the metrics necessary to demonstrate this. If there are other data points important to the training department or stakeholders, gather those too. However, aim to minimize spending on evaluating Levels 2 and 1, even if you’re using technology.

A cost-effective technique that can still yield robust data is to focus on formative evaluation methods, which are conducted during the training itself. For instructor-led training, this is straightforward. Instructors can ask participants about their experiences and assess their understanding of activities throughout the program. They can track this information manually and include it in a post-program summary, or use an app during the session for participants to enter their responses. Using an app allows for quick data collection from all participants, saving time during and after the program when reporting the data.

For e-learning programs, include questions throughout the sessions to capture participant sentiment and knowledge levels. This minimizes the need for lengthy post-program surveys. For existing programs, review the activities, and identify which ones can provide useful data when you generate a report.

An electronic post-program survey is the most widely used evaluation method. It’s fast, easy, and cost-effective to create and disseminate, and often allows for automated tabulation. However, be cautious: surveys can be overused. Before deploying a survey, consider whether each question will provide valuable data and whether it justifies the time required to complete the survey and analyze the responses. Only include questions that are directly relevant to the program’s goals.
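When your survey platform doesn’t tabulate results for you, a few lines of code can. Here is a minimal sketch, assuming a CSV export named post_program_survey.csv with one row per respondent and Likert answers recorded as 1 through 5; the 3.5 flag threshold is an arbitrary placeholder to set for yourself.

```python
import csv
from collections import defaultdict
from statistics import mean

# Collect numeric answers per question from a hypothetical survey export.
scores = defaultdict(list)
with open("post_program_survey.csv", newline="") as f:
    for row in csv.DictReader(f):
        for question, answer in row.items():
            if answer.strip().isdigit():  # skip open-ended comment columns
                scores[question].append(int(answer))

# Average score per question; flag anything below an assumed 3.5 threshold.
for question, values in sorted(scores.items()):
    avg = mean(values)
    flag = "  <-- review" if avg < 3.5 else ""
    print(f"{question}: {avg:.2f} (n={len(values)}){flag}")
```

Items flagged this way tell you exactly where to spend the human attention described below.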

Get creative with your surveys. Go beyond collecting Levels 1 and 2 data. Ask employees about their action plans following the training, and what outcomes they expect. Include open-ended questions and offer participants the option to share more later if they’re interested. This approach not only saves time and resources but also encourages eager training graduates to volunteer their stories, which will be invaluable for substantiating your final report data.

Surveys are also great for pinpointing areas that need improvement. If a specific course or item receives low scores and negative feedback, this is your cue to allocate some human resources. You might also consider organizing a focus group or reaching out to past participants for more detailed insights through personal calls.

Add a Human Touch

Using technology to gather training evaluation data is essential and powerful. With the constant evolution of available tools, aim to humanize the technology you use to enhance the user experience. The best technology should feel like a friendly conversation. Strive for that level of engagement. The next time you’re surveyed by companies you support, take note of what you appreciate and what you don’t. Keep a record of the surveys that made you feel valued and understood, and maybe even made you laugh, and notice which kinds of messages resonate with you most.

Use a warm, friendly touch in your technology to bolster your initiative and its success. However, be careful not to mimic a human being too closely. We’ve all heard someone get frustrated with an automated voice once they realize it’s not a real person. Don’t be that company. Make it clear when technology is speeding up responses or handling broader tasks efficiently.

 

Lastly, remember that technology is the means and not the end. While new tools and apps can be exciting, don’t lose sight of their purpose. The goal is always the same: prepare your learners to excel in their job skills and maximize organizational results. Let this guide your decisions on where to invest in both technology and human resources.

 
