Qualitatively Different Measurement for Training Reactions


BY REANNA P. HARMAN, Ph.D. | Originally Published in Training Industry Magazine

Training evaluation should provide insights not only about the effectiveness of training but also about how it can be improved for learners and organizations. In this context, the term “insights” implies a deep understanding of learning, of the training process and its outcomes, and of evaluation procedures – designing, measuring, collecting, integrating and analyzing data from both formative and summative perspectives.

A recent ATD study on evaluation practices found that 88 percent of organizations measure reactions. Although the study does not report a breakdown, it is safe to assume that most reactions data are quantitative, gathered from evaluation surveys, and that a smaller share come from qualitative methods. Quantitative measurement is a critical part of effective training evaluation, but there is untapped value in qualitative measurement – especially for developing a deep understanding of the training experience and generating actionable insights.

What Qualitative Methods Can Be Used for Training Evaluation?

Open-ended questions, focus groups and interviews are three qualitative methods that can be used to dig deeper and generate actionable insights. As training professionals, most of us are familiar with the inclusion of open-ended questions on training reaction surveys as a method for eliciting qualitative feedback. Comments from trainees add richness and context to the more structured and standardized closed-ended items, but they are also skipped more often and tend to elicit feedback from only the most dissatisfied trainees. And while comments provide the opportunity to dig deeper into trainee reactions than quantitative responses do, the feedback flows only one way (from trainee to evaluator), with no opportunity for follow-up or further probing to gain deeper understanding.

Focus groups and interviews provide an even better opportunity to engage in a dialogue with trainees about their experiences. These sessions can be conducted while training is still ongoing; at the end of training; or even retrospectively, after training is complete. These sessions are particularly useful if you are piloting a new program, if changes are being or have been made to training, if other training evaluation data suggest a deeper investigation may be beneficial, or if you are investigating learning transfer and the alignment of training with work requirements.

How to Maximize Your Investment in Qualitative Measurement

Qualitative measurement has a reputation for being costlier and more time-consuming – in terms of both data collection and analysis – than quantitative measurement. But when qualitative measurement is done well, the return is worth the investment. Here are some tips for adding or integrating qualitative tools into your training reactions measurement plan.

For collecting feedback through open-ended items on reactions surveys:

  • Write open-ended items that are specific and directly target the information you are seeking, rather than vague, catch-all items (e.g., “What one change would most improve the hands-on exercises?” rather than “Any other comments?”).
  • Consider changing open-ended items more frequently than closed-ended items to target hot-button issues or focus on specific trainee groups or situations (e.g., trainees participating in piloting a new course).

For conducting focus groups and interviews:

  • Develop and implement a sampling plan from the population of trainees to ensure participation from all relevant sub-groups (e.g., representation across different types of classes and trainees with different backgrounds); a minimal sampling sketch follows this list.
  • Develop protocols and scripts with specific questions and follow-up probes.
  • Train moderators and scribes to execute the protocols and scripts as designed.
  • Record (via either audio or detailed notes) focus groups and interviews. Consider transcribing audio recordings for further analysis.
  • Create an atmosphere that supports open and honest feedback. Trainees must feel comfortable providing feedback, so it is often helpful to have a third-party organization conduct the sessions and analyze the feedback.
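
To make the sampling-plan bullet concrete, here is a minimal sketch in Python. It assumes trainee records live in a pandas DataFrame; the column names (course_type, experience) and the per-stratum sample size are hypothetical choices for illustration, not a prescribed design. The point is simply to draw participants from every relevant sub-group rather than relying on whoever volunteers first.

```python
import pandas as pd

# Hypothetical trainee roster; the column names and values are
# illustrative assumptions, not a real data schema.
trainees = pd.DataFrame({
    "trainee_id": range(1, 13),
    "course_type": ["instructor-led"] * 6 + ["virtual"] * 6,
    "experience": ["new", "new", "veteran", "veteran", "new", "veteran"] * 2,
})

# Stratify on the sub-groups that matter for the evaluation, then draw a
# fixed number from each stratum so every group is represented.
sample = trainees.groupby(["course_type", "experience"]).sample(
    n=2,              # participants per stratum; use frac= for proportional draws
    random_state=42,  # fixed seed so the invitation list is reproducible
)
print(sample.sort_values(["course_type", "experience"]))
```

A stratified draw like this guards against, say, a focus group filled entirely with veteran trainees from instructor-led classes, whose feedback would not generalize to the full trainee population.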

For analyzing and reporting qualitative data from open-ended items, focus groups or interviews:

  • Consider a combination of text analytics tools and human coding to develop an analysis strategy that aligns with your goals (a minimal coding sketch follows this list).
  • Report comments or feedback from focus groups and interviews alongside quantitative survey results to show how the comments allow further exploration of the quantitative trends. Highlight areas of consistency (i.e., agreement with the quantitative data) as well as edge cases where the qualitative feedback provides an alternative perspective.
  • Protect the privacy of respondents and follow best practices when reporting verbatim comments.
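
To illustrate the first bullet, here is a minimal sketch pairing simple keyword-based text analytics with the kind of theme dictionary a human coder would develop. The comments, ratings, theme names and keywords below are all hypothetical; a real coding scheme should be built by human coders from a sample of actual comments and refined as new themes emerge.

```python
import pandas as pd

# Hypothetical open-ended comments paired with an overall rating;
# all data below are invented for illustration.
responses = pd.DataFrame({
    "rating": [2, 5, 3, 4, 1],
    "comment": [
        "The pace was too fast and the examples felt outdated.",
        "Great instructor; the hands-on labs matched my daily work.",
        "More practice time would help before the final exercise.",
        "Good content, though the pace dragged in the afternoon.",
        "Materials were outdated and hard to follow.",
    ],
})

# First-pass theme dictionary; in practice these keywords come from
# human coding of a sample of comments.
themes = {
    "pacing": ["pace", "fast", "slow", "dragged"],
    "materials": ["outdated", "materials", "examples"],
    "practice": ["hands-on", "practice", "labs"],
}

# Flag each comment for each theme with a simple keyword match.
for theme, keywords in themes.items():
    responses[theme] = responses["comment"].str.lower().str.contains(
        "|".join(keywords)
    )

# Report each theme's frequency next to the mean rating of the comments
# that mention it, so qualitative themes sit alongside quantitative trends.
for theme in themes:
    flagged = responses[responses[theme]]
    print(f"{theme}: {len(flagged)} comments, mean rating {flagged['rating'].mean():.1f}")
```

Reporting each theme next to the corresponding quantitative trend, as the second bullet suggests, shows where the comments confirm the ratings and where they surface an alternative perspective.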

Adding qualitative methods to the training evaluation toolbox gives evaluators a deeper understanding of the training experience, provides a rich source of data, and communicates to trainees that their input is valued and their voices are being heard.

Dr. Reanna Poncheri Harman is the vice president for practice at ALPS Insights, where she focuses on training evaluation, learning transfer, capability development and needs assessment, including using quantitative and qualitative measurement to help her clients gain insights and achieve their objectives.
