Beyond Levels: Building Value Using Learning and Development Data
BY DR. KURT KRAIGER & DR. ERIC A. SURFACE | Originally Published in Training Industry Magazine
For many learning and development (L&D) professionals, training evaluation practices remain mired in the muck. What ends up being evaluated hasn’t changed much over the past two or three decades. We almost universally measure whether trainees liked the training, most of us measure whether they learned something, and beyond that, evaluation is a mix of “We’d like to do that” and “We’re not sure what to make of the data we get.” Perhaps more critically, in one recent national survey, nearly two-thirds of L&D professionals did not see their learning evaluation efforts as effective in meeting their organization’s business goals.
It was probably mystery writer Rita Mae Brown (and not Albert Einstein) who first defined insanity as doing the same thing over and over, but expecting different results. But the logic still holds: when planning a training evaluation, L&D professionals typically think first about what to evaluate, and only later (perhaps too late!) about how to use evaluation results to demonstrate the value of training. Training evaluation and learning analytics should be all about creating value throughout the L&D enterprise.
When consultants are approached to provide training evaluation help, the request is often framed as follows: “We are already evaluating Level X; we need help in evaluating Level X+1.” When asked why Level X+1 is valuable, the answer is invariably, “because we are already evaluating Level X!” It’s time to think beyond levels.
Got Value?
The goal of establishing value should always be the primary concern of L&D professionals. Just as good instructional designers should focus on L&D efforts that solve practical problems or address known competency gaps, evaluation should be planned based on knowledge of how to best measure and convey value to multiple organizational stakeholders.
THE GOAL OF ESTABLISHING VALUE SHOULD BE THE PRIMARY CONCERN OF L&D PROFESSIONALS.
L&D can be seen through the lens of transactions. Organizations (and learning enterprises) expend resources to acquire, develop and deliver learning content. Managers lose team member contributions when they send their staff to attend training. These days, many learners give up family time or sleep to “learn anytime, anywhere” online.
Learners, their supervisors and the organization benefit (gain resources) as a result of L&D activities. Evaluation data can quantify resources expended and gained. If we think of evaluation as making judgments about data – value judgments – then we must begin by asking not, “What to measure?” but, “What is of value to whom?” Value is created by using the data: data remains data unless it is acted on in intentional ways. Data does not create value; it’s how you use it that does.
Eye of the Beholder
Answering “of value to whom?” introduces the concept of value from a human capital perspective. How does the development of individual skills and competencies relate to the resolution of performance problems? How does this development affect the potential of the organization for effectiveness and growth?
Addressing “of value to whom?” recognizes that subjective judgments of value differ among stakeholders in the organization. Too often, traditional evaluation frameworks define value from only a single perspective. Perceptions of value may differ from the perspectives of learners, trainers/facilitators, instructional designers, training managers, CLOs and C-suite executives.
Each of these stakeholders in the L&D process plays a different role, holds different objectives for L&D, and should expect different data, tailored to them, for making judgments about L&D value. All stakeholders require relevant, timely data to perform their roles effectively, create value and improve outcomes that create more value. The data needed to support each role should be the focus of the evaluation, not measuring levels.
When we understand the ways in which stakeholders see their work and its contributions to organizational outcomes and the ways in which L&D affects those contributions, we can work backwards to design data collection to support the contributions of all stakeholders. The L&D professional is uniquely positioned to support and drive reflection on value added by multiple stakeholders.
Value has little to do with ROI. The value of something derives from its importance or worth in a transaction – what we expect to gain when we expend resources (money, time, effort, and so forth).
- Value must be understood from an individual perspective: You may be willing to pay $10,000 for a rare baseball card, but another fan may not. You may be willing to stand in the rain to watch your child play soccer, but another parent may not. Value can be defined in monetary and nonmonetary ways. Value can be created through L&D by decreasing time to effectiveness, increasing effectiveness, increasing efficiency, decreasing barriers and so on (see the sketch after this list).
- Value can also be created by contributing to a value chain. For example, a newly learned skill may not have a direct impact but may create value as it is honed or paired with the new skills of other team members.
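To make the transactional framing concrete, here is a minimal sketch of how “decreasing time to effectiveness” might be translated into a value estimate. The cohort size, weeks saved, weekly productivity gap and program cost are all hypothetical assumptions for illustration, not figures from any real program.

```python
# Hypothetical value estimate for a program that shortens time to
# effectiveness. Every figure below is an illustrative assumption.

cohort_size = 50                # new hires trained this year (assumed)
weeks_saved = 3                 # e.g., 12 weeks to proficiency cut to 9 (assumed)
weekly_productivity_gap = 900   # $ cost of one not-yet-proficient week (assumed)
program_cost = 60_000           # content, delivery and learner time (assumed)

# Resources gained: faster proficiency across the whole cohort.
gross_value = cohort_size * weeks_saved * weekly_productivity_gap

# Resources expended: what the organization gave up in the transaction.
net_value = gross_value - program_cost

print(f"Gross value of faster proficiency: ${gross_value:,}")
print(f"Net value after program cost:      ${net_value:,}")
```

The point is not the arithmetic but the framing: the same program creates different value for different parties, and nonmonetary gains (confidence, reduced frustration, fewer barriers) would not appear in this calculation at all.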
Power of Two
While different stakeholders contribute differently to the organization and interface with L&D in different ways, they all have the same two questions to answer about how evaluation data can impact their role in the L&D process: “How well did I do? How can I do it better?” But consider the types of questions that keep different stakeholders up at night, and how the data that holds the most value differs by stakeholder.
- The learner is concerned about their immediate performance and/or their potential for growth. A sales trainee might want to know, “Do I know everything about our products and services? How could I change my presentation to close more sales?”
- Trainers know that they help trainees acquire the skills and knowledge to do their jobs. They need data to determine how well they are doing and how to improve. They may wonder, “Are we giving our trainees enough practice and feedback to help them master job skills?”
- L&D managers or CLOs are responsible for obtaining resources and allocating them efficiently to maximize the performance and learning potential of the organization. Doing it well is about “moving the needle” as efficiently as possible. They may wonder about bang for the buck: “Could I have more impact on organizational processes if I move all my training online or mobile?”
- In the C-suite, executives are responsible for having a vision, setting a strategy and ensuring organizational capabilities are aligned and moving in the right direction. It’s a myth that all (or even most!) top executives want to see a return on investment estimated for L&D. L&D impacts the people part of an organization’s capability. Thus, what is of most value to top executives is data showing that the organization is hiring and keeping the right talent, preparing them effectively to do their jobs, and preparing the current and next generation of leaders to succeed in their roles and execute on business objectives. Just as the typical sales trainee will not be interested in knowing that the organization has developed a leadership bench for the future, C-suite occupants probably don’t need to know that 95 percent of trainees found the training enjoyable or interesting. One way to make this tailoring concrete is sketched below.
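One simple way to operationalize “different data tailored to different stakeholders” is a mapping from each role to the two universal questions. This is an illustrative structure with hypothetical metric names, not a prescribed taxonomy; the right entries would come out of conversations with each stakeholder.

```python
# Illustrative mapping of stakeholder roles to the evaluation data that
# might answer "How well did I do?" and "How can I do it better?"
# All metric names are hypothetical examples.

stakeholder_data = {
    "learner": {
        "how_well_did_i_do": ["product-knowledge score", "sales closed after training"],
        "how_can_i_do_better": ["coach feedback on sales presentation"],
    },
    "trainer": {
        "how_well_did_i_do": ["class-level learning gains", "mastery rates"],
        "how_can_i_do_better": ["practice and feedback time per skill"],
    },
    "l&d_manager_or_clo": {
        "how_well_did_i_do": ["cost per learner by delivery mode"],
        "how_can_i_do_better": ["impact of online vs. classroom delivery"],
    },
    "c_suite": {
        "how_well_did_i_do": ["time to productivity for new hires", "leadership bench readiness"],
        "how_can_i_do_better": ["talent gaps against business objectives"],
    },
}

# Each role sees only the data relevant to its own two questions.
for role, questions in stakeholder_data.items():
    print(f"{role}: {questions['how_well_did_i_do']}")
```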
Keep It Simple
There are many components of an effective evaluation system, including:
- Good, reliable measures of what is valued, or the levers that impact what is valued;
- A straightforward research design that isolates the impact of L&D (a minimal sketch follows this list);
- Clear ways to display data to enable informed decision-making; and
- Ongoing feedback loops confirming that the evaluation system, as designed, is generating data, informing decisions, adding value and moving the organization toward its goals.
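As one example of a straightforward design that isolates the impact of L&D, a pre/post comparison against an untrained comparison group can be computed in a few lines. The scores below are hypothetical placeholders; a real evaluation would use actual assessment data and, ideally, random assignment.

```python
# Minimal difference-in-differences sketch: compare the pre/post change
# for a trained group against an untrained comparison group.
# All scores are hypothetical placeholders for illustration.

def mean(xs):
    return sum(xs) / len(xs)

trained_pre  = [62, 58, 70, 65, 61]   # assessment scores before training (assumed)
trained_post = [78, 74, 85, 80, 77]   # same learners after training (assumed)
control_pre  = [63, 60, 68, 64, 62]   # comparison group, no training (assumed)
control_post = [66, 62, 70, 67, 64]   # same group at the second measurement (assumed)

trained_gain = mean(trained_post) - mean(trained_pre)
control_gain = mean(control_post) - mean(control_pre)

# The difference in gains estimates the training effect net of factors,
# such as on-the-job practice or time, that affected both groups.
training_effect = trained_gain - control_gain

print(f"Trained gain: {trained_gain:.1f}, control gain: {control_gain:.1f}")
print(f"Estimated training effect: {training_effect:.1f} points")
```

Nothing about this design is exotic; it is “straightforward” in exactly the sense the list above intends, and it produces a number a stakeholder can act on.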
Notice the adjectives we used: good, reliable, straightforward, clear and ongoing. We didn’t write: outstanding, beyond dispute, incredibly detailed, or magical and mystical. We find that when L&D professionals worry that they aren’t doing enough, or wonder if they could be doing more, they instinctively look first for “better” measures. We believe that when L&D professionals find that they are not having impact, their minds go to the “next” level (e.g., “If this reaction and learning data is not convincing anyone in my organization, I guess I should be measuring behaviors”).
Instead, we need to always return to the question of value and an understanding of our stakeholders. Remember, evaluation data only have value when a stakeholder uses them to create it. Who has asked for this evaluation? Who might be viewing our results? Who could do their jobs better if they knew the impact we were having?
By identifying stakeholders, by having conversations about how they add value to the organization, and by framing the questions of “How well did I do?” and “How can I do it better?” into a framework of value added by stakeholder, L&D professionals can conduct evaluations that build value through learning and development data. And “good, reliable, straightforward, clear and ongoing” might be just good enough.
Dr. Kurt Kraiger is a professor of psychology at Colorado State University. He is also a co-founder and principal psychologist for jobZology, a career development company. Dr. Eric A. Surface is the co-founder, president and principal scientist of ALPS Solutions. He is launching a new company, ALPS Insights, to provide learning and development analytics, insights and solutions via its SaaS platform.