Two Fundamental Questions L&D Stakeholders Should Answer to Improve Learning
By Dr. Eric A. Surface & Dr. Kurt Kraiger | Originally Published in Training Industry Magazine
Answers Without Relevant Questions Aren’t Useful
During recent conference presentations and webinars focused on analytics, big data and evaluation, we noticed audience members asking, “What questions should I be asking [and answering with evaluation data and analytics]?” Speakers typically answer these questions in one of two ways: either by recommending collecting specific types or “levels” of data (e.g., business impact), as if all relevant questions for all learning and development (L&D) stakeholders (learners, co-workers, managers, C-suite executives, etc.) should be immediately identified and addressed; or by recommending collecting and tagging as much data as possible so that data analysts can figure it out, as if the important questions will only emerge from analyzing all the data after the fact.
“The answers you have are only as good as the questions you’ve asked.” – Rebecca Trotter
Both perspectives can be made to work, but neither completely satisfies. Both advise by emphasizing answers (i.e., collect these specific answers or collect all the possible answers) over proactively asking questions to guide and tailor evaluation. Neither provides L&D stakeholders with much agency in the evaluation process. Neither identifies how L&D stakeholders can ask questions that, when answered through the evaluation process, provide the relevant, actionable and timely insights they need to do their jobs well and have an impact.
We need a better approach to address the question. One that empowers all L&D stakeholders to engage in and tailor the evaluation process. One that empowers all L&D stakeholders to act on relevant data to make a difference. L&D evaluation and analytics should focus on building value with data. We need to ask meaningful questions to guide evaluation planning and practice so each stakeholder gets relevant, actionable insights, at the right time, to act and create value. We believe effective evaluation and value creation depend on asking and answering two fundamental types of questions. These questions align each stakeholder’s actions with their roles (i.e., their opportunities to impact the process and/or its outcomes) and with the organization’s strategy to develop the capabilities needed to execute its objectives, strategy and mission.
“Successful people ask better questions, and as a result, they get better answers!” – Tony Robbins
Focus on Value by Asking Two Questions
In our 2017 Training Industry Magazine article, “Beyond levels: Building value using learning and development data,” we focused on how L&D professionals can establish value by first asking “what is of value to whom,” not “what level to measure.” This acknowledges that various L&D stakeholders have different roles and responsibilities, different opportunities to influence the L&D process and its outcomes, different subjective judgments of value, and different data needs. Value can only be created when stakeholders use relevant data to act and improve a process or outcome important to their role. What is measured should result from asking and answering questions relevant to each stakeholder’s role and area(s) of opportunity for impact in which the stakeholder has agency to act. We introduced two questions to focus evaluation on creating value from the perspective of any L&D stakeholder.
While different stakeholders contribute to the organization and L&D in different ways, they all can ask and answer the same two types of questions to guide evaluation practice, have impact and create value within their role in the L&D process:
- How well did I do?
- How can I do better?
The first question is an example of an effectiveness question. The second is an example of an improvement or diagnostic question. Together, these questions focus stakeholders’ efforts on the areas they impact.
Effectiveness questions focus on an individual, group, or program’s standing on an important metric. Answers to effectiveness questions allow stakeholders to decide if desired standing on the metric has been met and if “praise” or “change” (improvement) is needed. “Levels” are appealing because they can provide answers to effectiveness questions.
Did the trainer’s learners pass the certification exam? How well did the training transfer? What was the ROI for training? Effectiveness questions must match the stakeholder’s needs, and the answers provided must allow stakeholders to judge effectiveness. Importantly, answers to effectiveness questions do not provide guidance on how to improve. They only identify gaps between desired and actual outcomes, not solutions. For example, a training manager who learns that a high percentage of learners failed the certification exam (here, pass rate is a program effectiveness criterion) cannot diagnose and address the issue without additional data.
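To make the distinction concrete, here is a minimal sketch in Python of answering an effectiveness question from exam records. The column names, data and 90% target are hypothetical illustrations, not part of the authors’ approach.

```python
# A minimal sketch of answering an effectiveness question ("How well did we do?")
# from certification exam records. The column names (class_id, passed) and the
# 90% target are hypothetical examples, not part of the authors' approach.
import pandas as pd

records = pd.DataFrame({
    "class_id": ["A", "A", "A", "B", "B", "B"],
    "passed":   [True, False, True, True, True, False],
})

TARGET_PASS_RATE = 0.90  # the desired standing on the effectiveness metric

pass_rate_by_class = records.groupby("class_id")["passed"].mean()
for class_id, rate in pass_rate_by_class.items():
    gap = TARGET_PASS_RATE - rate
    status = "met target" if gap <= 0 else f"gap of {gap:.0%}"
    print(f"Class {class_id}: pass rate {rate:.0%} ({status})")

# This answers "How well did we do?" but says nothing about why a gap exists;
# that is the job of improvement (diagnostic) questions and additional data.
```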
Improvement questions focus on identifying the factors that stakeholders can address to improve the outcome or close the gap identified by effectiveness questions. The aim is to find a lever to pull to “change” (improve) answers to effectiveness questions. Improvement questions are only asked if there is a need to improve. They can be general to prompt stakeholder reflection on what needs to change, or specific to provide data on diagnostics factor to guide changes or interventions.
For example, trainer instructional performance impacts trainee learning, and trainer performance can be improved through performance feedback and coaching. Often, diagnostic factors are not proactively measured and must be incorporated into the evaluation plan after an effectiveness issue is detected. Proactive organizations continuously monitor and improve their L&D functions by planning to measure the most likely diagnostic factors associated with learning and transfer.
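Continuing the illustration, and only as a sketch of how one might probe an improvement question with data (the diagnostic factors and values below are hypothetical), a simple correlation scan can suggest which lever to examine first:

```python
# An illustrative sketch (not the authors' prescribed method) of probing an
# improvement question: which diagnostic factor moves with the effectiveness
# metric? The factor names and values below are hypothetical.
import pandas as pd

class_data = pd.DataFrame({
    "pass_rate":          [0.95, 0.70, 0.88, 0.60, 0.92],
    "instructor_clarity": [4.6, 3.1, 4.2, 2.8, 4.5],   # avg learner rating, 1-5 scale
    "practice_time_hrs":  [10, 9, 11, 10, 12],
})

# Correlating each candidate lever with the outcome suggests where to look first.
correlations = class_data.corr()["pass_rate"].drop("pass_rate")
print(correlations.sort_values(ascending=False))

# A strong association (here, instructor_clarity) points to a lever worth pulling,
# such as feedback and coaching for instructors, before the next class runs.
```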
Applying the Two Questions: A Checklist
- Ensure application aligns with your organization’s L&D strategy.
- Determine focus: on a specific need or on continuous monitoring and improvement.
- Decide the evaluation purpose, objective and scope within focus and strategy.
- Identify the relevant stakeholder group(s) and their roles and objectives.
- Identify the effectiveness questions and related improvement questions for each stakeholder group.
- Select the evaluation design to answer the questions.
- Determine what data are needed to answer the effectiveness question(s) and what data are diagnostic of effectiveness (see the sketch after this list).
- Determine if data exist already and/or need to be collected to answer the questions; if data do not exist, determine collection feasibility.
- Decide how insights will be generated (analysis) and provided to stakeholders.
- Decide if you have sufficient resources to implement the plan.
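As an illustration only, the stakeholder, question and data steps in this checklist can be captured in a simple plan structure before any data are collected. Everything named in the sketch below is a hypothetical example, not a prescribed template.

```python
# An illustrative sketch of the stakeholder, question and data steps above
# captured as a simple plan structure. The stakeholders, questions, metrics
# and data sources are hypothetical examples, not a prescribed template.
evaluation_plan = {
    "trainer": {
        "effectiveness_question": "How well did my learners perform on the certification exam?",
        "improvement_question": "Which learners need additional preparation, and on what topics?",
        "effectiveness_data": ["exam scores by learner"],      # assumed to exist already
        "diagnostic_data": ["practice quiz scores by topic"],  # assumed to require new collection
    },
    "program manager": {
        "effectiveness_question": "What share of classes met the target pass rate?",
        "improvement_question": "Which diagnostic factors separate low- and high-performing classes?",
        "effectiveness_data": ["pass rate by class"],
        "diagnostic_data": ["instructor ratings", "hours of practice per class"],
    },
}

for stakeholder, plan in evaluation_plan.items():
    print(f"{stakeholder}: {plan['effectiveness_question']}")
```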
“Better” Questions are Stakeholder-Focused
To support value creation, every evaluation should be tailored to the purposes, objectives, roles and needs of one or more stakeholder groups. This tailoring results in relevant and actionable questions aligned with a stakeholder group’s opportunity and ability to impact the process and its outcomes. The two-questions approach works because the effectiveness and improvement questions are customized to the stakeholder group’s context and information needs every time.
The two questions can be general or specific and can take different forms and perspectives. For example, from the perspective of a trainer’s role in facilitating learning: How well did my learners perform on the practice certification exam? Which learners need additional preparation for the certification exam? How can I help the learners who need additional preparation?
Two stakeholders can share the same effectiveness and improvement questions but from different perspectives; the trainer questions above could also be asked from a learner’s perspective. Multiple stakeholders can use the same data to address similar effectiveness and improvement questions. For example, different stakeholders have different scopes of responsibility: learners focus on themselves, trainers focus on their learners, and program managers focus on all learners. The same data are aggregated to address the questions as you move up the hierarchy from learner to CLO. The two questions apply to any training context, level of analysis (individual, group, program or organizational) and type of outcome (learning, transfer, performance, results and value metrics).
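As a sketch of this aggregation, the example below shows one set of records answering the same question at the learner, class and program levels. The schema is an assumption made for the illustration, not a required design.

```python
# A minimal sketch of the same exam records serving stakeholders at different
# levels of aggregation. The schema (program, class_id, learner, passed) is a
# hypothetical example.
import pandas as pd

scores = pd.DataFrame({
    "program":  ["Safety", "Safety", "Safety", "Sales"],
    "class_id": ["S1", "S1", "S2", "X1"],
    "learner":  ["Ana", "Ben", "Cam", "Dee"],
    "passed":   [True, False, True, True],
})

# Learner perspective: "How well did I do?"
print(scores.set_index("learner")["passed"])

# Trainer perspective: pass rate for each of my classes.
print(scores.groupby("class_id")["passed"].mean())

# Program manager / CLO perspective: pass rate rolled up across classes.
print(scores.groupby("program")["passed"].mean())
```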
By carefully thinking through each stakeholder group’s evaluation needs, the two questions can be crafted to drive effective evaluation practices that generate relevant data and lead stakeholders to use it to build value.
Two Types of Insights
In asking questions, we seek answers we hope will provide insights to inform our decisions and actions. There are effectiveness and improvement insights to match the questions. Effectiveness insights involve deciding whether the desired level or value of the metric specified in the effectiveness question (e.g., certification exam pass rate) was achieved. If not, related improvement questions must be asked. Related improvement insights identify the drivers (levers) of the effectiveness metric that can be modified to improve effectiveness.
Value comes from people acting on relevant and timely data to make a difference.
An insight is an informed judgment or interpretation of the data, made by placing the data in context, that can be used to guide decisions or actions. The questions asked provide context, but often more information is needed for a full interpretation. A stakeholder can reflect on what the data mean in their context, drawing on experience, to gain insights. Or the data can be compared to historical or normative data to create a context for interpretation.
For example, when interpreting the effectiveness of a class’s certification results, comparisons to the results of other classes, past and present, can provide context to answer, “Did they learn?” Comparisons to norms, standards and goals, as well as to multiple sources (e.g., 360-degree feedback), are common. Historical relationships between diagnostic factors and outcomes can guide improvement efforts and be used predictively to decide when intervention is needed.
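As one illustration of building interpretive context (a sketch under assumed data, not a required analysis), the comparison below places a current class’s pass rate against the distribution of past classes:

```python
# An illustrative sketch of creating context by comparing a current class's
# result to historical classes. The data and the -2 standard deviation
# threshold are hypothetical choices.
historical_pass_rates = [0.88, 0.91, 0.84, 0.90, 0.87, 0.93, 0.89]
current_pass_rate = 0.72

n = len(historical_pass_rates)
mean = sum(historical_pass_rates) / n
std = (sum((r - mean) ** 2 for r in historical_pass_rates) / (n - 1)) ** 0.5

z = (current_pass_rate - mean) / std
print(f"Current class is {z:.1f} standard deviations from the historical mean of {mean:.0%}.")

# A result far below the norm signals that improvement questions should be asked
# and diagnostic data examined before the next offering.
if z < -2:
    print("Flag for review: ask improvement questions and check the diagnostic factors.")
```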
Applying the Two Questions
The two questions can be applied to evaluations focused on a specific issue or need or on continuous monitoring and improvement, and to evaluations of any scope, from narrow (e.g., one learner) to broad (e.g., all programs). Establishing clear linkages among the strategy, focus, purpose, objective, scope, stakeholders, questions, evaluation design and data collected leads to success. Only address questions for which a stakeholder group will receive the answers in a timeframe and format that allow them to make better decisions or to improve the process or outcomes. Our experience demonstrates that people participate in evaluation and view it positively when they believe the data are used to make changes that benefit them, their team members or the organization.
Final Thoughts
When you start with relevant questions rather than levels, your evaluation practice is more targeted to your organization’s needs and more effective, because it was designed to generate insights your stakeholders can use to make better decisions and to act to improve the learning process, learning itself, or learning’s impact on performance and organizational outcomes. The real power of the two questions is in empowering stakeholders. Value does not come from following “levels” or collecting data; it comes from people acting on relevant and timely data to make a difference.
Dr. Eric A. Surface is president and principal scientist at ALPS Solutions. He recently launched ALPS Insights to provide evaluation, analytics and insights via a new software platform, ALPS Ibex. Dr. Kurt Kraiger is a professor of psychology at Colorado State University. He is also a co-founder and principal psychologist for jobZology, a career development company.