Two Fundamental Questions L&D Stakeholders Should Answer to Improve Learning
By Dr. Eric A. Surface & Dr. Kurt Kraiger | Originally Published in Training Industry Magazine
Answers Without Relevant Questions Aren’t Useful
During recent conference presentations and webinars focused on analytics, big data and evaluation, we noticed audience members asking, “What questions should I be asking [and answering with evaluation data and analytics]?” Speakers typically answer this question in one of two ways: either by recommending collecting specific types or “levels” of data (e.g., business impact), as if all relevant questions for all learning and development (L&D) stakeholders (learners, co-workers, managers, C-suite executives, etc.) should be immediately identified and addressed; or by recommending collecting and tagging as much data as possible and letting the data analysts figure it out, as if the important questions will only emerge from analyzing all the data after the fact.
“The answers you have are only as good as the questions you’ve asked.” – Rebecca Trotter
Both perspectives can be made to work, but neither completely satisfies. Both advise by emphasizing answers (i.e., collect these specific answers or collect all the possible answers) rather than by actively asking questions to guide and tailor evaluation. Neither provides L&D stakeholders with much agency in the evaluation process. Neither identifies how L&D stakeholders can ask questions that, when answered through the evaluation process, provide the relevant, actionable and timely insights they need to do their jobs well and have an impact.
We need a better approach to address this question. One that empowers all L&D stakeholders to engage in and tailor the evaluation process. One that empowers all L&D stakeholders to act on relevant data to make a difference. L&D evaluation and analytics should focus on building value with data. We need to ask meaningful questions to guide evaluation planning and practice so each stakeholder gets relevant, actionable insights, at the right time, to act and create value.

We believe effective evaluation and value creation depend on asking and answering two fundamental types of questions. These questions align each stakeholder’s actions with their role (i.e., their opportunities to impact the process and/or its outcomes) and with the organization’s strategy to develop the capabilities needed to execute its objectives, strategy and mission.
“Successful people ask better questions, and as a result, they get better answers!” – Tony Robbins
Focus on Value by Asking Two Questions
In our 2017 Training Industry Magazine article, “Beyond levels: Building value using learning and development data,” we focused on how L&D professionals can establish value by first asking “what is of value to whom,” not “what level to measure.” This acknowledges that various L&D stakeholders have different roles and responsibilities, different opportunities to influence the L&D process and its outcomes, different subjective judgments of value, and different data needs. Value can only be created when stakeholders use relevant data to act and improve a process or outcome important to their role. What is measured should result from asking and answering questions relevant to each stakeholder’s role and area(s) of opportunity for impact in which the stakeholder has agency to act. We introduced two questions to focus evaluation on creating value from the perspective of any L&D stakeholder.
While different stakeholders contribute to the organization and L&D in different ways, they all can ask and answer the same two types of questions to guide evaluation practice, have impact and create value within their role in the L&D process:
- How well did I do?
- How can I do better?
The first question is an example of an effectiveness question. The second is an example of an improvement, or diagnostic, question. Together, these questions focus stakeholders’ efforts on the areas they can impact.
Effectiveness questions focus on an individual, group, or program’s standing on an important metric. Answers to effectiveness questions allow stakeholders to decide whether the desired standing on the metric has been met and whether “praise” or “change” (improvement) is needed. “Levels” are appealing because they can provide answers to effectiveness questions.
Did the trainer’s learners pass the certification exam? How well did the training transfer? What was the ROI for training? Effectiveness questions must match the stakeholder’s needs, and the answers provided must allow stakeholders to judge effectiveness. Importantly, answers to effectiveness questions do not provide guidance on how to improve. They only identify gaps between desired and actual outcomes, not solutions. For example, a training manager who learns that a high percentage of learners failed the certification exam (here, pass rate is a program effectiveness criterion) cannot diagnose and address the issue without additional data.
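To make the distinction concrete, here is a minimal sketch in Python of answering an effectiveness question with data. The per-learner results and the target pass rate are hypothetical; note that the output can flag a gap but cannot explain it.

```python
# Minimal sketch: answering an effectiveness question ("How well did I do?").
# The per-learner results and the target pass rate below are hypothetical.

exam_results = [  # (learner, passed_certification) - illustrative data
    ("A. Jones", True), ("B. Lee", False), ("C. Diaz", True),
    ("D. Patel", False), ("E. Kim", False),
]
TARGET_PASS_RATE = 0.80  # hypothetical goal set by the program manager

pass_rate = sum(passed for _, passed in exam_results) / len(exam_results)
gap = TARGET_PASS_RATE - pass_rate

print(f"Pass rate: {pass_rate:.0%} (target: {TARGET_PASS_RATE:.0%})")
if gap > 0:
    # The effectiveness answer identifies a gap but not its cause; diagnosing
    # the cause is the job of the improvement questions that follow.
    print(f"Gap of {gap:.0%}: ask improvement questions to find out why.")
else:
    print("Target met: reinforce what is working.")
```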
Improvement questions focus on identifying the factors that stakeholders can address to improve the outcome or close the gap identified by effectiveness questions. The aim is to find a lever to pull to “change” (improve) answers to effectiveness questions. Improvement questions are only asked if there is a need to improve. They can be general, prompting stakeholder reflection on what needs to change, or specific, providing data on diagnostic factors to guide changes or interventions.
For example, trainer instructional performance impacts trainee learning, and trainer performance can be improved through performance feedback and coaching. Often, diagnostic factors are not proactively measured and must be incorporated into the evaluation plan after an effectiveness issue is detected. Proactive organizations continuously monitor and improve their L&D functions by planning to measure the most likely diagnostic factors associated with learning and transfer.
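As an illustration of relating a proactively measured diagnostic factor to an outcome, here is a minimal sketch in Python. The data and the chosen factor (hours of guided practice) are hypothetical, and statistics.correlation requires Python 3.10 or later; the point is simply that the diagnostic data suggest a lever to pull, not a guaranteed fix.

```python
# Minimal sketch: exploring an improvement (diagnostic) question
# ("How can I do better?"). Pairs a hypothetical outcome (exam score) with one
# candidate diagnostic factor (hours of guided practice) for each learner.

from statistics import correlation, mean  # correlation requires Python 3.10+

records = [  # (practice_hours, exam_score) - illustrative data
    (2, 61), (3, 64), (5, 72), (6, 78), (8, 83), (9, 88),
]
hours = [h for h, _ in records]
scores = [s for _, s in records]

r = correlation(hours, scores)  # Pearson's r between the factor and the outcome
print(f"Mean exam score: {mean(scores):.1f}")
print(f"Practice-hours vs. score correlation: r = {r:.2f}")

# A strong positive relationship suggests guided practice is a lever worth
# pulling (e.g., schedule structured practice time); a weak one points toward
# other diagnostic factors, such as instructor feedback or content relevance.
```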
Applying the Two Questions: A Planning Checklist
- Ensure application aligns with your organization’s L&D strategy.
- Determine focus: on a specific need or on continuous monitoring and improvement.
- Decide the evaluation purpose, objective and scope within focus and strategy.
- Identify the relevant stakeholder group(s) and their roles and objectives.
- Identify the effectiveness questions and related improvement questions for each stakeholder group.
- Select the evaluation design to answer the questions.
- Determine what data are needed to answer the effectiveness question(s) and what data are diagnostic of effectiveness.
- Determine if data exist already and/or need to be collected to answer the questions; if data do not exist, determine collection feasibility.
- Decide how insights will be generated (analysis) and provided to stakeholders.
- Decide if you have sufficient resources to implement the plan.
“Better” Questions are Stakeholder-Focused
To support value creation, every evaluation should be tailored to the purposes, objectives, roles and needs of one or more stakeholder groups. This tailoring results in relevant and actionable questions aligned with a stakeholder group’s opportunity and ability to impact the process and its outcomes. The two questions approach works because the effectiveness/improvement questions are customized to the stakeholder group’s context and information needs every time.
The two questions can be general or specific and can take different forms and perspectives depending on the stakeholder’s role. For a trainer, whose role is to facilitate learning, they might become: How well did my learners perform on the practice certification exam? Which learners need additional preparation for the certification exam? How can I help the learners who need additional preparation?
Two stakeholders can share the same effectiveness/improvement questions but ask them from different perspectives; the trainer questions above could also be asked from a learner’s perspective. Multiple stakeholders can also use the same data to address similar effectiveness/improvement questions. For example, different stakeholders have different scopes of responsibility: learners focus on themselves, trainers focus on their learners, and program managers focus on all learners. The same data are simply aggregated to address the questions as you move up the hierarchy from learner to CLO. The two questions apply to any training context, any level of analysis (individual, group, program, or organizational) and any type of outcome (learning, transfer, performance, results, and value metrics).
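As a rough illustration of that aggregation, the sketch below rolls one hypothetical set of learner-level results up to the trainer, program, and organization-wide views. All names and numbers are illustrative.

```python
# Minimal sketch: one hypothetical set of learner-level results answering the
# same effectiveness question at different stakeholder levels (trainer,
# program, organization). Aggregation rolls the data up the hierarchy.

from collections import defaultdict

results = [  # (program, trainer, learner, passed) - illustrative data
    ("Sales Onboarding", "Rivera", "L1", True),
    ("Sales Onboarding", "Rivera", "L2", False),
    ("Sales Onboarding", "Chen", "L3", True),
    ("Safety Refresher", "Chen", "L4", True),
]

def pass_rate(rows):
    return sum(row[-1] for row in rows) / len(rows)

by_trainer, by_program = defaultdict(list), defaultdict(list)
for row in results:
    by_program[row[0]].append(row)
    by_trainer[row[1]].append(row)

print({t: f"{pass_rate(rows):.0%}" for t, rows in by_trainer.items()})  # trainer view
print({p: f"{pass_rate(rows):.0%}" for p, rows in by_program.items()})  # program view
print(f"All learners (CLO view): {pass_rate(results):.0%}")
```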
By carefully thinking through each stakeholder group’s evaluation needs, the two questions can be crafted to drive effective evaluation practices that generate relevant data and lead stakeholders to use those data to build value.
Two Types of Insights
In asking questions, we seek answers we hope will provide insights to inform our decisions and actions. There are effectiveness and improvement insights to match the two types of questions. Effectiveness insights involve deciding whether the desired amount/level/value of the metric specified in the effectiveness question (e.g., certification exam pass rate) was met. If not, related improvement questions must be asked. Related improvement insights identify the drivers (levers) of the effectiveness metric that can be modified to improve it.
“Value comes from people acting on relevant and timely data to make a difference.”
An insight is an informed judgment or interpretation, made by placing the data in context, that can be used to guide decisions or actions. The questions asked provide some of that context, but often more information is needed for a full interpretation. A stakeholder can reflect on the meaning of the data in their context, drawing on experience, to gain insights. Or the data can be compared to historical or normative data to create a context for interpretation.
For example, when interpreting the effectiveness of a class’s certification results, comparisons to the results of other classes, past and present, can provide the context needed to answer, “Did they learn?” Comparisons to norms, standards and goals, as well as to multiple sources (e.g., 360 feedback), are common. The historical relationships between diagnostic factors and outcomes can guide improvement efforts and can be used predictively to decide when intervention is needed.
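A minimal sketch of that kind of contextual comparison, using hypothetical current and historical pass rates, might look like the following; the comparison simply signals whether the current class is far enough below its historical norm to warrant improvement questions.

```python
# Minimal sketch: placing a class's effectiveness result in context by
# comparing it with historical cohorts. All pass rates below are hypothetical.

from statistics import mean, stdev

historical_pass_rates = [0.74, 0.78, 0.81, 0.76, 0.79, 0.83]  # prior cohorts
current_pass_rate = 0.68  # this class

hist_mean = mean(historical_pass_rates)
hist_sd = stdev(historical_pass_rates)
z = (current_pass_rate - hist_mean) / hist_sd  # distance from the historical norm

print(f"Current: {current_pass_rate:.0%}; historical mean: {hist_mean:.0%} (z = {z:+.1f})")
if z < -1:
    # Well below the historical norm: an effectiveness gap worth diagnosing.
    print("Below historical norms: trigger the related improvement questions.")
```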
Applying the Two Questions
The two questions can be applied to evaluations focused on a specific issue or need or on continuous monitoring and improvement, and at any scope from narrow (e.g., one learner) to broad (e.g., all programs). Establishing clear linkages among the strategy, focus, purpose, objective, scope, stakeholders, questions, evaluation design, and data collected leads to success. Only address questions for which a stakeholder group will receive the answers in a timeframe and format that allow them to make better decisions or to improve the process or outcomes. Our experience demonstrates that people participate in evaluation and view it positively when they believe the data are used to make changes that benefit them, their team members, or the organization.
Final Thoughts
When you start with relevant questions rather than levels, your evaluation practice is more targeted to your organization’s needs and will be more effective, because it was designed to generate insights your stakeholders can use to make better decisions and to act to improve the learning process, learning itself, or learning’s impact on performance and organizational outcomes. The real power of the two questions is in empowering stakeholders; value does not come from following “levels” or collecting data, it comes from people acting on relevant and timely data to make a difference.
Dr. Eric A. Surface is president and principal scientist at ALPS Solutions. He recently launched ALPS Insights to provide evaluation, analytics and insights via a new software platform, ALPS Ibex. Dr. Kurt Kraiger is a professor of psychology at Colorado State University. He is also a co-founder and principal psychologist for jobZology, a career development company.