Want More Value from Evaluation? AIM to Answer Two Questions
Thursday, February 6, 9:00AM – 10:00AM
While almost all learning is evaluated, fewer than half of organizations report that evaluation helps meet their learning and business goals. Data alone create no value. People acting on meaningful data within the L&D process create value. The Alignment and Impact Model (AIM) focuses evaluation on helping all stakeholders create value. AIM incorporates purpose, process, stakeholder roles, and two guiding questions for evaluation design, and it focuses on maximizing learning, transfer, and impact. Examples demonstrate the fundamentals of AIM and how to implement and use it.
Applications on the Job:
- Learn the fundamentals of AIM and discover whether your L&D evaluation practice is creating value for your organization.
- Apply the AIM model through key stakeholder analysis and two fundamental questions to improve L&D evaluation.
- Leverage technology to implement AIM, streamline L&D evaluation, and save your organization time and money.
Dr. Eric A. Surface
Founder, CEO and Principal Scientist
Dr. Eric A. Surface has a passion for helping clients use evidence-based insights to improve learning, capability, performance and impact in their organizations. Eric leverages science, measurement and analysis methods, technology and over two decades of experience to help individuals, teams and organizations excel. Eric has led many consulting engagements for corporate, government, non-profit and military clients focused on analyzing data and delivering insights to help clients achieve their goals and accomplish their missions. To help more people improve their impact and create value in their organization, Eric envisioned ALPS Ibex™ as an expert system to help all stakeholders engage with, gain insights from and act on learning, performance and business data within their role.

He has spoken at numerous conferences and co-authored articles appearing in peer-reviewed journals, such as Journal of Applied Psychology, Journal of Management, Human Performance, Human Resource Management Review and Journal of Business and Psychology, and in practitioner publications, such as Training Industry Magazine. Eric is an award-winning industrial/organizational psychologist, an honorary member of US Army Special Operations Forces, and a Fellow of the American Psychological Association, the Society for Industrial/Organizational Psychology and the Society for Military Psychology. He is the current president of the Society for Military Psychology.

He earned his PhD in Industrial/Organizational (I/O) Psychology at North Carolina State University, his MA in I/O Psychology at East Carolina University, and his BA in Psychology at Wake Forest University. He was an Army Research Institute for the Social and Behavioral Sciences (ARI) Consortium Research Fellow and Post-Doctoral Fellow.
During recent conference presentations and webinars focused on analytics, big data and evaluation, we noticed audience members asking, “What questions should I be asking [and answering with evaluation data and analytics]?” Speakers typically answer in one of two ways: either by recommending collecting specific types or “levels” of data, as if all relevant questions for all learning and development stakeholders could be identified and addressed up front; or by recommending collecting and tagging as much data as possible and letting the data analysts figure it out, as if the important questions will only emerge from analyzing all the data after the fact.
Training evaluation should provide insights not only about the effectiveness of training but also about how it can be improved for learners and organizations. In this context, the term “insights” implies a deep understanding of learning, the training process and its outcomes as well as evaluation procedures – designing, measuring, collecting, integrating and analyzing data from both a formative and summative perspective.
For many learning and development (L&D) professionals, training evaluation practices remain mired in the muck. What ends up being evaluated hasn’t changed much over the past two or three decades. We almost universally measure whether trainees liked the training, most of us measure whether they learned something, and beyond that, evaluation is a mix of “We’d like to do that” and “We’re not sure what to make of the data we get.” Perhaps more critically, in one recent national survey, nearly two-thirds of L&D professionals did not see their learning evaluation efforts as effective in meeting their organization’s business goals.
Anyone who has participated in a training event is familiar with open-ended survey items like this one: “Please provide any additional comments you have about the training program you just completed.” After getting into the rhythm of clicking bubble after bubble in response to closed-ended survey items, many trainees come to a roadblock when provided with a blank box of space and asked to provide feedback in their own words.
ALPS Solutions engaged in a series of studies to understand why instructors were having such a large impact on student outcomes in the Special Operations Forces (SOF) community. If training is supposed to be a standardized experience, then the instructor to whom a student is assigned should not cause the experience to vary across classes. The goal of this research was to identify and reduce that variability to create a more standardized and positive experience.
ALPS Solutions has worked closely with training programs within the Special Operations Forces (SOF) community to evaluate training effectiveness and identify areas where interventions by program administrators and instructors can have a positive impact on learner- and class-level outcomes.