Session Details

810: Mastering Skill Assessment: Adding Rigor to Your Evaluation

November 6, 2026 10:00 am - 11:00 am

Research on skill assessment highlights several problems, such as overreliance on learner satisfaction and knowledge checks or limited focus on soft skills, that can limit the effectiveness and accuracy of these evaluations. These issues span both the assessment process itself and the broader implications for learning and skill development. At the same time, some organizations are beginning to use AI tools to support test development and skills assessment design. While these tools can increase speed and generate useful starting points, their limitations around accuracy, bias, validity, and alignment with real-world performance still need to be explored to ensure assessments remain rigorous and evidence-based.

In this session, you'll discover how to plan a training evaluation that supports more effective learning design, focusing on skills that align with industry needs and can be applied in real-world situations. By the end of the session, you'll have a clear plan for enhancing your knowledge checks and creating skill checklists or rating tools, including lessons from experimenting with AI and prompt engineering in test development.

In this session, you will learn:

  • How to assess both technical and soft skills accurately, ensuring a comprehensive evaluation of learners' abilities
  • How to plan a rigorous evaluation based on impact and desired outcomes
  • How to use evaluation metrics to ensure the practical application of skills in real-world contexts beyond the training
  • How AI can be used to support assessment development, and what limitations must be considered to protect rigor and validity

This session requires no specific tools or technology; AI is covered without getting into its technical background.

Track: Data & Measurement
Level: Beginner
Format: Standard Session (60 Min.)