203 Content Knowledge or Deduction Skills: Creating Measurable Assessments

1:15 PM - 2:15 PM PT
Wednesday, October 26

Tracks: Data & Measurement

If someone completes a learning experience, can you say that they “know” the information? Many learning experiences use assessment techniques such as multiple-choice test questions or scenarios to measure the user’s content knowledge; however, these assessments are often flawed. From multiple-choice questions whose correct answer can be guessed to scenario questions that read more like a soap opera than a situation anchored in what someone actually needs to do, assessments often don’t measure what you think they do. Ineffective assessments can also create confusion about on-the-job performance and frustrate organizational leaders who want more robust data on the effectiveness of learning experiences.

In this session you’ll have the opportunity to explore sample assessments in the live game Multiple-Choice Mayhem, the only gameshow where poorly written questions help you win big. You will learn multiple-choice question-writing rules, such as including plausible distractors, and how to recognize common test-item pitfalls, like convergence. You will explore how smart test-takers exploit assessment flaws, such as grammatical cues and distractor length, to make educated guesses. Next, you’ll examine scenario-based assessments and their common pitfalls, such as including too much information and failing to connect the scenario back to the individual’s role and organization. Finally, you’ll learn how task analysis, a technique that breaks a job down into its performance criteria, can be used to create measurable assessments. You’ll leave this session with a task analysis template that can be used to create multiple types of assessments, a 30/60/90-day plan for auditing your organization’s learning experience assessments, and actionable steps to make them measurable.

In this session, you will learn:

  • How to connect assessments to stated learning objectives
  • About common assessment-writing pitfalls and how to avoid them
  • About the consequences of relying on poorly written assessment items
  • How to conduct a task analysis and its usefulness for writing measurable assessments

Technology discussed:

Poll Everywhere, Miro

Cara North

Founder and Chief Learning Consultant

The Learning Camel

Cara North is an award-winning learning designer who has worked across learning and development for organizations such as Amazon, The Ohio State University, Silfex, and Circulo Health. She currently runs her own consulting firm, The Learning Camel. Cara also teaches as an adjunct professor at Boise State University in its Organizational Performance and Workplace Learning (OPWL) program.