Session Details
405: Clone Your Best Evaluator: Build AI Rubrics That Actually Capture Quality
Your best evaluator can assess quality in 30 seconds. Can you explain how they do it? Most can't... and that's why AI assessment fails. Here's the problem: L&D teams need to scale quality assessment, but expert judgment is locked in people's heads. Traditional rubrics don't capture it. Generic AI tools can't see it. And stakeholders don't trust it when automation takes over.
Drawing on real implementation experience, including what almost failed, this session shows exactly how to build rubrics that work for both AI interpretation and skeptical stakeholders. Working from your actual assessment challenges, you will construct rubrics for the competencies you need to evaluate, stress-test them with peers, and validate them against the interpretation failures that kill AI credibility. You will also learn the principles behind the practice: structural requirements that enable consistent AI scoring, language patterns that reduce ambiguity, validation techniques that catch problems before they reach learners, and interview methods that extract expertise from SMEs who cannot articulate what they know. Walk out with a completed three-level rubric for your context, validation checklists, SME interview guides, and governance frameworks, all proven in high-stakes certification environments where credential value directly affects hiring decisions.
In this session, you will:
- Design a complete, validated three-level rubric for your assessment challenge, ready to test with AI or human evaluators
- Master the "expert interview method" for extracting implicit criteria from SMEs who evaluate quality intuitively but cannot explain their process
- Get a validation checklist that identifies AI interpretation failures before they undermine stakeholder confidence in your credentials
- Learn the specific language patterns and structural requirements that make rubrics work for both AI systems and skeptical business leaders
- Take home a governance framework that maintains assessment credibility while enabling rapid scale
Participants should be familiar with basic assessment concepts (formative vs. summative, rubric vs. checklist) and aware that AI can be used in learning and assessment contexts. No prior experience building rubrics or implementing AI assessment systems is required. Technical AI knowledge is not needed; this session focuses on the assessment design principles that enable AI, not on coding or platform configuration.