Authentic Assessment in AI-Infused Learning Environments: An Evidence-Centered Design Framework and Rubric Toolkit for Academic Integrity

Authors

  • Arben Hoxha, University of Tirana
  • Elira Leka, Aleksandër Moisiu University of Durrës

Keywords:

Authentic Assessment; Generative AI; Academic Integrity; Evidence-Centered Design; Constructive Alignment

Abstract

Generative AI tools have destabilized traditional take-home assessment by lowering the cost of producing fluent text, code, and problem solutions. Institutional responses often oscillate between prohibition and permissive use, yet both approaches fail when assessment design does not specify what counts as credible evidence of learning. This article proposes a practical framework for assessment integrity in AI-infused learning environments that shifts attention from detection to design. Using an integrative synthesis of research on authentic assessment, constructive alignment, academic integrity, and emerging guidance on generative AI, we develop an evidence-centered assessment design workflow and a rubric toolkit that make acceptable AI use transparent while preserving the core purpose of assessment: eliciting student thinking. The framework operationalizes five design decisions: defining outcome-relevant evidence, setting AI-use boundary conditions, embedding process traces and checkpoints, using rubric criteria that reward disclosure and reasoning, and adding verification moments such as oral defense or short in-class microtasks. We present a model (Figure 1) and a rubric matrix (Table 1) that can be adapted across disciplines for essays, projects, laboratory reports, and portfolios. The contribution is an implementation-ready package that reduces incentives for misuse, supports equity through clear rules and scaffolding, and enables program-level quality assurance through calibration. We conclude with implications for policy, staff development, and future research on learning outcomes in hybrid human–AI work practices.

Published

2026-03-03