Overview
- A Deakin University paper published in Assessment & Evaluation in Higher Education argues that generative AI has fundamentally reshaped assessment in ways that resist a universal solution.
- In interviews conducted in late 2024, 20 unit chairs at a large Australian university described confusion, heavy workloads, and no reliable path to AI‑proof exams.
- Educators reported difficult trade-offs, noting that oral or handwritten in‑person assessments can be more AI‑resistant yet scale poorly, strain staff time, and risk disadvantaging some students.
- The authors urge universities to stop chasing a silver bullet and to authorize localized, iterative assessment redesign that openly balances competing priorities.
- Separate reporting highlights unreliable AI‑detection tools and data‑privacy concerns, raising the risk of false accusations and escalating an arms race with generative tools.