The seismic shift brought about by generative AI chatbots such as ChatGPT is challenging the fundamental pillars of written coursework assessment in higher education (HE). We’ve entered an era where the boundary between a student’s own work and sophisticated, AI-generated content is becoming practically impossible to detect [1].
This isn’t just about plagiarism – it’s an existential crisis for written coursework assignments (essays, critical appraisal reports, literature reviews) as a reliable measure of learning. If we can’t be sure a student produced the work, how can we assure that the learning outcomes have been achieved?
I believe the answer to this lies in moving beyond the written work and returning to a time-tested method: the oral defence or viva voce [2].
Questioning the integrity of written work
For centuries, the essay or report has been the standard instrument of academic assessment. It tests research skills, critical thinking and analysis, the organisation of ideas, understanding of key concepts, and written communication skills.
Today, however, these skills, particularly the generation of coherent and analytical text, are precisely what generative AI excels at. The available tools can now produce plausibly researched assignments in moments, mimicking human style and often evading detection software with ease [3].
This dilemma forces us to acknowledge a difficult truth: the written submission, alone, is rapidly losing its credibility as an assessment of individual student mastery of the course material. Students who rely heavily on AI bypass the critical thinking, drafting, and intellectual struggle that lead to genuine learning.
This era has also produced three types of learner: those who feel that using AI constitutes cheating, those who have fully embraced it, and those who are unsure how to navigate this space ethically and responsibly. Overall, we risk simply credentialing students for their ability to prompt an AI tool effectively, not for their deep understanding of the subject.
Oral defence: the assessment of true understanding
I believe the solution in the AI era is to dramatically reweight our written coursework assessments in favour of the oral defence, or viva voce. This approach is not new – it remains the cornerstone of the PhD examination [2]. However, extending it to undergraduate and taught postgraduate coursework is the necessary next step in the new world of HE.
The oral defence demands that a student explain, justify, and critically discuss the work they have submitted. The written work acts as a foundation, while the oral examination probes for true comprehension. An examiner can ask targeted, follow-up questions that no AI-ghostwritten assignment can prepare for, focusing on the nuances, methodologies, and independent judgements that underpin the submission.
For example, consider a coursework assessment where the written component is worth only 30% and the oral defence accounts for 70%.
The 30% written component assesses the research carried out, the structure, and the quality of the written communication. Students are free to use AI as a research assistant in producing the work, or for editing and proofreading it, but its use must be declared and ethical.
The 70% oral defence will assess the process and ownership of the work:
- Can the student answer challenging or applied questions about their findings?
- Can they defend the choices they made in their methodology and analysis?
- Can they relate their conclusions to broader disciplinary concepts?
- Are they able to provide their own critical theoretical perspectives on the subject?
I believe it is through this direct intellectual exchange that a student’s true learning (their ability to think, articulate complex ideas, and own their research) is revealed. The oral defence is inherently AI-proof because it assesses the person, not just the text.
The academic workload challenge
While the pedagogical argument for this shift is compelling, the practical challenge is undeniable: academic workload. I have yet to meet an academic who is not navigating workload pressure in HE. In fact, we are already witnessing a very high volume of academic misconduct hearings arising from allegations of generative AI use in coursework.
Implementing a 70% oral defence for every student would significantly increase the required contact and marking time for academics, especially for large cohorts. A single, high-stakes oral defence may take far longer than grading a 2000-word essay.
Universities must recognise this reality. If we are to maintain academic integrity and deliver meaningful qualifications in the age of AI, this cannot be a simple reallocation of existing resources.
The transition requires institutional support: smaller class sizes, reassessment of academic teaching-to-admin ratios, and investment in more teaching staff. We need to prioritise the quality of assessment over the scalability of outdated methods.
Generative AI is here to stay. Rather than fighting an unwinnable war against detection [3], higher education must adapt by focusing our assessments on what machines cannot replicate: genuine, human, critical engagement. The oral defence is the key to safeguarding the value of a university degree in the AI era.
References
[1] Khalaf MA. Does attitude towards plagiarism predict aigiarism using ChatGPT? AI Ethics 2025;5:677–88. https://doi.org/10.1007/s43681-024-00426-5.
[2] McCulloch A, Loeser C, Bageas R. Does the viva matter? PhD student experiences of the oral examination and its contribution to examination outcome and researcher development. Assessment & Evaluation in Higher Education 2025;0:1–16. https://doi.org/10.1080/02602938.2025.2566652.
[3] Weber-Wulff D, Anohina-Naumeca A, Bjelobaba S, Foltýnek T, Guerrero-Dib J, Popoola O, et al. Testing of detection tools for AI-generated text. Int J Educ Integr 2023;19:26. https://doi.org/10.1007/s40979-023-00146-z.