Colleges Turn to Oral and Handwritten Exams as AI Disrupts Assessments

Monday, December 15, 2025, 17:00, by eWeek
In higher education, cheating is a transgression that academic institutions have long prided themselves on detecting and deterring. Until recently, at least.

The twin storms of the COVID pandemic and the arrival of ChatGPT in 2022 have wreaked havoc on longstanding assessment methods, as students have outsourced take-home exams, essays, and even coding assignments to AI platforms. 

Called cognitive offloading, the pervasive use of AI in higher education has created a crisis of trust for teachers and students alike. Faculty members are no longer confident that submitted work reflects a student’s understanding, while students navigate an environment where the boundaries between acceptable assistance and outright substitution remain unclear and inconsistently enforced. 

The scope of the problem was highlighted in a recent Inside Higher Ed-College Pulse survey of US college students, which found that 85% had used AI in their coursework, including for brainstorming ideas and preparing for quizzes, and that a quarter admitted to using AI to complete assignments outright. Nearly 30% said colleges should redesign assessments to be more resistant to AI use, including oral exams. 

With university professors vexed by their secondary roles as AI detectives, and students buoyed by a rapidly evolving technology they have firmly embraced, the solution that has emerged is a melange of old-school assessment approaches: a return to handwritten blue-book exams, oral presentations, and device-free evaluations that prioritize explanation and reasoning over polished papers.

AI-free assessments

In a 2025 study published by academic publisher Taylor & Francis, researchers found that generative AI use in higher education has created a “wicked problem” that will require a diverse set of solutions. 

“Our findings demonstrate that the GenAI-assessment challenge exhibits all ten characteristics of wicked problems,” researchers noted in sharing their findings. “For instance, it resists definitive formulation, offers only better or worse rather than correct solutions, cannot be tested without consequence, and places significant responsibility on decision-makers.”

The researchers recommend that educators be given institutional permission to develop their own evaluation systems to navigate these AI-driven assessment hurdles. That recommendation has prompted several creative responses across higher education that speak to how student assessment itself is being reexamined in the age of AI. 

Oral exams, in particular, have become a popular way of sidestepping the AI problem. Whether in person or over video, they require students to explain concepts, defend their reasoning, and synthesize course material in real time, making it far more difficult to rely on ChatGPT or Claude to substitute for their own understanding.

The appeal of oral exams for professors is manifold. They allow educators to focus on comprehension and nuanced thinking, and they free instructors from relying on unreliable AI-detection software that only adds another layer of frustration to an already complicated workload. In many cases, they also improve the professor-student relationship by removing the adversarial dynamic of trying to prove misconduct after the fact. 

That said, oral exams are not a universal fix. Besides being difficult to scale for large classes, they are time-intensive and place additional demands on faculty already stretched thin. Even proponents acknowledge they work best as part of a broader mix of assessment strategies rather than as a wholesale replacement for traditional exams. 

In addition to oral exams, other workarounds are gaining traction. Some instructors have returned to handwritten, in-class testing using blue books, cutting off access to devices entirely. Others are redesigning assignments to emphasize drafts, annotations, and live problem-solving sessions that require students to show how they arrived at an answer. For some, shifting evaluations toward presentations, labs, and supervised collaborative exercises has been successful in weeding out the use of AI tools. 

New old ways of learning

Students themselves appear more receptive than expected. Many report preferring oral exams to traditional tests, saying they are more engaged and better able to demonstrate what they actually know. Some even use AI constructively, generating practice questions or mock interviews rather than submitting AI-generated work. 

As tech leaders like LinkedIn co-founder Reid Hoffman have argued, generative AI is exposing the fragility of homework-driven, easy-to-grade assessments. The push for colleges to devise harder-to-game evaluations that privilege reasoning, explanation, and real-time understanding is unlikely to produce a completely AI-proof system, but it should produce a more demanding one, forcing higher education to be clearer and more honest about what learning is meant to show in an AI-driven world. 

Also read: AI majors surge nationwide as universities launch new degree programs and students and faculty increasingly rely on tools like ChatGPT.
https://www.eweek.com/news/colleges-turn-to-oral-exams-ai-disruption/
