UNIVERSITIES ARE SOLVING THE WRONG PROBLEM: AI IS BREAKING OUR OUTCOMES, NOT OUR ASSESSMENT
François Cilliers of the University of Cape Town presenting on Artificial Intelligence at the HELTASA (un)Conference hosted by Walter Sisulu University in East London.

Introducing plagiarism detectors, redesigning tests and adding “AI declarations” to coursework is a futile exercise as universities scramble to catch up with Artificial Intelligence.

That is the view of François Cilliers, a Professor of Health Sciences Education at the University of Cape Town, who tabled his presentation at the HELTASA (un)Conference hosted by Walter Sisulu University.

Addressing a packed gallery, Cilliers said most universities think AI’s biggest disruption lies in assessments such as tests, assignments and exams.

However, argued Cilliers, the real problem is not the integrity of exams, but the integrity of learning itself.

“For years, universities have been waging war on plagiarism bots, chatbots, and generative tools, each new AI model triggering a fresh wave of panic and policy reform. This is a misdiagnosis that risks dismantling the very foundations of curriculum design,” said Cilliers.

Cilliers charged that AI is changing what it means to “know” and “do” so profoundly that many traditional outcomes are now outdated or meaningless.

“When students use ChatGPT or other tools to generate essays or solve problems, institutions panic and rush to ‘fix’ their assessments. We bandage up our assessment and carry on until the next wave of AI comes in and breaks it again,” he said.

The crux of Cilliers’ argument is that academia is fighting the wrong war. Instead of asking how to stop students from using AI, educators should be asking what learning still matters in the age of AI.

“When assessments become the battlefield, curricula morph around defensive strategies instead of pedagogical purpose. The obsession with policing students risks producing ‘AI-distorted learning’, where outcomes are warped to fit the tools rather than the teaching,” Cilliers said.

He added that by refocusing on outcomes, universities can distinguish between what must remain human (skills rooted in empathy, ethics and real-world engagement) and what can be enhanced, accelerated or even replaced by machines.

To navigate this “uncharted territory,” Cilliers proposed a radical new framework: a five-part Outcome Review Typology designed to help educators classify learning outcomes along an AI spectrum:

  • Human-Centric: Skills that demand “biological memory” and lived experience, such as community-based health interventions.
  • AI-Aided: Outcomes where AI acts as a supportive tool, not a substitute.
  • AI-Enabled: New learning frontiers made possible by AI, such as predictive analytics in undergraduate research.
  • AI-Dominant: Tasks where AI does most of the work, and humans shift to evaluating and applying the results.
  • Obsolete: Skills that have lost relevance entirely in an AI-driven world.

“The typology is not a fix; it’s a compass. It challenges universities to ask difficult, even existential, questions about the future of learning. What does it mean to ‘know’ something when AI can know it faster? What does it mean to ‘create’ when AI can generate?” Cilliers added.

For Cilliers, in this new reality, the universities that thrive will be those that stop patching their tests and start reinventing their outcomes. Because in the age of AI, the question is no longer how we assess, but why we learn.

By Sinawo Hermans
