Why exams are key to our children’s education
May 27, 2025

Artificial Intelligence is a loaded gun that can backfire just when you least expect it. Writing up a report the other day, a friend asked ChatGPT to summarise some of the points she needed to make. It responded rapidly — perhaps a little too rapidly — and included an intriguing piece of information she knew nothing about. "Really?" she asked. "Fascinating, where did that come from?"
She double-checked. There was no evidence for it at all. AI, it seemed, had made it up. Eager to please, it had jumped to a conclusion, inserted an idea that seemed to make sense, and decided: "What the hell? Let's keep it in." I gather it's known as "hallucinating" in the trade. At least that's what AI told me via Google: "Yes," it said, "AI systems, particularly large language models, can sometimes 'hallucinate', meaning they generate incorrect or misleading information that appears plausible but is not actually true."
Yet another reason for caution in an area of research that is causing palpitations in journalism, business and, of course, the academic world. As The Times reported last week, AI use is now so widespread among students that checking whether work is original has become "a profound challenge" for universities. "If vast numbers of students are using chatbots to write, research, code and think for them, what is the purpose of a traditional education?" the report asked.
As examiners comb through the papers set for Highers in Scotland, they are watching not just for accuracy and originality but also for the tell-tale signs that what they are reading owes more to an alien intelligence than to the keen perception of a bright pupil.
Exams, of course, test not just acquired knowledge but also the ability of students to think on their feet — with the internet blocked, the only source of information is the old-fashioned human brain. For school or university work, however, AI has become a standard tool, giving instant access to information that took a previous generation weeks to accumulate. I envy the ability of the modern student to complete that research in minutes. It may indeed challenge traditional ideas about education, but does it necessarily undermine them?
I turned, of course, to my grandson, who has just sat his Highers. He seemed phlegmatic. AI, he said, has become a standard tool in searching for statistics and sources, but not one to be entirely trusted. “It’s liable to make statistics up, so you’ve sometimes got to double-check things,” he said. He drew the line at getting it to write his essays: “I don’t think of it as something to do work for me, instead more as something that can enhance the work I do, to make it more detailed and advanced.”
If that were the end of the AI challenge, we could simply conclude that this is no more than a speeded-up version of the plodding research work most of us grew up with. It is not that simple. My grandson's English course involves writing a text of 1,000 words, which counts for 20 per cent of the final grade. Because AI systems are becoming increasingly adept at imitating styles and fooling examiners, the SQA, which runs Scotland's exam system, is finding it ever harder to spot the cheats. The original solution was to use one AI program to detect work generated by another, but now there are models that can alter text to strip out suspicious word patterns and produce a plausible writing style that looks entirely genuine.
Already the SQA is considering dropping the pre-written essay component of the exam — but this is only the start of it. How does a teacher judge a student’s coursework, spot the genuine talent or call out the obvious chancers? For this I consulted an academic friend who has now accumulated a body of first-hand experience in a field that is altering the whole way in which modern education — whether at school or university level — is conducted.
She concedes that among her students are some who are capable of "winging it" and whom she may not be able to detect. As she points out, however, there has always been a small percentage of wily chancers who have plagiarised written material and got away with it; she doubts that percentage has greatly increased now that they are armed with ChatGPT.
The Times cites research by the student accommodation company Yugo, which suggests that, while 43 per cent of UK university students are using AI to proofread academic work, and 33 per cent to help with essay structure, only 2 per cent of the 2,255 students questioned admitted using it to cheat on coursework. The obvious response is to raise a questioning eyebrow; I doubt a seasoned cheat will confess quite so readily. More important is to accept that AI is now an integral part of school and university life, and to steer a course between the hype and the fear — between those who see AI as offering an educational panacea, and those who are shocked by the prospect of traditional teaching being rendered irrelevant.
“I know a lot of people are concerned that it’s going to damage the creative process,” said my university source, “but we must try and steer people towards using it to create efficiencies, without losing the necessary human elements — the creativity, the conversation, the relationship building.” That is down to the student to appreciate and the teacher to encourage.
How we learn is changing, but what we do with what we have learnt is something else altogether. ChatGPT may be a friend (albeit an unreliable one) at school or university, but he (they?) will not be there for life. Sooner or later the AI crutch has to be discarded, and then we are on our own again.