Professor Reveals Shocking Reason Students Are Intentionally Writing Poorly
Students who don’t want to be flagged for using AI to write their college essays are turning to a counterintuitive strategy for getting a good grade: writing poorly on purpose.
In a post on the Reddit forum r/professors, Dr. Sam Illingworth—a professor at Edinburgh Napier University in Scotland—said he has noticed a disturbing trend among his students.
According to Illingworth, students are intentionally inserting typos and bad grammar into their work to keep AI detectors from flagging it.
“Some are running their (human-written) work through ‘AI humanizer’ tools just to avoid false positives,” Illingworth wrote in his post.
“We’ve created a system where competent writing is treated as suspicious.”

‘The stakes are too high’
In an accompanying blog post, Illingworth noted multiple instances of students who were penalized incorrectly, all of which had major consequences for their studies.
Illingworth pointed to a 2023 study of 14 different AI-detection systems conducted by an international team of researchers.
The study found that none of the 14 systems reached 80 percent accuracy; the researchers identified “serious limitations” in the tools and deemed them “unsuitable” for sniffing out AI cheating in classrooms.
“Our findings strongly suggest that the ‘easy solution’ for detection of AI-generated text does not (and maybe even could not) exist,” the researchers wrote.
AI-detection tools are especially bad at identifying human-written work when the writer is a non-native English speaker.
An April 2023 Stanford University study found that seven different AI-detection tools flagged 61 percent of essays written by non-native English writers, and that 97 percent of the essays were flagged by at least one detector.
“The detectors are just too unreliable at this time, and the stakes are too high for the students to put our faith in these technologies without rigorous evaluation and significant refinements,” warned James Zou, the study’s senior author.
‘Institutional prejudice’
In an email to Newsweek, Illingworth said his biggest concern with the tools is bias.
“The false positive rates are bad enough on their own, but when those false positives fall along lines of race, nationality and first language, we are not talking about a flawed tool,” he explained.
“We are talking about institutional prejudice, automated and given a confidence score.”
Illingworth added that he does not believe he can reliably spot AI writing by eye, and that the technology is now good enough that “basing academic consequences on [eye detection] is dangerous.”
‘No training’
“AI has genuine uses in education—as a thinking partner, a drafting tool, a way to stress-test arguments,” he continued.
“The issue is not whether students use it but whether they understand what it is doing and where it fails. That requires critical AI literacy, not prohibition.
“Most staff have had no training in how to teach with or about AI. We are asking them to police something they have not been equipped to understand.”
‘A dead end’
For his part, Illingworth believes the solutions are “all on the educator side”: redesigning assessments and investing in training for staff.
Importantly, he also believes that educators need to “stop framing this as a discipline problem.”
“Students are adapting rationally to the tools available to them,” he said. “The question is whether we help them do that critically and ethically, or whether we just try to catch them.
“Detection is a dead end.”