By Derek Newton
Reposted from Forbes, with permission
Though it may seem like a long time ago, it was just this January that Dr. Claudine Gay resigned as President of Harvard University.
As you may recall, the circumstances were partisan. But one of the proffered reasons for her departure was an accusation of academic malpractice: the appearance of plagiarism in some of her scholarly work. Opponents of her presence, policies, and practices combed her dusty written papers for evidence of academic misconduct.
Leaving Gay, and Harvard, and elite-institution-hating out of it, if the Gay/Harvard situation is how we’re collectively going to handle academic misconduct allegations from now on – in public, with public judgment, and as gotcha politics – absolutely no one is ready for what’s coming.
Given that the plagiarism attacks on Gay seemed to work, scouring the academic record of anyone influential, powerful, or famous is likely to become standard practice. We’ve already seen the back-and-forth start.
But an increasing number of accusations is a problem, not the problem.
The problem will come when Generation Generative AI graduates and starts earning promotions.
As today’s students grow into tomorrow’s business, government, or social leaders, accusations of boring old plagiarism will be dwarfed by accusations of using AI-created text to deceive and cheat. After all, plagiarism and AI deception are the same sin – faking the work and taking the credit.
Among current students, the average across 12 different surveys conducted over the past year shows that nearly four in ten (39%) admit to using ChatGPT or other generative AI on academic work. Because surveys about illicit or disfavored behavior are known to undercount it, actual use of AI in academic work is likely much higher.
Turnitin, the company that schools have used for more than a decade to detect plagiarism, also has a widely used tool to detect text created by AI. In April, the company announced that in just one year, their AI detector found at least 22 million student papers containing at least 20% likely AI content, and more than six million papers in which the AI content was a shocking 80% or more. That’s more than 16,000 papers every single day in which at least 80% of the material was likely produced by AI, and that’s just among the schools using Turnitin.
That’s a ton of AI being used in today’s classrooms. In most of those classrooms, submitting generative AI output as student work, or using it without at least citing it, is prohibited. It’s rightly considered cheating. But teachers and schools have been reluctant to admit it, let alone address it.
Eric Anderman, interim dean at Ohio State University’s Mansfield campus, told U.S. News that most instructors underestimate just how rampant AI cheating is. “We think we’re underestimating it because people don’t want to admit to it,” he said.
At Central Michigan University, Brad Swanson, vice president of the Division of Student Affairs, recently said of the school’s graduate students, “One in four theses or dissertations are having to be sent back to the student because of plagiarism or copyright violations,” something he modestly described as “a relatively significant problem.”
Currently, schools seem blind to what this future holds, content to let students use AI willy-nilly; most have been reluctant to create and enforce any strict policy on AI use in concert with their academic integrity policies. Some schools, such as Vanderbilt University and the University of Texas at Austin, have even actively unplugged their AI detectors, leaving teachers, and the schools themselves, largely in the dark about AI use and sending the message that using AI on schoolwork won’t be too heavily investigated or contested.
A spokesperson for the University of Texas said their decision to turn off their AI detection system was based in part on doubts about the accuracy of those systems and that using them would be “more likely to clog our system with false accusations than to catch people using AI inappropriately.” And, the spokesperson added, the school has “worked hard to incorporate AI into assignments so that students have an opportunity to learn to use AI tools as part of their work.”
But that’s the problem.
Because most schools leave AI use policies to individual instructors, and because some educators, as at the University of Texas, encourage students to use AI bots in their assignments, millions of students are legitimately and appropriately using AI in their academic work and research every day.
The problem is that the Gay/Harvard debacle showed that the public cannot, or does not care to, distinguish between appropriate use of non-authentic work and outright academic fraud. Running tomorrow’s AI detectors on today’s work – legitimate or not – will yield a bevy of accusation-rich hits that are certain to look like, and may very well be, cheating.
In other words, whether they’re cheating or not, nearly every student who is using AI in their college work is building and burying their own career landmines, ready to explode as they attain any level of visible success. As students use AI and graduate, they are one motivated adversary, one controversial public statement, away from professional ruin.
In fact, dragging someone into the public square over cheating allegations will, in all likelihood, prove very easy because, unlike plagiarism cases, detecting the use of AI in written work is opaque. Generative AI is a black box, and the technology that searches for AI text is a black box making judgments about a black box. And while good, purpose-built AI detectors have repeatedly proven reliable and accurate, dozens of products claim to detect AI text but do so very poorly, which means the same paper could score 10% likely AI-generated on one detector and 90% on another. Consequently, anyone motivated to allege academic malpractice by AI could shop multiple detection systems for the answer they want.
Once again, it’s unlikely that the public will understand, or care about, the distinctions.
But it’s not just cheaters who will be toppled and humiliated by academic misconduct accusations. Schools will be hurt too as their high-profile graduates get the Gay/Harvard treatment over and over again. It will be an uncomfortable place to be, especially for schools such as Texas, which recently fought for and won the right to revoke awarded degrees when academic misconduct is detected.
Could schools be overrun by dozens, hundreds, even thousands of cases of alumni with evidence of AI use in their academic portfolio? With activists, donors and alumni clamoring for action such as degree revocation?
Possibly. It’s not as though we have not seen previews of that already.
Whether they’re looking away from misconduct or carelessly encouraging their students to litter their academic work with AI, schools and professors are setting up future business, civic, or political leaders for the Claudine Gay treatment and putting themselves in very awkward predicaments. Today’s schools are giving us an entire generation of people who will spend a lifetime building a career and have plenty to contribute, every single one poised for public undoing by accusations of academic misconduct at any time, for any reason. Or for no reason at all.