By Derek Newton
Reposted from Forbes, with permission
Any teacher will tell you that generative AI has driven a significant spike in student cheating. Teachers use words like “massive” and “exponential” to describe the practice of students offloading their intellectual labor and academic responsibilities to generative AI chatbots such as ChatGPT.
While some teachers and schools are holding the line against AI-fueled cheating by checking work and enforcing penalties for unauthorized shortcuts, others have essentially surrendered to what the big AI companies say is the inevitability of generative AI, not just in academic settings but everywhere. Educators who succumb to this marketing will reliably say that students aren’t cheating; they are learning to use the tools that workplaces will require. They argue that, because students will have to use generative AI in their careers, schools should do more than merely allow it: they should actively encourage students to use AI in their courses.
But that assumption may be wrong.
There’s already good evidence that job applicants and working professionals may not use AI as much as we’ve been told. In fact, several examples indicate that using generative AI professionally may, in some cases, be counterproductive, or outright banned, in real work environments. And even where it isn’t banned or restricted, the effectiveness of generative AI as a business tool may already be on the decline.
For example, the public roll call of companies that have reportedly banned or limited AI use is impressive. JPMorgan Chase, Verizon, Deutsche Bank, Bank of America, Citigroup, Northrop Grumman, Samsung, and Apple are all on the list. A year ago, a survey of business leaders found that more than one in four (27%) banned their employees from using AI at work.
While some companies are banning employees from generative AI, others are arming clients and customers with tools to unmask it or block it themselves, dramatically cutting its effectiveness.
As an education writer, I receive dozens of story pitches a day. Qwoted is a company that helps PR agencies connect with reporters and helps reporters find good sources for their stories. Qwoted recently launched a tool that lets journalists “Check pitches and responses for AI-generated content.” If I and others become less inclined to take AI-generated words seriously, or start holding a lack of human investment against senders who are asking for ours, it won’t be long before those senders, the PR firms in this case, stop using AI to generate pitches.
Similarly, The Authors Guild has unveiled a certification process called “Human Authored,” allowing writers to verify that they personally wrote something. Having the badge on written work sends a message. So does not having it. Publishers, editors, and the public will learn to spot it and make decisions accordingly. As that happens, the value of AI text in writing will drop.
In the legal community, many lawyers and law firms have been sanctioned or fined for filing AI-drafted legal documents, often caught because the AI invented cases and citations. In one recent example, lawyers were fined for submitting an AI-generated legal filing, but the large law firm they worked for escaped a separate fine because the firm “had trained its employees not to use AI software” to draft documents.
In another example, public, crowd-sourced information sites such as Yelp have been overrun with AI-generated content and reviews. In response, Yelp announced that it “has significantly invested in methods to better detect and mitigate such content on our platform,” again reducing the utility of creating the AI text in the first place. If you can’t use it, there’s no reason to learn how to create it.
It’s not just writers, PR firms, law firms, review sites, or financial companies that are chipping away at AI usage in business settings. At least one AI company has drawn a line too.
Anthropic, the generative AI company backed by Google, tells job applicants, “While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills.” After applicants are asked to affirm that they understand, Anthropic asks them to write 200 to 400 words on their job desires — without AI.
In other words, even an AI maker has drawn limits around when and where AI is appropriate or helpful in a professional setting.
The trend is easy to see: in an increasing number of professions, you’d better have some, as Anthropic put it, “non-AI-assisted communication skills.”
Which brings us back to teaching.
If blocking or restricting generative AI at work continues to grow, teaching students to use it will become less and less useful. And if teachers allow or encourage students to use AI as a writing assistant or writing replacement, they may actually hamper students’ future job performance, especially if relying on AI in school erodes their ability to write or communicate without it.
Jack Castonguay, a Hofstra University professor of accounting and auditing theory, recently told an industry publication, “We see the reliance [on AI] significantly when they have to give a presentation or take an in-person exam. It’s clear they have gotten to that point by using AI and can’t apply the logic on their own. My bigger concern is [that] by such a reliance on AI they will lack the critical thinking and synthesizing skills that are still valued even with AI. To use a sports analogy, they are only bowling with gutter guards — what happens when those aren’t there?”
Using generative AI may be a skill that can be learned and practiced. And it perhaps should be taught. But teaching students to use AI under the presumption that they’ll need it in their career may already be an outdated idea. Being a good AI prompter is likely to end up being a skill that’s good to have, but insufficient. Instead, learning how to communicate and write without AI will be the far more valuable job skill. Teachers, and their students, may be better served by focusing on learning, without the gutter guards.