By Derek Newton
Reposted from Forbes, with permission.
Mark Biggin describes his Q-SID team as “quite serious scientists” – a contention it’s hard to argue with. He earned his Ph.D. from Cambridge University. He’s taught courses like molecular biophysics at Yale, and for the past 20-plus years has been a staff scientist at Lawrence Berkeley National Laboratory, where he studies genomics. In 2015, he was credited with helping to “resolve dispute about how gene expression is controlled.” He teaches and lectures at the University of California, Berkeley and UCLA.
He’s an impressive scholar who says he was “so naïve” about the level of student cheating on online tests.
As a result, Biggin has some important things to say about how teachers are dealing with cheating, as well as a potentially powerful solution to one particularly challenging type of academic misconduct.
Cheating appeared on Biggin’s radar when he found students were copying exam answers from one another. This year, Biggin and his teaching assistants found that 50 of his 256 students were cheating by collusion – reviewing exam questions in small groups and “blindly copying” answers from one another. This is at an elite school, in an upper-level course that’s a well-known pathway to medical school.
“I was very shocked. Just shocked,” he said. “I could not believe people were doing this.” The situation, he said, made him “emotionally upset.”
He’s not anywhere near the first teacher to be surprised their students were cheating. But he immediately asked himself how big the problem was and if he could prove it. Turns out, as a top-flight data scientist used to studying genomes, he could.
“Slowly, over a month-long period, we were looking at exam pairs or small groups and found some had colluded, some hadn’t,” he said. He said he “slowly got better and better at developing mathematical evidence of collusion,” in a process he described as “an unbelievable amount of work.”
Finding that 19% of his class had cheated by collusion, and that the average benefit of cheating was a full letter-grade boost, motivated Biggin. “I was very impelled to protect the majority of students who were not cheating,” he said.
Biggin, along with graduate student Guanao Yan and Professor Jingyi Jessica Li at UCLA, developed Q-SID, or Question Score Identity Detection, a tool that can analyze exam answers and pinpoint collusion. Q-SID has a 0.3% false positive rate for placing students into Collusion Groups while identifying 50% to 90% of the students who have colluded on a given exam, the tool’s website says.
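To make the general idea concrete: one simple way to look for collusion in graded exam data is to compare students’ per-question scores pairwise and flag pairs that agree far more often than unrelated students do. The sketch below illustrates only that basic intuition; it is not Q-SID’s actual algorithm, and the student data, the matching rule, and the 90% threshold are all invented for illustration.

```python
# Illustrative sketch only -- NOT Q-SID's actual method. It flags pairs of
# students whose per-question scores are nearly identical, the kind of
# signal a collusion detector might build on. All data here is hypothetical.
from itertools import combinations

def match_fraction(a, b):
    """Fraction of questions on which two students earned identical scores."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def flag_pairs(scores, threshold=0.9):
    """Return student pairs whose score vectors agree on at least
    `threshold` of the questions (threshold chosen arbitrarily here)."""
    flagged = []
    for (s1, v1), (s2, v2) in combinations(scores.items(), 2):
        if match_fraction(v1, v2) >= threshold:
            flagged.append((s1, s2))
    return flagged

# Hypothetical per-question scores for four students on a 10-question exam.
scores = {
    "A": [1, 0, 1, 1, 0, 1, 1, 0, 1, 1],
    "B": [1, 0, 1, 1, 0, 1, 1, 0, 1, 0],  # agrees with A on 9 of 10
    "C": [0, 1, 1, 0, 1, 0, 1, 1, 0, 1],
    "D": [1, 1, 0, 1, 1, 0, 0, 1, 1, 0],
}
print(flag_pairs(scores))  # only the A-B pair clears the threshold
```

A real detector would also need a statistical baseline — strong students legitimately agree on many answers — which is presumably where Q-SID’s careful false-positive control comes in.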
If those numbers hold up, Q-SID is a promising weapon in the war for academic integrity. “This is a tool that’s very straightforward. It takes data you already have and in one minute you have a list of students who are likely to have colluded,” Biggin said. That’s a big deal.
Still, in academia nothing is easy and Biggin lamented that some teachers and some schools just refuse to use it, even though it’s free and proven to work. “Some schools don’t proctor, they have an honor code and one school told me they would never allow this tool to be used,” he said. The school would not even use Q-SID internally to learn what kinds of classes or assignments may be prone to misconduct, Biggin said.
And that’s where Biggin’s experience in developing Q-SID, and with cheating in his own classes, offers the most insight. While Q-SID can mathematically establish collusion cheating, Biggin says he found no evidence to support “the idea that students, if you talk to them right, they won’t cheat.” It’s a concept that Biggin calls “just wishful thinking.”
In efforts to combat the rise in cheating that’s come with the shift to remote and online learning during the pandemic, schools have advised professors to shy away from test formats with defined answers, to communicate more clearly about expectations and limitations and to reduce student stress.
The thing is, Biggin did all that.
He reduced the pressure on his students by making his exams open book. He communicated clearly about what resources were and were not allowed during exams. He made his exams timed, so as to limit temptations and access to unauthorized sources. And his students still cheated.
Only after he showed his students what Q-SID was and how it worked, and conveyed that they were likely to be caught, did the collusion cheating stop. “Even in this large class we greatly reduced the number who collude,” Biggin said. In other words, the cheating prevention tactics others suggested didn’t work, but the risk of getting caught did.
“I have tried following the advice given by the academic center, including things they say work, like open book tests and those suggestions are not effective. We’ve shown that. We measured it,” Biggin said.
Biggin knows that other teachers won’t agree with his assessment and concedes their views are strongly held. “But those different views,” he says, “are not backed by data. Ours is backed by data. Those who say ‘if you just talk to students,’ they don’t have any evidence.”
“I’m talking about the world as it is. I measure. I see how faculty are teaching now and I respect them,” Biggin says. But thinking that tactics like changing the types of assessments will stop cheating is misguided. “It’s just arrogance,” he said. “I described and measured cheating. I measured what is happening, whereas people are describing what ought to happen. That advice is not working,” he said. “Those thoughts are deflections. It’s frustrating,” he said.
“They all want to think the best. And so did I,” he said. “But physicists are always telling us, if you have not measured it, you have not understood it. This measures it. Other theories about stopping cheating have no measures,” he said.
Whether Q-SID works is beyond my ability to assess. Some pretty smart people seem to think it does.
But I can assess whether teachers and schools should be using it – and anything else they can think of to deter cheating. They should.
Cheating solutions such as lowering the stakes of tests or replacing objective exams with written assessments or projects are not enough. They are unproven, even untested. They are philosophy, not pedagogy. For every intervention and prevention tactic that’s theorized on the front end, schools and teachers would do well to deploy a detection tactic on the back end. The reason so many won’t is that they do not want to know.