
EdTech Leader Dr. Mitch Colver, Associate Provost for Engagement & Retention at American Public University System, Says This Will Matter

Dr. Mitch Colver serves as Associate Provost for Engagement & Retention. In this role, he focuses on professional empowerment, organizational efficiency, and student success. As a member of the Academic Senior Leadership team, he collaborates to empower success for students and approximately 1,600 remote faculty.

Dr. Colver has been at APUS since 2023 and oversees three service units within the Office of the Provost: the Center for Teaching and Learning, the Office of Curriculum & Assessment, and Academic Insights & Success. He has deep experience in change management, student success analytics, and autonomy-supportive leadership. He frequently champions the idea that increased intentionality amongst faculty, staff, and executives is a critical aspect of leading 21st-century institutions toward success.

Prior to APUS, Dr. Colver served as Vice President for Analytics Community Practice at Civitas Learning where he worked with over 60 institutions in using predictive analytics to drive student success. He was also the founding Director of the Center for Student Analytics at Utah State University, a unit focused on empowering student retention through analytics, collaboration, and institutional optimization.

Dr. Colver earned his Ph.D. in Education from Utah State University, his M.S. in Psychology from Eastern Washington University, and his bachelor’s in Psychology/Music from Brigham Young University-Hawaii. His research has been featured in several media outlets, including Popular Science, Discover, Slate, Smithsonian, New York Magazine, and, internationally, on BBC Radio.

Dr. Mitch Colver, Associate Provost
1. Tell us about your company and the problem it solves

American Public University System is a fully digital institution that delivers higher learning to roughly 89,000 students globally. With nearly 1,800 faculty members across four academic schools, we provide quality education to learners in flexible settings and unique circumstances. Roughly two-thirds of our students (63%*) are active-duty military personnel stationed across nearly all time zones. To us, bringing meaningful, career-tethered learning experiences to these students in situ is our primary mission.

Our scale quickly forces clarity on issues many institutions grapple with, including the proliferation of student misuse of generative AI (GenAI). At this size, winning solutions rarely rely on boutique efforts. Instead, we think about everything systemically.

The systemic problem is that students use GenAI inappropriately not because misuse is inevitable, but because appropriate use is not always clear.

*As of Dec. 31, 2025, 63% of our student body consisted of active-duty students.

2. What is the challenge educators face today that is fixable?

One of the more fixable challenges in the era of GenAI is helping students to see that when they misuse GenAI in a learning environment, rather than getting ahead, they are actually being left behind.

As revealed in a peer-reviewed study by MIT, the hidden learning erosion known as “cognitive debt” that comes from transactional use of GenAI is not always on students’ radar. Many educators react by restricting access and policing usage. But when students misuse GenAI, they actually spend more time confused about their courses and less time learning.

Like other colleagues and administrators at APUS, I had a different internal reaction: “Yes,” I thought, “this issue of cognitive debt makes sense. If you don’t show students how to use technology, they can and will use it in ways that cause self-harm. I bet research also shows that people who bash themselves in the thumb with a hammer (because they were never trained to use it) spend more time in pain and less time driving nails.”

The real question is: How are we teaching students to use GenAI properly? The opportunity is to teach our learners to build the durable skills that employers now expect our graduates to bring with them into their careers (including productive use of GenAI!).

In other words, prohibition is not pedagogy.

Detection tools, policy statements, and academic integrity warnings have their place, but they are not instructional design. When they become the primary response, students are likely to turn to unsupervised, black-box usage (learning about GenAI independent of the structured support that institutions should be eager to provide).

And when these students inevitably discover GenAI can produce passable outputs, they begin to optimize around measures like speed and completion, not the depth, nuance, understanding, and disciplinary relevance we try to produce in the academy.

Our role, as an institution, is to design environments where students are taught appropriate applications of these new tools, applications that supercharge learning and unlock aspects of human functioning previously untapped.

The problem isn’t the hammer; it’s handing it to someone who hasn’t been taught how to swing it.

3. What is the challenge educators face today that will persist?

Cheating with GPT isn’t a new problem, just an accelerated one. Even with perfect academic policies, widespread monitoring, and robust proctoring, there will always be a temptation for students to collapse learning into a quest to get the right answer rather than to become a new version of themselves through productive struggle (are we even making it clear to them that that is the point of the academy?).

Since GenAI makes it increasingly easy to replicate the artifacts of productive struggle (e.g., a 20-page paper), institutions that focus on AI prohibition while keeping their 20- to 30-year-old rubrics in place will actually be doing students more harm than good.

The evolution of course material is part of the answer, and incorporating GenAI into the learning process doesn’t have to be that hard of a pill to swallow.

We recently had a compelling example of a student who was determined to use GenAI in the learning process.

Instead of asking GPT to “write my paper,” the student said, “Help me write a prompt that will force you to help me write my own paper with your help but without cheating.”

And what the GenAI built was impressively sophisticated. First, GPT asked the student to provide the assignment, rubric, and readings. Then, realizing the purpose of the assignment was critical analysis and organization of thought, the AI kicked off a series of interview questions, asking the student to reflect on the available readings first. One question at a time, the GPT essentially forced the student to write out much of the text that would be ultimately used in the final product.

Dr. Mitch Colver at Commencement 2025

Once the system had exhausted this line of questioning, the learner and the machine worked together to settle on a viable outline and select a single thesis that the student felt was most representative of the main argument they wanted to make.

Stream of consciousness first, then organization, then academic decision making. These are hallmarks of real learning.

What’s even more compelling is that the student was able to provide their faculty member with the entire chatlog as evidence of the productive struggle we truly want to see our learners engaged in. The chatlog shows a visible chain of student reasoning, summarizing, critiquing, comparing against sources, and testing the student’s ideas.

The record shows that, although augmented by GenAI, it was the student who did the lion’s share of the work arriving at a position of their own—a defensible argument they wanted to make in the paper they submitted. Did learning occur? Yes, absolutely.

Was the learning process augmented by GenAI? Yes, and in ways that any trained educator would find compelling.

The question is whether our faculty are prepared to rebuild assignment instructions and rubrics to be compatible with this kind of creative augmentation. Cheating is nothing new, and innovating the curriculum to maintain relevance in light of new technologies has always been an evergreen activity.

4. What areas of education are being overlooked?

With any new technology, if the problem isn’t the hammer but instead the absence of training, then the most overlooked area in education right now is teaching students how to think with GenAI, not just how to delegate their work to it.

We’ve already seen what happens when that training is missing. Rather than learning to use the new tools transformationally, students simply resort to using them transactionally. And that’s where the cognitive debt that MIT warns us about accumulates (and which students will then carry into their careers, to disastrous effect).

And this is why our University does not believe the solution is primarily about prohibition. Instead, we actively encourage our faculty and students to incorporate GenAI into their learning experiences in legitimate, transparent ways. In fact, we’ve already seen hundreds of our courses rewritten to include transparent use of GenAI as a requirement for completing the coursework.

In our University’s English courses, students use GenAI to rewrite classic short stories (you know, Edgar Allan Poe or Hermann Hesse). With these new/old texts in hand, they then compare the AI output to the original great writing with a scrutinizing eye, asking which version appeals more. And, not surprisingly, students all draw the same conclusion: “Wow, GenAI output is relatively bland, flat, and tired compared to the great authors’ writing within these classic short stories!”

And this is when their well-trained English instructors pounce with an incredibly important, intellectually nuanced set of questions: “So what do you think your writing sounds like when you ask GenAI to do work on your behalf? Bland? Uninformed? Soulless? And if you practice that in college, how is your writing going to sound when you get into your career?”

It’s these kinds of exercises that prompt students to learn the strengths and limitations of GenAI, and prepare them for how to leverage these tools.

And these durable skills of AI interrogation lay the foundation of an emerging skillset that I’ve named and discussed elsewhere, one that makes use of GenAI in the academy more likely to go well: Socratic Safeguarding.

Socratic Safeguarding is the practice of designing AI interactions so that learning remains visible, effortful, and owned by the user. It ensures GenAI is used to strengthen thinking—not replace it.

Socratic Safeguarding does this by requiring users of GenAI to deploy a set of four durable skills that prevent AI misuse and the associated cognitive debt that can emerge:

  1. Demystify the strengths and limitations of GenAI.
  2. Develop healthy skepticism for GenAI output (“Trust but verify”).
  3. Practice proactive self-awareness while using GenAI (aka “Metacognitive Hygiene”).
  4. Prioritize transformation over transaction.

In short, these skills are grounded in the reality that, with GenAI, most users still expect way too much from the technology and not enough from themselves. And without that awareness, students will continue swinging the hammer of GenAI blindly, producing output, yes, but not building anything within themselves that lasts, which is the point of higher education.

5. What do you foresee will be a challenge in three to five years?

The most overwhelming challenge that will emerge, particularly if we fail to show our students the joys of appropriate GenAI usage, is a generation of learners who were allowed to academically self-harm using GenAI rather than transforming themselves for the future.

The distinction between traditional and AI-supported work will fade. Prohibition and detection will become increasingly irrelevant as markets ask less about how something was made and more about whether it serves a purpose at scale. The question will no longer be: “Did they use AI?” but “Did they accomplish something worthwhile?”

Sadly, if we fail to adapt to AI in the next few years, the cognitive debt amassed by students will come due with interest. The successful institutions of the future will not be those that restricted AI, but those that redefined learning in its presence. AI is not going away. And neither is the need for humans who can properly use a variety of tools, from hammers to GenAI, to function productively in a complicated world.