
Learning Science: Artificial Intelligence in Courseware Can Enhance Student Learning

VitalSource researchers share new findings about leveraging the power of AI in education, win award for Best Open Data Set

Researchers from VitalSource Technologies, a leading education technology solutions provider, recently presented new insights into the ways that artificial intelligence (AI) can be deployed to enhance student learning through quality, affordable courseware.

VitalSource® has been studying the impact of “Learning by Doing” (also known as the “Doer Effect”) for years. This learning science principle shows that students who answer practice questions while reading new content achieve higher learning gains than those who only read the content. However, developing custom courseware, complete with the high volume of formative questions required to make it effective, is time-consuming and expensive.

VitalSource Learning Science Specialist Rachel Van Campenhout said, “We really wanted to figure out a pathway to democratize the Doer Effect, making this Learn-by-Doing approach more accessible to all students—not just those at institutions with greater resources to develop custom courseware. The focus of our research has been on utilizing AI to generate Learning-by-Doing more affordably in our courseware, and the findings are very exciting.”

Van Campenhout and VitalSource Director of Research and Development Benny G. Johnson, PhD, recently presented their latest findings at the 22nd International Conference on Artificial Intelligence in Education (AIED) and at Learning at Scale 2021 (L@S), where they won the Best Open Data Set Paper Award. The findings show that AI can automatically generate questions that match the quality of human-authored questions on student engagement, difficulty, and persistence metrics.

Johnson said, “Our analysis, which is based on 786,242 total observations of student-question interactions, is the largest evaluation of the use of AI to automatically generate questions and then evaluate the quality of those questions through performance metrics and student data. Put simply: this hasn’t been done before. No one has been able to prove at this scale that automatic question generation can definitively enhance content and support learning gains.”

Using AI for automatic question generation has become a popular research topic, but, as a systematic review published in 2020 by Kurdi et al. found, few studies evaluate question difficulty, and even fewer use student data from a natural learning environment. The papers recently presented at L@S and AIED evaluate courseware that mixed automatically generated questions with human-authored questions using student data—a unique situation and data set that provided great insight into the performance of these automatically generated questions.

Van Campenhout explained, “When discussing AI to generate questions, people often ask, ‘Are they good?’ What they mean is, ‘Are they as good as human-authored questions?’ Good is subjective, but in this research, we use empirical methods to try to answer that question.” These studies use a series of performance metrics—engagement, difficulty, and persistence—to compare automatically generated and human-authored questions. The analysis on each metric revealed no evidence that students perceived a difference in the origin of these questions or interacted with automatically generated questions any differently than questions that were human-authored.
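To make the comparison concrete, the sketch below shows one plausible way to compute per-question difficulty and persistence metrics from raw student-question interaction records. This is an illustrative assumption, not the researchers' actual pipeline: the record format, field names, and metric definitions here (difficulty as the share of incorrect first attempts; persistence as the share of initially incorrect students who eventually answer correctly) are hypothetical stand-ins for the kinds of measures the studies describe.

```python
# Hypothetical sketch: assumes interaction records shaped like
# {"question_id": ..., "student_id": ..., "attempt": ..., "correct": ...}.
# The real study's data schema and metric definitions may differ.
from collections import defaultdict

def question_metrics(interactions):
    """Compute per-question engagement, difficulty, and persistence
    from raw student-question interaction records."""
    by_question = defaultdict(list)
    for rec in interactions:
        by_question[rec["question_id"]].append(rec)

    metrics = {}
    for qid, recs in by_question.items():
        students = {r["student_id"] for r in recs}
        first_attempts = [r for r in recs if r["attempt"] == 1]
        # Difficulty: share of students answering incorrectly on the first attempt.
        difficulty = sum(not r["correct"] for r in first_attempts) / len(first_attempts)
        # Persistence: among students wrong at first, share who eventually get it right.
        wrong_first = {r["student_id"] for r in first_attempts if not r["correct"]}
        eventually_correct = {r["student_id"] for r in recs if r["correct"]}
        persistence = (
            len(wrong_first & eventually_correct) / len(wrong_first)
            if wrong_first else 1.0
        )
        metrics[qid] = {
            "engagement": len(students),  # distinct students who attempted the question
            "difficulty": difficulty,
            "persistence": persistence,
        }
    return metrics
```

With metrics like these computed separately for AI-generated and human-authored questions, the two groups can be compared statistically; the studies report finding no evidence of a difference between them.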

In 2018, VitalSource acquired learning and data analytics platform Acrobatiq, Inc., expanding VitalSource’s ability to harness data science and artificial intelligence to enable greater student achievement. Since then, the company has been exploring how to leverage data science and AI across its platforms, enhancing the efficacy of its suite of tools and services and ultimately supporting higher academic achievement for all students.

Source: Businesswire