By Lisa Gray, Senior Consultant, Learning, Teaching and Assessment at PebblePad
The emergence of language generation tools like ChatGPT in Higher Education has sparked a debate about their impact, particularly on assessment. While some may see these tools as a challenge to conventional teaching methods, it’s important to view them as a chance to improve and transform assessment practices.
While AI tools like ChatGPT undoubtedly prompt a fundamental shift in how knowledge can be acquired and disseminated through their ability to generate human-like text, it’s important to remember that these tools do not understand concepts and contexts as we do. They predict which words are likely to follow one another, creating responses that we, as readers, assign meaning to.
But regardless of the mechanism by which they create such plausible responses, their potential should not be underestimated, particularly the disruption they could bring to assessment practices.
So, what are the implications of the arrival of such tools for the assessment landscape? Is it the end of essays as we know them and a return to face-to-face exams to avoid the risk of collusion and dishonesty? Or another factor tipping the balance for assessment reform in higher education and an opportunity to rethink how we assess our students to better prepare them for an uncertain and ever-changing future?
We argue here for the latter.
Navigating the evolving landscape of assessment
Much has been written over the years on the need for universities to rethink assessment practices, with questions over whether traditional exams and written essays truly assess the knowledge, skills and behaviours desired as course outcomes, and whether they best prepare students for future success.
It’s certainly true that the skills required for a workplace being transformed by technology are ever-changing. Machines are taking over not just automated tasks, but also those requiring thought and decision-making. And there is clear evidence that employers increasingly value skills, attitudes, aptitudes and behaviours over degree results. So what does this all mean for how we prepare and assess our learners for success? Particularly in the light of tools that can generate such sophisticated responses to the questions posed of them?
The case for improving assessment design
We would argue, now more than ever, for a move towards more authentic practices, assessing not just knowledge and facts, but the application of knowledge in context.
By taking the knowledge students have gained (through their course and other routes) and asking them to do something with that knowledge relating to the context of their studies and potential future career environment, we are asking students to engage deeply, be curious, think critically, and analyze and solve problems. These higher order skills are essential for success beyond their educational experience, and such approaches better reflect the ways they will be assessed as they continue through their careers.
By moving to these more authentic approaches, educators can create more meaningful and relevant experiences for students, while also gaining a more accurate understanding of their knowledge and skills.
We can also ask students to make sense of and think deeply about their learning as part of the assessment experience: planning how they will approach the task, exploring and critically analyzing their sources and materials, reflecting regularly on their learning and planning their next steps, and, importantly, engaging in dialogue with teachers, advisers or mentors along the way. In doing so, we maximize the learning opportunity while minimizing the possibility of cheating.
And we can go further. By engaging students in discussions about learning outcomes – asking them to contextualize these outcomes and co-design learning activities and approaches to evidencing that learning – we are not only developing self-regulating lifelong learners able to conceptualize original ideas and think critically about the material, but also making it increasingly difficult for cheating to take place.
And why would students want to cheat in the first place? It’s important to remember that students rarely set out to cheat; cheating is often the result of a lack of clarity or understanding around the purpose of a task. If we engage students in conversations about the purpose of assessment (i.e., enhancing their own development and preparing them for success), and make our expectations around collusion and plagiarism clear, cheating becomes much less likely. Developing better ‘assessment literacy’ in both staff and students is an essential part of this picture.
What needs to be in place?
Changing assessment practices at scale is no easy task, particularly now with many university staff still recovering from the demands of teaching through the pandemic. But with the advent of these new tools presenting a very immediate problem for many current assignments, it is a task that can no longer be avoided.
Support will be needed to help busy staff understand the potential opportunities, risks and challenges these tools present, and to design the right solutions for them and for their students. Learning designers, educational developers, library staff and students themselves (as well as relevant professional associations) should be part of that journey – bringing to bear their expertise and experience to ensure the outcomes are the right ones.
A considered approach
It’s clear that a considered approach is necessary when it comes to the rise of language generation tools like ChatGPT in higher education. These tools present both potential benefits – for example, generating content and freeing up time to focus on the development and assessment of crucial skills – and potential risks, all of which require further examination.
These tools will only continue to advance and become more pervasive. So as the debate on their impact in higher education continues, it’s crucial to address the critical challenges these tools pose to traditional assessment methods now. Let’s use this opportunity to consider how we can provide more meaningful experiences for students, that better prepare them for success.
Interested in learning more? Register now for our exclusive webinar.
Lisa works directly with PebblePad’s executive team to bring value to the organisation’s customer base of nearly 150 HE institutions around the world. She has more than two decades of experience in the sector, having spent 17 years with renowned digital solutions and research provider Jisc. PebblePad – The Learning Journey Platform