As educators, we often hear about the importance of having discussions with our students regarding the use of generative artificial intelligence (AI) in their studies. However, it can be challenging to know exactly how to frame these discussions, what advice to give and where to start.
This is understandable because the use of generative AI in education is an ethical minefield. For example, with AI tools (such as ChatGPT) able to produce text, video and images based on user-given prompts, there are numerous implications for how these tools could and should be used in student assessments. Nonetheless, generative AI is here to stay and we cannot avoid or delay adapting our approaches to accommodate it. This starts with the conversations we have with our students on how they can use AI responsibly in their studies.
This blog post aims to provide practical guidance for teachers on how to frame these discussions with students, so that we can have productive and dynamic conversations. By engaging students in a collaborative approach we can create meaningful conversations that empower students to navigate AI tools while maintaining academic integrity.
Embrace the learning curve
One indisputable observation about generative AI is its rapid, almost dizzying pace of evolution, with hundreds of new AI tools being added to websites like Futurepedia each day. If you’re feeling a bit overwhelmed, guess what? You’re not alone.
But here’s the thing – it’s okay not to know everything. Your students won’t expect you to be an AI expert, and you shouldn’t put that pressure on yourself either. What’s important is that you are there, ready to embark on this AI learning journey with them.
In admitting we’re not experts in generative AI, we show our students that it’s okay not to know everything and that learning is a continuous process. Embrace it and turn it into a strength. It’s okay not to have all the answers; what matters most is involving students in discussions about how and where generative AI can be used.
Conversations should be dynamic and collaborative
Having these conversations with your students will also provide you with a great opportunity to turn the classroom into a collaborative learning space.
By facilitating collaborative discussions around generative AI, you will hear first-hand about students’ lived experiences of AI and be able to encourage them to explore new AI tools, whilst reminding them of the importance of critical thinking in relation to how they apply these tools to their studies.
Working collaboratively with students to generate new ideas in relation to the use of AI will also enable us to use students’ views to inform Kent’s institutional narratives and guidelines around how we adapt to AI. We should remind students of their influence, as part of the University community, to contribute to our own institutional thinking.
Clear expectations are needed
Whilst collaborative conversations and partnership working will benefit both educators and students, there will also be times when we need to present our students with unambiguous guidelines.
So what advice should we as educators contribute to these collaborative discussions, and what guidelines should we ask students to follow to help them navigate the murky waters of generative AI use? This starts with setting clear expectations for each module and task.
Consider the use of generative AI tools like ChatGPT, Bing or Bard. When used responsibly, these tools can be enormously helpful, providing initial structures for tasks that students can then modify and improve upon. However, misuse can lead to ethical breaches.
For example, students can use AI tools to explore their topics, check their understanding and prepare assignments, but where such use is not permitted, students should be told clearly that they cannot include AI-generated materials in their final submissions. It is critical that students understand the rules in place, as these are likely to differ from module to module and even from task to task. The e-learning team at the University of Kent are exploring the inclusion of a statement in Moodle to make clear to students whether the use of AI tools is permitted.
Where the use of AI tools is permitted, we may choose to ask students to clearly acknowledge their use and the extent of their application. For instance, students might state:
“The use of [insert AI system(s) and link] to [specific use of generative artificial intelligence]. The prompts used include [list of prompts]. The output from these prompts was used to [explain use].”
Ensure students understand the shortcomings of AI
There are many misconceptions surrounding AI, and it is crucial to correct these so that students can understand how to use the tools responsibly.
AI tools like ChatGPT, Bing and Bard are not primary sources of information. Instead, they are language processing models. Whilst they can provide useful information and help generate ideas, they do not independently verify facts, understand context, think, or feel like humans. What this means is that the text generated by AI tools can sometimes be factually incorrect. Therefore, it is crucial that students critically review all outputs generated by these tools rather than blindly accept the content as correct.
One way that might help to communicate this to students is to relate ChatGPT to the well-known site Wikipedia. You could suggest that students imagine ChatGPT as their first stepping stone, similar to how they might use Wikipedia, providing a broad overview of a topic to help them understand basic concepts. However, when it comes to academic assignments, higher standards are expected.
Just as citing Wikipedia will not score well in an assignment, using ChatGPT as a primary source will not win top marks either. Both serve as useful springboards for research, but they are not replacements for in-depth, peer-reviewed, scholarly sources. And copying and pasting content is never acceptable; students should already know this, but it won’t hurt to remind them in the context of AI-generated material.
AI and academic integrity
Finally, teaching students about AI in the context of academic integrity, as well as more generally, is vital. We need to create environments that emphasise and nurture academic integrity, reducing motivations to breach it. Some teaching points could include the importance of claiming authorship only over original work, transparency in how student work is produced, and the shared values of academic integrity among university communities.
The integration of generative AI into the classroom isn’t an easy topic to navigate, but through open discussion, clear guidelines, and support, we can ensure students are prepared to handle generative AI in an ethical and constructive manner.