We seem to keep circling back to the same questions about AI and assessment. Can students use it? If they can, how should they acknowledge it? If they cannot, how do we make that clear?
These questions matter. Students need clarity, and staff need confidence that expectations are being communicated consistently. Clear guidance at the module level is an important step forward.
But I wonder if we are focusing too much on the point of submission, and not enough on the weeks of learning that come before it.
HEPI and Kortext’s 2026 Student Generative AI Survey found that 95% of students use AI in some form, and 94% use generative AI to help with assessed work. It also found that students are increasingly using AI for core academic tasks, including explaining concepts, summarising academic material and structuring ideas.
That does not mean every student is using AI well. It means the question has changed. We no longer need to ask whether AI is part of students’ study habits. It clearly is.
The more important question is whether we are helping students use it in ways that support learning, rather than quietly outsourcing the thinking we want them to develop.
Students need practice, not just permission
A statement on a module can explain what is permitted in an assessment, but it cannot teach students how to use AI well. It cannot show them how to compare responses from different tools, when to use AI to explore a topic, when to switch into a more Socratic study mode, or how to recognise when AI is shaping their thinking before they have had a chance to form their own view.
If AI only appears in a module as a rule, a warning or a line in the assessment guidance, students may learn that it is something to manage quietly.
That does not mean students are acting dishonestly. Many will simply be uncertain. Some may not know where the boundaries are. Some may feel anxious about admitting they have used AI at all. Others may use it regularly but never discuss whether it helps them learn.
This is where we risk creating a hidden curriculum around AI. Some students will develop confidence through trial and error, informal conversations with peers, or experience outside their course. Others may be left behind, not because they are less capable, but because they have had fewer opportunities to practise.
That does not feel fair, and it does not build trust.
One useful place to address this is in seminars. This does not mean simply showing students an AI-generated answer and telling them whether it is good or bad. The stronger approach is to get students using AI themselves in a structured way.
For example, students could enter the same prompt into different AI tools or models and compare the responses. Which answer is more useful? Which is more accurate? Which sounds convincing but misses something important? Which response feels polished but does not really engage with the discipline?
They could then adapt the prompt. What happens if they add more context, ask for a particular theoretical perspective, or ask the tool to challenge its own answer?
Students quickly see that changing the prompt, the tool or the context changes the answer. More importantly, they see that they still have to judge whether the answer is any good.
AI should not always be the first voice in the room
There is another issue here worth paying attention to.
If students always use AI at the start of a task, it can shape how they think about the topic before they have had a chance to form their own view. The tool might foreground particular themes, ignore others, or present one way of understanding the issue as if it is the obvious starting point.
That does not mean students should never use AI early in their work. Sometimes it can be helpful for mapping a topic, generating initial questions or suggesting search terms. But students need to understand that using AI first is not neutral. It can influence the frame they then work within.
This is not a criticism of students. Similar concerns have been reported in professional contexts too. A recent study in NEJM AI found that physicians showed automation bias when exposed to incorrect LLM recommendations, even when they used the tool voluntarily and had prior AI training. The point is that all of us need practice in recognising when a confident AI response may be shaping our judgement.
This could become a useful seminar activity in itself. Before using AI, students might first write down what they already think the topic involves, what questions they would ask, and which sources, theories or perspectives might matter.
They could then ask AI the same question and compare the response with their own starting point.
What did the AI foreground? What did it ignore? Did it change how they understood the topic? Would they have approached the task differently if they had started with the reading, the data, or their own ideas?
Sometimes, the most important learning design decision is not what prompt students use, but whether AI should be the first voice in the room.
The learning goal should shape the AI use
There is not one correct way for students to use AI. The right approach depends on what we are trying to help them learn.
If the goal of the seminar is to explore a topic, survey different viewpoints or generate initial questions, then a standard prompt-response approach might be useful. Students might ask AI to suggest search terms, outline different perspectives or identify areas of disagreement. The learning then comes from checking and refining those outputs against credible sources and disciplinary knowledge.
If the goal is to understand a new concept, theory or method, a more Socratic approach may be better. Instead of asking AI to explain the concept and stopping there, students can ask it to test their understanding, challenge their assumptions, give them examples to interpret, or ask follow-up questions until they can explain the idea themselves.
For example, a student might start with:
Explain this concept in simple terms and give me an example.
That might help them get started. But they could then switch mode:
Now act as a tutor. Don’t explain the concept again. Ask me questions to test whether I really understand it. If my answer is weak, ask a follow-up question rather than giving me the solution straight away.
Those two uses of AI support different kinds of learning. Sometimes students need an explanation. Sometimes they need to be challenged. Sometimes they need to compare responses. Sometimes they need to step away from AI altogether and think through the problem themselves.
That is the judgement we should be helping them develop.
This also helps with assessment
There will still be assessments where AI use should be limited. There will still be tasks where independent work matters. There will still be situations where students need to show what they can do without AI support.
But those boundaries are easier to understand when students have already seen AI being used openly and critically as part of their learning.
An academic can say:
We have used AI in seminars to test ideas, compare explanations and critique responses. But in this assessment, I need to see how you independently build an argument from the sources, because that is the skill being assessed.
That is very different from simply saying:
AI is not permitted.
The first explanation gives students a reason. It shows that the restriction is linked to the learning outcome and the purpose of the assessment, rather than a general assumption that all AI use is dishonest.
This also matters beyond university. Students are likely to enter workplaces where AI tools are already being used to draft documents, analyse information, summarise meetings, generate ideas and support decision-making. Employers will not just need graduates who can use AI quickly. They will need graduates who can use it responsibly, explain their choices, recognise its limitations and understand when human judgement matters.
That kind of capability will not develop from an assessment statement alone. It needs practice, and it needs context.
Academics should not have to work this out alone
It is also important to say that academics should not be expected to do this on their own.
Embedding AI into a module is not as simple as adding a chatbot activity to a seminar plan. It requires judgement about the purpose of the module, the skills being developed, the assessment design, the risks of over-reliance, and the kinds of AI use that make sense within a particular discipline.
Staff need practical support, examples they can adapt, and space to talk through the grey areas. In some cases, this may include AI specialists, learning designers or digital education teams helping to design or co-run seminar activities with academic colleagues.
If we want responsible AI use to become part of learning, we need to support it through teaching, not just policy
Clear assessment statements still matter. Students need to know where the boundaries are. But if that is all we do, we risk missing the bigger educational opportunity.
Students are already using AI. The question is whether they are learning to use it well.
That is how we move from hidden AI use to taught AI use.