Most universities now provide students with access to AI tools in some form. Increasingly, however, we are seeing institutions adapt how those tools respond, steering students towards questions, reflection, and guided discovery rather than direct answers.
On the surface, this makes sense.
But it raises an important question. Are we teaching students to think, or deciding how they are allowed to think?
A growing number of institutions are exploring what might be described as “Socratic AI”. Rather than providing direct answers, these tools ask follow-up questions, break problems into steps, and prompt students to explain their reasoning. The intention is clear and aligns with well-established principles of effective learning: if students are guided to think rather than simply receive answers, that learning is more likely to stick. It reflects approaches in which students work through problems themselves rather than being handed solutions. I am strongly in favour of this. It is essential for meaningful learning and should play a central role in how we design learning experiences.
The challenge emerges when this approach shifts from being an option to being the only option. If a student asks for a clear explanation and the system refuses to provide one, instead responding only with questions, we are no longer just supporting learning. We are shaping, and potentially restricting, how that learning takes place. What starts as a pedagogical choice becomes a form of control embedded within the technology itself.
What students are actually telling us
In conversations with students at Kent, and with colleagues across the sector, a consistent and perhaps unsurprising tension has emerged. Students are not unaware of the risks associated with AI. Many are concerned about over-reliance, surface-level engagement, and the potential impact on their thinking. At the same time, they want flexibility. They want to be able to move quickly when they need to, to get a clear explanation when they are stuck, and to decide how they use these tools depending on the task in front of them. They are not asking for less thinking. They are asking for more agency in how that thinking is supported.
There is a risk here of over-correcting. We have moved quickly from concerns that AI makes it too easy for students to get answers, to solutions that imply students should not be given answers at all. But learning is rarely that binary. There are moments where a well-timed explanation can unlock understanding, and others where being prompted, challenged, or slowed down is more valuable. Locking students into a single mode of interaction, even one grounded in sound pedagogy, risks replacing one limitation with another.
There is also a very practical risk that is easy to overlook. If institutional AI tools are too restrictive, students are unlikely to stop using AI altogether. They will simply use other tools that do what they need, in the way they want. In that sense, enforcing a particular approach within university-supported tools may not lead to better learning, but instead push students towards unsupported alternatives. This creates a gap between the learning behaviours we intend to encourage and the ones that actually take place.
From control to choice
Perhaps the goal, then, is not to control how students use AI, but to help them make informed choices about how and when to use it. There are different ways of working with AI, each of which can be valuable in the right context. Sometimes a student may need a direct explanation or example to build foundational understanding. At other times, a more guided, Socratic approach may help them work through a problem and develop deeper reasoning. There are also opportunities for reflective use, where students critique, compare, or improve AI-generated outputs. The important point is not deciding which of these is correct, but helping students understand the trade-offs between them.
For example, a student preparing for an exam might begin by asking AI for a clear explanation of a concept they do not understand. Once they have a basic grasp, they might switch to a more Socratic approach, asking the AI to test them with questions or guide them through applying the concept to a new scenario. Later, they might ask the AI to generate a response to an exam-style question and then critique it, identifying what is missing or how it could be improved.
If a student only uses a prompt–response approach, there is a risk they focus on producing answers rather than understanding the underlying concepts, effectively gambling that something similar will appear in the exam. Equally, if they are limited to a purely Socratic approach, where they are continually prompted but never given a clear explanation, this can become frustrating and, for some students, increase anxiety rather than support learning.
In practice, effective use of AI is likely to involve moving between these modes. What matters is not committing to a single way of using AI, but knowing when each approach is appropriate.
This has implications for how we design learning. Rather than embedding a single, “approved” way of using AI into our tools, we might design activities that require students to move between different approaches. We can ask them to explain how they have used AI and why, to reflect on what was helpful or limiting, and to identify where AI supported or hindered their understanding. In doing so, the focus shifts from controlling behaviour to developing judgement.
If we want students to develop strong thinking skills, we need to do more than shape the tools they use. We need to help them shape their own approach to using them. That includes creating space for different ways of engaging, recognising that not every task requires the same level of depth, and supporting students to make decisions about how they learn.
Socratic approaches to learning have real value, and AI gives us new ways to scale and support them. But if we enforce them as the only way of interacting, we risk undermining the very thing we are trying to develop. The aim should not be to decide how students are allowed to think, but to help them understand how to think, and when.