The Hidden Work Behind Institutional AI Adoption

Universities are increasingly exploring what it might mean to provide institutional access to artificial intelligence. New tools are emerging rapidly, and many institutions are now experimenting with how these technologies might support learning, teaching and professional practice.

Recently, we announced that we will be rolling out ChatGPT Edu at the University of Kent. As we have worked through this project, one thing has become clear: even when a university provides access to a powerful AI platform, adoption is not simply a case of selecting a tool and switching it on.

From the outside, the process can appear relatively simple: select a platform, negotiate a licence, announce access and begin supporting adoption.

In practice, selecting and procuring the tool, while far from straightforward, is only one piece of a much larger organisational effort. The visible technology is the surface layer of that work; behind it sits a much broader set of activities spanning governance, infrastructure, capability building and ongoing operational support. Based on our recent experience preparing for this rollout, there are some useful lessons here for the sector.

Why AI rollout often looks easier than it is

When universities announce new AI provision, the most visible elements are usually the platform itself, the institutional communications, and the initial round of guidance for staff and students.

These moments matter. They signal that the institution is engaging seriously with AI and that staff and students will be supported to explore these technologies in structured ways.

However, announcements like these can also create the impression that the main work is complete once access has been enabled. In reality, these visible steps represent only a small part of what is required to adopt AI responsibly and sustainably across a large organisation.

For those involved in the implementation, the announcement often marks the point where the work becomes visible, rather than the point where the work begins.

The work behind the scenes

Behind any institutional AI rollout sits a significant amount of cross-functional activity. Much of this work takes place quietly in the background, but it is essential if institutions want to introduce AI in ways that are responsible, secure and sustainable.

Establishing the right governance structures is usually one of the first steps. AI adoption affects many different parts of a university, including education, research, IT, legal, procurement, data protection and communications. At Kent, this has meant setting up both a university-level policy group involving senior stakeholders from across the institution and a number of working groups focused on different aspects of implementation. These groups coordinate activity, bring together expertise from across the university and ensure that both risks and opportunities are considered from multiple perspectives.

Alongside governance structures, universities need to develop clear institutional principles for the use of AI. At Kent, we have recently published our own principles to guide how these technologies are used across education, research and professional services. These principles anchor decision-making in the values and priorities of the institution. Without them, it becomes much harder to maintain clarity about why the university is adopting AI and what outcomes it is trying to support.

This becomes particularly important given the pace at which AI technologies are evolving. Traditional policy development is necessarily careful and deliberative, but that can make it difficult for policy alone to keep up with rapidly changing technologies. Institutional principles, therefore, provide a stable foundation for decision-making, helping institutions stay aligned with their values even as specific tools and capabilities continue to evolve.

Procurement also plays a crucial role in this process. In many institutions, procurement processes can take time, particularly when dealing with emerging technologies where risks and requirements are still evolving. At Kent, our procurement colleagues have been instrumental in helping us navigate this process efficiently while ensuring that the appropriate diligence, governance and safeguards are in place. Their support has helped the university move at pace while maintaining robust processes.

Alongside governance and procurement, there is also a substantial amount of technical groundwork. Depending on the approach taken, this may include domain verification, identity and access management, workspace configuration, security review and early thinking about potential integrations with existing systems. When this work is done well, users rarely notice it. When it is rushed or under-resourced, the problems tend to surface quickly.

Organisational readiness is equally important. Providing access to an AI tool does not automatically translate into confident or effective use. Staff and students need guidance, opportunities to develop their understanding and space to explore what responsible use looks like within their own contexts.

For institutions, this means investing time in developing AI literacy, creating clear guidance and establishing support models that can respond to questions as new use cases emerge. Without this layer of capability building, uptake can be uneven and confidence can vary significantly across different parts of the university.

The work that begins after launch

Perhaps the most underestimated aspect of institutional AI adoption is what happens after launch.

AI platforms are evolving rapidly. New features appear frequently, and the ways in which staff and students use these tools continue to expand. As a result, institutional questions multiply and demand for support grows quickly.

At Kent, we have not yet rolled out access to the whole institution, so we expect to learn much more as this work progresses. However, experience from the preparation phase already makes one thing clear: in many ways, the real work of AI adoption begins once access is switched on.

Teams quickly find themselves updating guidance, reviewing new functionality, monitoring emerging risks and responding to an expanding range of requests from across the institution. What initially looks like a contained project soon becomes an ongoing institutional capability that requires sustained attention and coordination.

We will share further reflections as our rollout progresses.

What universities need to plan for

Recognising the hidden work behind AI adoption has important implications for institutional planning.

If this work is underestimated, timelines can quickly become compressed and small teams can become overstretched. Ownership can become fragmented across the institution, and the pressure to move quickly can sometimes outpace the development of appropriate governance and support structures.

There is also a broader strategic point. If AI adoption is treated primarily as a software procurement exercise, institutions risk underinvesting in the capability, governance and support that ultimately determine whether adoption succeeds.

A more realistic approach is to treat AI as part of core institutional infrastructure rather than as a short-term technology initiative. This means investing not only in platforms but also in capability building, planning for continuous iteration rather than one-off rollout, and involving cross-functional teams early in the process.

Most importantly, institutions need to keep educational value at the centre of these conversations. AI tools are powerful, but their value ultimately depends on how they are used to support learning, teaching, research and professional practice.

Final reflections

Artificial intelligence has significant potential to support higher education. However, realising that potential depends less on how quickly we can switch on access to new tools and more on how thoughtfully we build the institutional foundations around them.

As universities continue to invest in AI, it is important to remain clear about the underlying purpose. The goal is not simply to provide access to powerful tools, but to support staff and students in ways that enhance learning, strengthen professional practice and help prepare students for a world of work increasingly shaped by AI.

The universities that benefit most from AI are unlikely to be those that focus only on procurement speed, but rather those that invest steadily in governance, capability and educational purpose alongside the technology itself. Just as importantly, they will remain guided by their institutional values and principles as these technologies continue to evolve.
