Matt Bull and I braved the ice and snow to go up to London for the XCRI Programme Meeting last week. Only they didn’t really have any ice or snow in London. It was a well-attended event, and for much of the day we split into small groups to explore possibilities for collaboration and mutual support. There was less overlap than expected in the systems currently used for storing course data, but overall participants seemed pretty relaxed and confident about producing the desired XCRI feeds.
As at Kent, most institutions involved – a mixture of FEIs and HEIs – were also intending to review current workflows for producing both web-based and paper-based course information. It was more or less universally agreed that these work packages might prove the more problematic, so it was good to hear people’s ideas and to feel, at least, that we were not alone. The aims of the project make sense to all our stakeholders – we share the same goals – but the path we take to implement changes, and the way we schedule the work, will need careful planning and buy-in from everyone likely to be affected.

We will end up with a more efficient, less fractured way of working, with fewer disconnected data stores and less duplication. But the day-to-day work of those involved in compiling, editing and publishing the data has to continue during the development process. The project will inevitably have an impact on this, though we will strive to keep it to a minimum. It is vital that the communication channels stay open and that we design the work packages well. We are going to use an Agile methodology with a time-boxed iterative approach, regular reviews and a commitment to rapid, flexible response to change. It is early days for the XCRI project, but I know from previous experience that work put into agreeing the way forward with stakeholders, and the adoption of a robust working framework, will pay dividends later.
With this in mind, our first major task will be to create a detailed map of current data stores and data outputs, and to analyse the processes that get the data from one to the other. Then we should know who we need to talk to, as well as what we need to do.