User feedback for the Raptor Application

Library staff who have been part of a group assisting with the evaluation of Raptor were recently asked to complete a short survey on their experiences with the application. I am now analysing the results – that sounds rather grand, doesn't it? – or perhaps I should just say I am having a look at what they said. The test group was small, but its members were picked from library staff who would be likely to use Raptor if it is adopted as a service and who would benefit from the data Raptor can provide. From my own experience with Raptor I think it is obvious that useful data is available and useful charts can be produced, either directly from within the application or by exporting data into Excel. However, I don't think I am best placed to judge this, as I have been working fairly regularly with the application for a few months now. On the other hand, I didn't think it would be a useful exercise simply to sit someone in front of Raptor for the first time and ask them for an opinion on how useful it is. So we worked on a compromise.

At the end of last month I ran a short workshop for library staff, going over the main features and operation of Raptor and allowing time for a bit of experimentation and questions. Raptor is installed as a test service at Kent, and once placed in the user group these library staff were able to access the application using their usual network username and password (via LDAP) from anywhere they were working. I asked the test group to try to find time to experiment with the application over the following weeks, and then sent out a link to an online survey asking questions on the user interface, the customisation options and the usefulness of the reports available. The survey featured some straightforward questions but also asked the users to produce a graph showing authorisations to a specific resource over a specified period. They were then asked to change the parameters and sort order for this graph and finally to export a PDF of it. The survey asked for comments as well as ranked responses on ease of use and usefulness.

This was a very small test group, so we should be wary of too strict an interpretation of the results. In general, however, the testers gave positive feedback on Raptor. Suggestions were made for improvements to the interface, which will be fed back to the developers. At Kent, response times were not always great, though this may not be the fault of the software. More frustrating was the lack of feedback during the period between an update being requested and Raptor producing – or sometimes failing to produce – the requested graph. Users did not know whether 'anything was happening or not'. When updates failed it was not always apparent that this had happened, as the previous version of the graph remained visible and the Processing Status is not particularly prominent. These are minor issues which I am sure can be eradicated in future releases – Raptor is still in the early stages of development. Overwhelmingly, the group considered Raptor to be a useful tool which would assist them in their work and planning.
