Library Camp London (#libcampldn) update

Update

A quick update about Library Camp London (#libcampldn) as I have been asked many questions on Twitter and by email this week.

We released general tickets on the afternoon of Monday 10th December. I did this at about 2 pm and the response was spectacular: within about an hour we’d ‘sold’ them all and were building up an Eventbrite waitlist. I released a further 30 tickets to clear the waitlist.

We now have 100 library campers registered, including holders of the advance tickets we released earlier for library and information students.

Thank you for your interest in Library Camp London. The excitement and buzz on Twitter and offline has been wonderful, especially from people wanting to be involved in organizing and talking about their session ideas. What next?

Waitlist

Although you’ve missed the first ticket release, you can register to join the waiting list for Library Camp London tickets.

If you are interested in attending Library Camp London, please join the waitlist: when we release more tickets, those already on the waitlist will be offered them first.

Venue

I am working with my employer (and Library Camp host) Senate House Library, University of London to make more space available for Library Camp London.

We want Library Camp London to be as inclusive and diverse as possible. In particular, although we’re hosting the event at an academic library, it’s not focused on academic libraries or higher education. For this reason we’re making a case to make the event bigger.

Ticketing

There will be at least one further ticket release for Library Camp London. This is likely to be in January 2013.

Please watch for announcements from me (@preater) and the other organizers Gary (@ggnewed) and David (@davidclover) on Twitter.

Links

Free and Open Source Software and distributed innovation

‘The battle of the library systems’

On 28 November Senate House closed early for the University of London Foundation Day, our annual celebration of the grant of our royal charter on 28 November 1836. As the library closed down I made the short journey to Cilip HQ to attend the “Battle of the Library Systems” event organized by Bic.

This event was a panel debate between two sets of speakers, one in favour of Open Source software (OSS) and one in favour of proprietary software. I was speaking in the Open Source “blue corner” alongside Dave Parkes of Staffordshire University and Nick Dimant of PTFS Europe. Sadly, Mark Hughes of Swansea University was ill and unable to attend and speak as planned. In solidarity, the Open Source team shared the presentation of Mark’s slides between us.

The house motion was:

Open source is about distributed innovation and will become the dominant way of producing software.

This is a quote from Talis – slightly modified from the original – from the Jisc and Sconul LMS study (Sero Consulting, 2008).

I will say a little about the arguments I made in favour of the house motion and what I thought were some strengths and weaknesses with the proprietary team’s arguments. Mick Fortune summed up as a guest speaker and I’d recommend reading his blog post ‘BIC Battles – Open Source or Proprietary?‘.

My argument

I opened by explaining our situation. Senate House Libraries and the colleges that make up the Bloomsbury Colleges group recently made a decision in principle to select Kuali Open Library Environment (Ole) as our next library management system. We have chosen an OSS system which will be run on a shared services model by the University of London.

Why is Open Source software a good fit for higher education?

I explained I prefer the older term Free Software to Open Source as it’s conceptually broader. Thinking in terms of one dimension – software development with access to the source code – sidelines the underpinning philosophy of community, sharing, and respecting software users’ freedom.

The audience was mainly from academic libraries, and I argued that our industry, higher education, has a culture of sharing and collaboration, and that librarianship is a collaborative profession. The same point was made by the proprietary software panel, but I go further and argue this therefore makes the software a good fit for us.

Kuali Ole is a library services platform being developed collaboratively by universities and their software development partners specifically to meet the needs of academic research-focused libraries. Ole is an enterprise-level system that we intend to use for business-critical services within our consortium. It will be cloud-hosted and managed collaboratively to provide a stable and trustworthy service – about as far as you can get from the idea of a keen systems librarian installing a Linux distribution on an old server and deciding to ‘give an Open Source LMS a whirl’.

I argued the key differences with Ole are that:

  • It is a true library services platform rather than a traditional library management system
  • As it is collaboratively-developed OSS we have the possibility of developing the software to meet our needs

It is the Free or Open Source licensing that is important here, as it builds a strong position of sharing by default into the development model used in the foundation. Effectively, sharing and collaboration are baked in to the product and the processes used to develop it.

Proprietary software suppliers and Open Source software

Among the strongest arguments in favour of OSS for library systems is the range and variety of OSS used by proprietary suppliers themselves. Examples are most prevalent in next-generation discovery engines where Apache Solr and Lucene are used extensively, but in other library systems Postgres, Apache Tomcat, and of course the Apache http server are used widely.

I think proprietary suppliers use OSS because it represents best-of-breed software that is stable and well-supported, and importantly flexible and free to use. It is licensed in a way that allows development and may be used for any purpose. This is why I emphasised freedom or liberty initially: while proprietary software suppliers enjoy the benefits of OSS themselves, they’re not so keen on passing those freedoms on to us, the libraries that buy their software and support services.

Suppliers’ use of OSS was acknowledged by the proprietary team. Jim Burton of Axiell mentioned extensive use of OSS throughout the company with an estimate of something like 500 different pieces used in their processes – though I expect this includes things like development tools and that the amount of OSS in their finished products is much less.

It is difficult for software suppliers selling systems based on Open Source to argue against Open Source. In using it in your own products you are vouching for it – and also undermining your arguments against it. For me this ubiquity in use and development is a compelling argument in favour of Open Source becoming the dominant way of producing software in the future.

Choosing software pragmatically

Jim made what I felt was the best argument against OSS for a complete library system, one directly relevant to my own experience in higher education: the license is of secondary concern if the software does what you want and meets your needs. That software has an Open Source license doesn’t mean it’ll be a good fit for a given specification – relevant in a software ecosystem with relatively few complete OSS library systems as options.

I take from this that in practice our assessment process should lead us to choose pragmatically based on need rather than buying something because ‘it has a badge’. For many libraries that choice would mean proprietary software as best fit to a specification: perhaps an LMS with open and standards-compliant APIs allowing development work, perhaps cloud-hosted, perhaps with developer communities, perhaps itself built from OSS?

Distributed innovation

I argued that, as a software development method, Open Source and open collaborative development make sense in our increasingly complex and networked world. I borrowed a term from David Weinberger here: nowadays knowledge has become “too big to know” (Weinberger, 2012), something particularly evident in higher education given the complexity and sheer scale of research data.

It is a distributed and networked development approach that has created successful projects such as the Debian GNU/Linux distribution, and indeed the Linux kernel itself. One reason for the success of these projects is networked expertise: the ability to surface skills and knowledge from a globally-distributed community of developers. Applying this to library systems software, I argued that suppliers building systems on a closed approach cannot respond to our changing needs as well as an approach based on networked expertise, with ‘peaks’ of local knowledge that best understand our own situations and requirements.

The proprietary team emphasized software suppliers’ wish to listen to their customers. I don’t doubt their honesty in this at all, and I think engaging customers and encouraging more open development, such as developer communities, is very welcome. However, I argue any single vendor lacks the depth and breadth of knowledge that we have collectively in our own institutions, and the scale that can be brought to bear by networked collaborative development. For this reason, the future is necessarily an open one.

References

Weinberger, D. (2012) Too big to know. New York, NY: Basic Books.

Sero Consulting Ltd (2008). JISC & SCONUL library management systems study. Available at: http://www.sconul.ac.uk/news/lms_report/lmsstudy/ (Accessed: 6 January 2022).

Selecting a Free and Open Source LMS – Kuali OLE at the University of London

Senate House Libraries and the University of London colleges that make up the Bloomsbury Colleges group recently made a decision in principle to select Kuali OLE as our next library management system (LMS). This project is currently termed the Bloomsbury LMS, or BLMS.

I will blog throughout this project, but for now I wanted to explain a little more about it from a strategic point of view, as the systems librarian at the central University of London.

Significance

For me the significance of the decision in principle is that:

  1. We have chosen a Free / Open Source Software (FOSS) system.
  2. The system will be run on a shared services model by the University of London.
  3. Kuali OLE is a next-generation library management system.

By next-generation I mean Kuali is one of the new breed of systems that are cloud-hosted, based on modern Web services, and engage better with online electronic content. This is the type of system Marshall Breeding terms a library services platform as distinct from the traditional LMS.

Choosing an Open Source system

Strategically it’s important to us that our next system will not be more of the same, both in terms of the conceptual approach to the LMS as a platform and in terms of being developed openly and collaboratively.

Discussing cultural issues, I have argued previously at Library Camp 2012 that the core values of Free Software – the Four Freedoms (Free Software Foundation, 2012) – are in alignment with our professional culture in higher education (HE), and that, within and outside HE, librarianship is a particularly collaborative profession. Kuali OLE is an LMS designed by and for academic and research libraries for managing their library services. I do think that’s significant, but I think what OLE offers goes far beyond the software license being a philosophical best fit for our culture in higher education and libraries.

OLE is an enterprise-level software package that will match the functionality of closed-source library systems, and it exists as part of a suite of enterprise software including, among others, a financial system and a student record system. This is interesting for us as a potential solution to the huge problem we have in higher education of the ‘siloedness’ of our campus-wide enterprise information systems and the difficulty of making them talk to each other.

Importantly, OLE is a true library services platform rather than just a traditional LMS. Virtually any LMS currently on the market can tick all the boxes on the venerable UK LMS core specification, but relatively few move beyond this and also open up the possibility of developing the software to meet our needs. It is FOSS licensing that builds sharing by default into the development model; at the same time, the Kuali governance model addresses traditional weaknesses in FOSS projects and guarantees the production quality of the finished system. The aspects most relevant to us at the University of London are quality and risk management, and a robust testing methodology.

Contact

This post is meant as a brief introduction from my own perspective. If you are interested in the BLMS project and wish to contact us please see the BLMS website.

References

Free Software Foundation (2012) ‘What is Free Software?’ Available at: http://www.gnu.org/philosophy/free-sw.html

Free and Open Source software and cultural change, at Library Camp 2012.

Session underway, participants Tweeting hard. Photograph © Sasha Taylor, used with permission.

On Saturday 13th October I attended the ‘big’ Library Camp 2012 unconference (libcampuk12) at the Signing Tree Conference Centre, Birmingham.

Liz Jolly and I pitched a session on the use of Free and Open Source software in libraries, with a particular focus on discussing the cultural changes or cultural shift needed to develop and sustain the use of such software in libraries, a typically risk-averse environment. This idea came out of a #uklibchat discussion on Open Source software back in July – thanks to Adrienne Cooper for organizing that.

This session was prepared and facilitated jointly. However when I write “I”, “me”, etc. below I am talking about my own views and experience.

Introduction

In the session we asked that Open Source and Free Software be treated as interchangeable terms, close enough in meaning that Library Campers could use either. I realize, and accept, there are objections to doing this. I will refer to FOSS, meaning “free and open source software”, below.

I explained that Open Source is a pragmatic model of software development in which you are allowed access to the source code of the software; however, it – and moreover the older concept of Free Software – is underpinned by a philosophy based around respecting users’ freedom and fostering community. Drawing on this we wanted to open with the “four freedoms” in the Free Software Definition (Free Software Foundation, 2012) and how they tie into our professional culture. This list was written by a computer scientist, so famously it starts from zero!

  1. The freedom to run the program, for any purpose (freedom 0).
  2. The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1).
  3. The freedom to redistribute copies so you can help your neighbour (freedom 2).
  4. The freedom to distribute copies of your modified versions to others (freedom 3).

We argued that in higher education and librarianship in particular, these freedoms are broadly aligned to our own professional culture. Universities have a culture of sharing both internally and externally, and also between those working in the same disciplines across institutions. Furthermore, both within and without higher education, librarianship is a particularly collaborative profession.

However, in the broader cultures of higher education we face various problems. In some ways the Four Freedoms are in opposition to the broader organizational culture we work in. We identified points of tension for universities and libraries as collaborative organizations working within power structures that do not necessarily agree with or support a collaborative approach. This is especially the case in our current political and financial climate, where increased competition between institutions militates, to an extent, against a collaborative culture.

We wondered if perhaps this is mainly a problem between perceived “competitor” institutions. I asked whether anyone finds themselves discussing things more openly with colleagues in sectors or institutions they don’t consider a “threat” or competitor to their own.

FOSS and the culture of libraries and education

Culturally, one starting point is looking at where we still find institutional resistance to FOSS. By this I mean resistance beyond myths such as FOSS implying that you have to “build it yourself” or that “you need to employ programmers”; rather, I mean resistance to FOSS as a concept itself. I have seen some of this in my career in further and higher education, but I would say this attitude is nowadays dying off. Personally I find myself anticipating resistance to FOSS that simply doesn’t materialize – or in many cases I actually find enthusiastic approval for FOSS.

I am sure our experiences here vary widely – certainly buy-in from senior managers is essential, and having one particularly pro- or anti-FOSS manager can make a huge difference either way. Several participants contributed examples from their own public sector experience where projects already in development had been scuppered when they were found to be using FOSS, and explained that they still spend time knocking down some very old-fashioned arguments about FOSS versus closed source, such as needing to “have someone to sue when it all goes wrong”.

There was general agreement that certain sectors are worse at this than others, with libraries in local government and the NHS picked out as particularly difficult: public libraries having to accept whatever systems their authority decides on, with limited or no say, and the NHS wanting to play especially safe.

One contradiction in higher education is that we have a very long history of using FOSS for the services that underpin our systems (the concept of Free Software was born in higher education, when Richard Stallman was at MIT (Stallman, 2010)), but a reluctance to actually use FOSS for campus-wide and departmental systems. What do we mean by this? At a basic level FOSS gives us the building blocks, such as web and database servers and programming and scripting languages, that we need to create software and services. Few of our IT and systems colleagues would object to, for example, using a FOSS web server or content management system – but notice how few FOSS library management systems are deployed in the UK.

As a cultural aspect of this, we asked whether library and education managers have enough in-depth knowledge of the principles of technology, including FOSS, and of how it can benefit their organisation, to successfully govern projects and engage with the wider community. In universities there is a tendency to promote managers on academic excellence rather than strategic management ability, yet these are the people chairing project boards.

One example here is Moodle, a FOSS virtual learning environment. Some argue that while the use of Moodle in higher education is growing, there is a relative lack of engagement with its community – possibly because of an aspect of the knowledge culture in higher education: a fear of “exposure”, of not knowing. Oddly, we note that universities can prove not to be the best learning communities, as we don’t like to admit when we don’t know things! We also noted that at a higher level a culture of “not invented here” exists in UK higher education (most obviously in nationally-funded projects), where we fail to learn from what others have done elsewhere – or worse, in some cases actively dismiss experience elsewhere because it is not our own idea.

How we buy software, and the “library mindset”

At this point I apologized to my fellow Library Campers for I was going to talk about… project management.

I argue the prevailing approach to software procurement and management in libraries works against FOSS. By this, I mean the approach to procurement or ‘invitation to tender’ that includes implicit assumptions that we are purchasing products from a software supplier or “vendor”. That said, we can actually specify and purchase FOSS in this way – what we are buying is the same support from a vendor, but the product itself is FOSS. In the public sector, that support might require a tendering process over a certain threshold amount. Luke O’Sullivan pointed out that there is a procurement framework for purchasing FOSS systems available at the LibTechRFP wiki.

We noted that very few libraries actually do this. A recent example is Staffordshire University, where Dave Parkes and colleagues worked hard to research and justify choosing the Koha Open Source ILS, supported by PTFS Europe (Johnson, 2010). From a systems point of view it’s notable that Koha is quite a traditional LMS, and can go up against other similar systems using the full UK LMS Core Specification.

I would argue systems like the LMS and resource discovery are really about enterprise information: they are among our key systems enabling learning and teaching, research, and other business activities in our universities. These systems are therefore business-critical and should be viewed as such. However, in universities this has typically never been the case. The LMS tends to be seen as a system that is “just there” in the library – something that doesn’t need too much attention from IT or the broader university.

This ties in with an approach to user acceptance and testing that does not really exist in higher education, but should: the risk is that spreading bad data between the library and other systems in your university can cost you real money. We argued that librarians should look at software projects with a “testing mentality”: what is it doing? What effect does it have on other parts of the system and on our other data? Librarians, as information professionals, have a role to play here. This is not technical, but about information. More broadly, Kate Lomax mentioned there’s a lot you can do to contribute without being a developer or a techie – for example, documentation.

I argue these points about how we’ve viewed our previous systems and how we procure them have created something of a “library mindset” in our culture. I feel that as library workers we’ve been complicit in this; worse, in library systems and IT we often take the safe option, which can limit our outlook and willingness to risk new things. This is even while we’re very happy using FOSS on our own computers, or, as some participants mentioned, “sneaking in” FOSS programs behind the back of unwilling IT departments.

What changes everything in our view are FOSS products in library management systems, discovery, finance, student management, and virtual learning environments that are now becoming mature and mainstream.

Mainstream examples already discussed in this session include the Koha LMS and the Moodle virtual learning environment.

Conclusion

As a kind of coda, we explained that issues around governance, testing methodology, documentation, change management and so on apply to so-called closed-source software just as much as they do to FOSS, and we’d say good project management and software development practice applies regardless of the development model used.

As a FOSS developer, Luke emphasized the importance of governance, testing and providing a stable service alongside development. He explained that FOSS is incredibly exciting because you can work with the source code to make changes to suit your local needs – but you risk getting totally carried away. Culturally this represents a real change for library workers not used to this flexibility, so there’s a danger of too much demand on programming time if the assumption is anything about the system can be altered to meet local needs.

The strategic issues here for FOSS projects are around effective management in terms of inclusivity, collaboration and transparency, project governance frameworks, quality and risk management, procurement policies, and change management. These are not specific to a FOSS approach but are, we argue, essential for such an approach to be successful, and specifically to address the traditional weaknesses found in FOSS projects.

Acknowledgement

My thanks to Sharon Penfold, Project Manager at the Bloomsbury LMS for helpful discussion on this subject around procurement, data, testing, and project management.

References

Free Software Foundation (2012) ‘What is Free Software?’ Available at: http://www.gnu.org/philosophy/free-sw.html

Johnson, P. (2010) ‘Staffordshire University chooses Koha for its new library system’. Available at: https://web.archive.org/web/20160320053630/http://blogs.staffs.ac.uk/informationlandscape/2010/12/10/staffordshire-university-chooses-koha-for-its-new-library-system/

Stallman, R.M. (2010) ‘The GNU project’. Available at: http://www.gnu.org/gnu/thegnuproject.html

How to root your HTC Desire and install Android 2.3 (Gingerbread)

Step-by-step guide

There are various reasons to do this, including:

  • Needing root to run certain apps. This includes taking backups, removing unwanted pre-installed apps, and installing a fix for the tiny (140 MB) internal storage on the HTC Desire.
  • Wanting an upgrade to Android 2.3 or higher. This is not available officially.

Isn’t this explained elsewhere online? Yes, but many guides are out-of-date or include unhelpful advice. I wasn’t able to find everything I needed in one place.

This post is 80% there: http://androidforums.com/desire-all-things-root/439627-guide-revolutionary-s-off-rooting.html

This will wipe your phone including apps and data.

Preparation

As well as an HTC Desire you need a Windows PC connected to the Internet and a micro-USB cable for your phone.

The custom ROM process is a bit quicker if you have a USB reader for your micro SD card.

Drivers

On your PC:

  • Download and install HTC Sync. Then uninstall the HTC Sync software itself, leaving the HTC drivers in place.
  • Download and install the HTC fastboot drivers mentioned on the Revolutionary documentation.

Is it necessary to install the Android SDK for this process, to use the adb shell? I don’t think so. That approach is described in this post.

On your phone:

  • Enable USB debugging.
  • Enable installing non-Market apps.

Root and custom recovery

This is a two-step process: we use Revolutionary to gain S-OFF on the phone, then flash a root zip file to gain root. Revolutionary does not get you root by itself.

Does the online advice about HBOOT version and an erase size of 20,000 or 40,000 apply here? As we’re using Revolutionary and ClockWorkMod (CWM), I think not.

From the boot screen:

Check whether the PVT version is PVT4. If so, check the erase size: install Terminal Emulator from the Market, then run

cat /proc/mtd

and check whether the erase size shown is 20000 or 40000. If it is 20000, or the phone is not PVT4, we’ll use ClockWorkMod.

Also check the HBOOT version: this must be lower than 1.06.
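The ‘20000 or 40000’ is the erase size column of /proc/mtd, printed in hex. As a rough illustration, this snippet pulls that column out of a made-up mtd line (the line itself is hypothetical; on the phone you would read the real /proc/mtd):

```shell
# Hypothetical /proc/mtd line; on the phone, read the real file instead.
line='mtd0: 000a0000 00020000 "misc"'

# The third field is the erase size, in hexadecimal.
erasesize=$(echo "$line" | awk '{print $3}')
echo "$erasesize"   # prints 00020000, i.e. the '20000' case
```

If the field reads 00040000 on a PVT4 phone, that is the ‘40000’ case instead.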

On your PC:

  • Turn off your anti-virus software temporarily.
  • Download Revolutionary from http://revolutionary.io and the custom root zip file mentioned on their documentation page.
  • Connect the phone. Put the custom root zip file on the SD card.
  • Start Revolutionary. It will guide you through the process.
  • You need to get a beta key for Revolutionary – the program provides the details about your phone needed to generate one on the Revolutionary site.
  • The phone will restart a few times and Revolutionary will update you on progress. It’s automatic, but keep an eye on it.
  • As a final step Revolutionary asks if you want to install a custom recovery – CWM. Say ‘yes’ to this.

You now have S-OFF on the phone.

On the phone:

  • Switch off the phone, and boot into HBOOT by holding ‘volume down’ and pressing the power button.
  • The boot screen should say -REVOLUTIONARY- at the top and mention S-OFF in the next line.
  • Choose Recovery from the menu.
  • Select ‘Install zip from sdcard’ then ‘Choose zip from sdcard’.
  • Choose your root zip file; it should be named Superuser-3.0.7-efgh-signed.zip.
  • Power off the phone from the CWM menu.

You now have an HTC Desire with root and a custom recovery.

Custom ROM

We will use CWM to partition the SD card then install a custom ROM.

There are some dire warnings online about not partitioning an SD card with CWM; these apply to old versions of CWM and can be ignored with the current version.

This process will wipe the SD card. Copy the contents of it onto your computer first if you want to keep them.

Partition the SD card

The goal is to create two partitions on your SD card:

  • An ext4-formatted partition of 1 or 2 GB on your SD card that the phone will use as internal storage.
  • The rest of the card as a fat32-formatted partition the phone will use as regular SD card storage.

The only complication here is that the fat32 partition must come first on the card and the ext4 partition second. CWM will handle this for us.
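To make the split concrete, here is a sketch of the arithmetic for a hypothetical 8 GB card with a 1 GB sd-ext and a 32 MB swap partition (illustrative numbers only – CWM does the actual partitioning):

```shell
# Hypothetical 8 GB card, sizes in MB; CWM performs the real work.
card_mb=8192
ext_mb=1024   # sd-ext partition the phone uses as internal storage
swap_mb=32    # optional swap partition

# Whatever is left over becomes the fat32 'regular' SD storage.
fat32_mb=$((card_mb - ext_mb - swap_mb))
echo "fat32=${fat32_mb} MB ext=${ext_mb} MB swap=${swap_mb} MB"
```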

On the phone:

  • Boot into HBOOT by powering off and holding ‘volume down’ and the power button.
  • Go into Recovery → Advanced → Partition SD card.
  • Set the sd-ext partition to be 1 GB.
  • You don’t need a swap partition but setting one of 32 MB will not hurt anything.
  • The partitioning and formatting will take a while. Leave it be.
  • When this is done, power off the phone.

Installing a custom ROM

Take out the SD card and connect it to your Windows PC.

Download your custom ROM of choice. There are many available for the Desire so it depends on what features you want.

I am using the SuperNova ROM, which is very good. It provides Gingerbread and HTC Sense and a fairly ‘stock’ experience. Importantly, the ROM has a stable Data2SD installed that provides ample internal storage using your SD card.

Download the latest SuperNova and put it on your SD card. SuperNova instructions are here.

Quick version:

  • Shut down your phone and reboot to HBOOT.

Select Recovery, then:

  • Wipe data/factory reset
  • Install zip from sdcard → choose zip from sdcard → pick the SuperNova file.

Question: is it required to wipe the cache and/or Dalvik cache? I think it doesn’t hurt at the first install of SuperNova, but it should never be done when upgrading.

SuperNova is a big file so this process takes a while. It provides updates as it goes. Once it’s finished power off.

At first boot:

  • Allow the phone to boot, then answer all the questions asked, except skip the Google account setup. Check that About → Software shows the build as SuperNova.
  • Don’t restore data or apps yet. Reboot the phone.
  • At this point Data2SD runs during boot. When the phone has finished booting up, check the internal storage – it should be 900+ MB.

New Radio

Not the Bikini Kill song. The radio is software on the phone that allows you to make phone calls and use mobile Internet.

There seem to be incompatibilities between different Android releases, different custom ROMs, and different radio versions.

Radios are available from Mo Firouz’s site.

The best approach is to leave your radio alone and check whether everything works with your new ROM.

  • If it does not, install the most recent radio release. SuperNova recommend you use this – version 5.17.05.23.
  • If this does not work then downgrade to version 5.09.05.30 recommended by Mo Firouz. This worked for me on Gingerbread when my current radio and the newest radio didn’t work.

Flashing the radio

This is complex. The radio as you download it cannot be loaded directly using CWM, as radios are only distributed in an old and incompatible format. Ignore the steps on the download page above.

You can flash the radio using fastboot. This is a hassle – there must be a better way, surely?

Guide to ADB and Fastboot for Windows.

On the PC:

  • First install the Android SDK. This will take a while as the downloads are large files.
  • Download and install the HTC fastboot drivers mentioned in the Revolutionary documentation.
  • Download your radio file and extract the radio.img file from it.
  • Reboot to HBOOT then select Fastboot from the menu.
  • Connect the phone using USB. Your phone screen should update to say ‘FASTBOOT USB’.
  • At this point you can issue commands to the phone using fastboot from your Windows computer.
  • Open a shell in Windows using cmd.exe. Navigate to the folder containing your radio.img file.
  • Note: online guides mention setting the system PATH variable. Don’t worry about this, you can just run the fastboot program directly.
  • Find fastboot.exe – it will be somewhere in the Android SDK folder. On my 64-bit Windows 7 machine, it’s:
c:\Program Files (x86)\Android\android-sdk\platform-tools\fastboot.exe
  • Flash the radio using this fastboot command:
c:\Program Files (x86)\Android\android-sdk\platform-tools\fastboot.exe flash radio radio.img
  • Finally reboot the phone:
c:\Program Files (x86)\Android\android-sdk\platform-tools\fastboot.exe reboot

Photo credit

‘HTC Desire’ by Flickr user Matthias Penke, license CC BY-NC-ND.

How we added WebBridge / Pathfinder Pro links to our Encore catalogue

When we launched our new catalogue, Encore from Innovative Interfaces Inc., in June 2011 among the first problems identified by staff and library members was that it did not have a way to request journals from our closed stores (we call this the Stack Service which includes our tower and an offsite store in Egham, Surrey).

This gap in functionality between the old and new catalogues was a major barrier to buy-in. Without it, staff had to explain three parallel systems for requesting just one type of material from the store. Our readers want to request store material online quickly and efficiently, not navigate between two different catalogues.

A new release of the Encore catalogue software has enabled us to rectify this and in this post I’ll explain how. The third request system was paper forms, in case you’re wondering…

Link from the Encore record to request a store journal, based on Pathfinder Pro data.

Requesting journals

Historical note: Pathfinder Pro used to be part of Innovative’s WebBridge product, which included both an OpenURL link resolver for linking in, and software for presenting context-sensitive links on your record display. Systems librarians at Innovative sites often refer to both products as “WebBridge”.

Requesting journals from a closed store is problematic for an Innovative library if you are cataloguing journals in a normal way – using a holdings or checkin record to detail what you have, rather than itemising each individual journal volume on its own item record.

In our old WebPAC catalogue I had devised a way of using Pathfinder Pro to link out to a Web form, sending bibliographic data along to pre-populate it. It’s easy to link out to a form from the record (put a link in the MARC 856 for a quick solution), but reusing the record metadata itself helps readers avoid introducing errors.

In my opinion Pathfinder Pro is a good product – the tests you can apply are quite powerful including matching parts of your record based on regular expressions and the like.

The basic principle to enable this linking is:

  1. Check whether the holdings record is in a store location. Egoist : an individualist review is coded ‘upr’, for example, and a data test in Pathfinder Pro is used to see if the holdings record location field equals ‘upr’.
  2. If the record is in a store location, build a link by selecting the journal title, classmark, and system record number from the bibliographic record. What is actually rendered on the page is a link using the same icon as we use for other types of online requesting:
Link from the WebPAC record to request a store journal, using Pathfinder Pro (circled in orange).

This generates a link to our online request form:

http://www.ulrls.lon.ac.uk/stackrequest/requestform.aspx?JtitleText=Egoist%20%3A%20an%20individualist%20review.&JlocationText=STACK%20SERVICE&JclassmarkText=PR%20Z&JbibText=b1746208a

Which leads to a neatly pre-populated form:

The problem with Encore

The thing that prevents this working in the new catalogue is Encore lacks the Pathfinder Pro “Bib Table” which allows you to place links on the record display itself. In the screenshot above the Bib Table is the space on the record that contains three buttons, including the store request button circled in orange.

This is a problem as many Innovative sites have built services around this feature of Pathfinder Pro that include placing a link prominently on the bibliographic record.

Towards a solution for Encore

The latest Encore release 4.2 allows you to customise the Encore record display by including your own JavaScript. My presentation on this from the European Innovative Users Group conference 2012 with further examples is available:

I decided there simply had to be a way to insert a link to the store request form into the page using JavaScript…

  • Our first thought was using JavaScript to check if the record is in the store, then building a link to the request form by plucking bits of metadata from the page. This was a non-starter, as the structure of the rendered Encore page is not semantically sound enough to work with in this way.
  • Second thought was to use an Ajax call to scrape the record display of the classic WebPAC, which would be easier to work with. This isn’t possible, as Encore and the WebPAC run on different Web servers, so you run into the same-origin policy. And no, you can’t set up a proxy server on your Encore server to work around it.
  • Third thought was using a Web application with a dedicated screen-scraping library that could pull the metadata from the classic WebPAC or Encore. We’d link to this from the Encore record display and let it direct the reader’s browser to the populated request form. This is almost what we implemented. Read on…

How to do it

Building a link out from the Encore catalogue is simple. The JavaScript for the Encore record is available as a Pastebin for easier reading.

What this will do:

  1. Get the system record number. The simplest way I’ve found to do this from the page itself is to use document.URL, which contains the record number.
  2. Check whether a div exists with ID toggleAnyComponent. Not that you’d know it from the name, but this div is rendered only if there is an attached holdings / checkin record, which means we’re dealing with a journal record.
  3. If it exists, check whether the div matches a regular expression for the string “STACK SERVICE”.
  4. If it does match, create a link out to our Web application and append it to the existing customTop div.

This link out appears in this form:

http://www.ulrls.lon.ac.uk/stackrequest/parse.aspx?record=1746208

1746208 is the system record number for Egoist, minus the leading ‘b’ and trailing check digit.
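The record-number handling and store test above can be sketched as standalone functions. These names are illustrative, not those used in the production script (which is in the Pastebin linked above):

```javascript
// Sketch of the logic described above. In the live script this runs
// against document.URL and the toggleAnyComponent div, and the
// resulting link is appended to the customTop div.

// Strip the leading 'b' and trailing check digit from a bib ID,
// e.g. 'b1746208a' -> '1746208'.
function stripRecordNumber(bibId) {
  return bibId.slice(1, -1);
}

// Test whether the holdings text indicates a store item.
function isStoreRecord(holdingsText) {
  return /STACK SERVICE/.test(holdingsText);
}

// Build the link out to the screen-scraping Web application.
function buildRequestLink(recordNumber) {
  return 'http://www.ulrls.lon.ac.uk/stackrequest/parse.aspx?record=' + recordNumber;
}
```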

Web application for screen-scraping

The real work is carried out using a Web application written in ASP.NET by my team member Steven Baker. Steve used the Html Agility Pack which is a library for .NET ideal for scraping Web pages. Of course, you can use your favourite language to accomplish the same thing.

Scraping from the Encore or WebPAC record display is a complicated business, and the way our library has named its various locations and classmarks did not help.

So instead of scraping the page directly and including lots of different tests to deal with the various oddities found in the markup, it’s much simpler to scrape the WebPAC record display and pick out the div containing the link rendered by Pathfinder Pro.

This link already carries the metadata required for the store request form, so it’s then just a matter of using its URL to send the reader’s browser on its way to the request form.

The first step is to load the classic WebPAC page using Html Agility Pack. The link from Encore provides the system record number to generate a link to the WebPAC screen in this form:

http://catalogue.ulrls.lon.ac.uk/record=b1746208

The Pathfinder Pro wblinkdisplay div from the classic WebPAC looks like this for the example of Egoist : an individualist review:

<div class="wblinkdisplay">
<form name="from_stack_service159_form">
<a href="" onClick="javascript:loadInNewWindow('/webbridge~S24*eng/showresource?resurl=http%3A%2F%2Fwww.ulrls.lon.ac.uk%2Fstackrequest%2Frequestform.aspx%3FJtitleText%3DEgoist%2B%253A%2Ban%2Bindividualist%2Breview.%26JlocationText%3DSTACK%2520SERVICE%26JclassmarkText%3DPR%2BZ%26JbibText%3Db1746208a&linkid=0&noframe=1');return false;"><img src="/webbridge/image/request.gif" border="0"></img></a>
</form>
</div>

You can see the URL Pathfinder Pro normally redirects the Web browser to. The next steps are:

  • Select the div with class wblinkdisplay.
  • Cut out the URL, using “resurl=” at the start and “')” at the end to get the indexes needed.
  • URL-decode the resulting string.
  • Redirect the Web browser to that URL.
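Our version is written in ASP.NET with Html Agility Pack, but the extraction step itself is simple enough to sketch in plain JavaScript (a string-slicing version of the steps above, not our production code):

```javascript
// Extract the request-form URL from a WebPAC record page, following
// the steps above: find "resurl=", cut to the closing "')", then
// URL-decode. Returns null if no Pathfinder Pro link is present.
function extractResourceUrl(pageHtml) {
  var start = pageHtml.indexOf('resurl=');
  if (start === -1) {
    return null; // no wblinkdisplay link on this record
  }
  start += 'resurl='.length;
  var end = pageHtml.indexOf("')", start);
  if (end === -1) {
    return null;
  }
  return decodeURIComponent(pageHtml.substring(start, end));
}
```

The decoded URL is then used as the redirect target for the reader’s browser.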

In live use, this is so fast that the end user doesn’t realise anything is happening in between clicking the “Request from Store” button in Encore and getting to the request form itself.

Comments on this approach

There are several advantages to handling Pathfinder Pro and Encore integration this way:

  • The work to pick out metadata from the record has already been done in Pathfinder Pro and doesn’t need re-implementing.
  • This will be easy to extend to other store journals, or if journals move from open-access shelves to the store. We only need to set up a new location in Pathfinder Pro and it’s reflected in Encore as well. We’d have to do this anyway to enable online requests in the classic WebPAC.
  • This approach isn’t specific to journals and could be used to put any links generated by Pathfinder Pro into Encore.

However, this isn’t a complete solution as it doesn’t just give you the Pathfinder Pro Bib Table in Encore. This is what we would really like and what we have asked our software supplier for. That said, if you can pick something unique out of the Encore record to test for then you can link out from Encore in a way that replicates the behaviour of the linking in classic WebPAC with Pathfinder Pro.

Please do contact me with any questions or your thoughts, or leave a comment below.

The anti-social catalogue – at Library Camp Leeds

On Saturday 26th May I attended Library Camp Leeds (libcampLS), a regional library unconference hosted by Leeds City Libraries. The conference took place on a beautiful sunny day at Horsforth library.

In a masterful move by the organizers we decamped to nearby Hall Park for the afternoon sessions, which meant the session I had pitched on library catalogues took place ‘en plein air’. The unconference style made this easy to accommodate, though there were some downsides, notably a dog that turned up and dug into Dace’s salty cheese sticks just as the session was getting started…

Dog joining in with ‘cake camp’, photographed by Dace Udre, license CC-BY-NC.

The anti-social catalogue

Session underway, photographed by Kev Campbell-Wright, license CC-BY-NC-SA.

What is the next-gen library catalogue?

I opened by outlining what we mean by a “discovery interface” or “next-generation library catalogue” to give us some grounding. Then I gave a quick outline of the failure of current library systems to be “social”, that is, how they don’t facilitate social interactions.

I paraphrased from Sharon Yang and Melissa Hofmann’s article (2011) surveying library catalogues. I’ll repeat this below as I know it’ll come in handy in future. What makes something a next-generation catalogue isn’t well defined, but we can say such a system will have many of these features, whereas traditional catalogues have few:

  • They provide a single point of searching across multiple library resources including the local bibliographic database, journal articles, and other materials.
  • The Web interface is modern and its design reflects that found in Web search and ecommerce sites rather than traditional bibliographic retrieval systems.
  • They favour keyword searching via a single search box.
  • They feature faceted navigation to rework or limit search results.
  • They are tolerant of user error and provide “Did you mean…?” suggestions.
  • They feature enriched content drawn from sources outside the library such as book jackets, reviews, and summaries.
  • They feature user-generated content such as reviews and tagging.
  • They feature recommendations or suggestions for related material, which may be based on information held in the library system (e.g. circulation data) or elsewhere.
  • They feature some kind of social networking integration to allow for easier sharing and reuse of library records and data on these Web sites.
  • To facilitate this sharing, records have stable persistent links or permalinks.

What are the problems?

Some of the features mentioned above are social in nature, including user-generated content such as tagging and reviews, recommenders built from using circulation data, and integration of social networking sites. So “next-generation” implies a suite of features that include some social features, but not everything next-generation is such a social feature. Furthermore the underlying library management system and metadata are not likely to be too supportive of these features.

In practice, social features like tagging and reviews haven’t really taken off in libraries, and those of us who have enabled them tend to see low use among our customers. This is certainly my experience with tagging, enabled on our Encore catalogue at Senate House Libraries. It is not enough to have a reasonably large bibliographic database and a reasonably large membership, then turn on tagging and expect something – the magic – to happen.

I do not think library catalogues are perceived as a social destination by our readers. However I think what prevents this is not that there is no wish by readers to interact in this way using our systems, but that we’re only just starting to make a serious effort to build features that encourage genuine social interaction.

This is what I mean by current catalogues being anti-social. However, I did like this alternative definition from Gaz:

Discussion

Note: attributions below are based on my notes from the day. If I’ve made a mistake please let me know.

The conversation was lively and varied and I was really pleased to facilitate a session where so many present wanted to contribute.

There was a general feeling that the current technology isn’t there yet and that implementations of social features on our catalogues do not encourage social interaction.

Luke explained that catalogues built by vendors reflect the small marketplace libraries offer, and that technology in libraries tends to be quite far behind the leading edge. He described the development of VuFind for discovery, born of frustration with software supplier offerings – but one that required a willingness to invest staff resource to develop and implement. This was done at Swansea University, Swansea Metropolitan University, and Trinity Saint David as a project – SWWHEP.

Luke mentioned something I have heard as a common objection to user-generated content in catalogues, the fear that students will abuse it and tag books with swearwords and so on. There was a similar concern raised that books written by academic staff might be rated down by students (with a cheeky suggestion added – “They should write better books”). Luke pointed out this has not proved a problem on the Swansea iFind implementation of VuFind (as it hasn’t at Senate House Libraries) because the feature is simply not being used. I thought that in some ways the feature being ignored is worse than readers actively disliking it…

Sarah gave an example of a ‘paper-based Web 2.0’ (my term) implementation where library members were given a paper slip to rate or review an item – which would then be keyed into the catalogue by staff!

Several campers made the point that bringing in user-generated content from outside – such as LibraryThing for Libraries – could make a big difference, as then there’s clearly something there to start with.

It was generally agreed that building features that create good social interaction requires effort; it’s not something we can easily bolt on to existing systems that weren’t designed for it from the ground up.

There was agreement with Iman’s point that for social features to become popular there should be an incentive for the customer. The customer should get value from the interaction, or what’s the point of doing it? Alongside this, being social shouldn’t take huge effort or require a great deal of work. The concept of gamification as a way of providing that incentive was raised here.

Several campers gave examples of areas where libraries know a great deal about our readers’ habits and actions, and could reuse this information to enhance their experience of the physical or online library. The social features requiring least effort are those interactions that happen as a by-product of what you would normally do anyway – for example, borrowing and returning books to generate recommendations based on circulation information.

One problem raised with emphasising top-loaning items from the collection is that the list could become self-sustaining: an item remains popular because it is on the list. (At this point I reflected that I probably couldn’t make our top-loaning author Michel Foucault any more popular if I tried…)

Liz made a thoughtful point that what matters about the use of technology is how it enables us to fulfil the mission of the organization (the library, the university). We should concentrate on what’s relevant for our organizations. So: we need to be clear what we’re trying to achieve with these features and what the point of it all is. Technology used poorly for its own sake had already been raised, one example being linking to an ebook record from the catalogue using a QR code: if you’re already online looking at the catalogue, why not just a normal hyperlink?

Rather than limiting ourselves to what other libraries are doing we should be thinking along the lines of features employed in ecommerce systems. Spencer made the interesting point that ecommerce systems he has worked with can build a much more complete picture of user needs and wishes with a view to offering them a tailored online experience. This is years ahead of anything libraries currently do.

Some more fundamental problems were raised about technology and libraries.

Linsey raised the idea of ‘embarrassing IT’, that is, IT provision that’s so bad we as information professionals are ashamed to offer it. Alison said the technology needs to be there to support new catalogues, or our staff and customers simply can’t make the best use of them. An example given by the group was of an older catalogue remaining more popular than a next-generation system because it’s faster to use on the outdated computers provided by the library.

These problems aren’t minor. Feedback from the group was that our Web presence and user experience of our Web sites really influences users’ perception of our organizations. There’s a real need for us to do this well, not half-heartedly.

Acknowledgment

My thanks to Natalie Pollecutt at the Wellcome Library for helpful discussion about the concept of the ‘social catalogue’ ahead of libcampLS.

References

Yang, S.Q. and Hofmann, M.A. (2011). ‘Next generation or current generation?: a study of the OPACs of 260 academic libraries in the USA and Canada’, Library Hi Tech, 29 (2), pp. 266-300. doi:10.1108/07378831111138170

The Mnemosyne-Atlas: adding Pinterest to the library catalogue.

Why pinterest?

Last week I attended a talk by Phil Bradley at the Cilip in London AGM (a podcast of this talk Around the World on a Library Degree is available). Phil pointed out Pinterest as a particularly useful and interesting site to watch. I had not heard of this before so registered an account. Shortly after I noticed the Pinterest implementation at Darien Library.

Pinterest is a social networking site for sharing photos. Users organise items of media on boards – typically thematically or for a particular event.

I was immediately struck by the appearance of a full pinboard, it made me think of Aby Warburg’s Mnemosyne-Atlas. The Mnemosyne-Atlas was Warburg’s unfinished work, a series of plates (or boards) showing images from the classical period to Warburg’s present time. Alongside classical and renaissance images it included photographs, maps, woodcuts, advertisements, fragments of text, posters, and so on – all kinds of visual media. Warburg intended the boards to be accompanied by commentaries, but these were incomplete on his death in 1929 and only fragments exist.

Taken as a whole it is a summary of all of Warburg’s various interests. It has been compared with avant-garde photo montages in form but is something more, perhaps even a “visual archive of European cultural history” (Rampley, 1999). A photograph from an exhibition of Mnemosyne-Atlas plates is shown above. This is from a set on Flickr called aby warburg – the mnemosyne atlas.

Without expecting every user to be a scholar and cultural theorist of Warburg’s stature, I think there is value in supporting the linking of our catalogue records to Pinterest, as it will allow users to relate them to other images and construct different meanings from them. I feel it’s especially appropriate for Senate House Libraries, which includes the library of The Warburg Institute.

What is different about Pinterest is it makes creation of ‘vision boards’ easy – many sites now support pinning an image to Pinterest, and there are smartphone apps allowing you to pin anything you can photograph.

How to do this in Encore

At Senate House Libraries we have been testing a beta version of the next release of our next-generation catalogue (or discovery interface), Encore. Caution! Everything described below links to a beta version of our catalogue that is not yet finished.

Adding a “Pin It” button is made possible by the ability to insert your own JavaScript on the bibliographic record display of the new version of the catalogue. To be able to pin a catalogue record to a Pinterest board we need, at minimum, an image and a link to associate with it; a description of the image is optional. In this case the image is of the book jacket.

Here’s the JavaScript to accomplish this – mind any line wrapping and WordPress oddness if you copy and paste it.

<script src="//s7.addthis.com/js/250/addthis_widget.js" type="text/javascript"></script>
<script type="text/javascript">
(function() {
  var azImageDiv = document.getElementById("imageAnyComponent_0");
  if (azImageDiv) {
    // Amazon returns a 1x1 GIF when it has no jacket image, so only
    // render the button for anything larger.
    if (azImageDiv.width > 1 && azImageDiv.height > 1) {
      // key is a variable Encore uses for checking Google Books.
      // It contains 'ISBN:' plus an ISBN-10.
      var azAsin = key.substring(5);
      var pinterestDiv = document.createElement('div');
      pinterestDiv.innerHTML = '<span class="bibInfoHeader">Pinterest</span><div class="addthis_toolbox addthis_default_style"><p><a class="addthis_button_pinterest" pi:pinit:url="https://encore.ulrls.lon.ac.uk/iii/encore42/record/C__R' + recordid + '" pi:pinit:media="http://images-eu.amazon.com/images/P/' + azAsin + '.01._SCLZZZZZZZ_.jpg" pi:pinit:layout="horizontal"></a></p></div>';
      document.getElementById("customBottom").appendChild(pinterestDiv);
    }
  }
})();
</script>

Commentary

The challenge is to ensure we only render the Pin It button when we’re confident we have a book jacket image.

First step is to get the imageAnyComponent_0 div and check the size. This div contains the jacket image on Encore and is put there by the catalogue. Amazon returns a 1×1 pixel GIF if it has no jacket to offer, so if the image is larger than this it is probably a jacket image. Having the image is key: if we don’t have it we render nothing.

Assuming we have a jacket image, I use AddThis to insert a Pinterest button which will pin a larger version of the jacket image and a link to the catalogue. AddThis makes it very easy to deal with various social media buttons with minimal effort, plus it includes analytics allowing us to judge use of these services on the catalogue. I recommend it.

Getting the ISBN turned out to be easy, as the vendor’s JavaScript for checking for Google Books previews already declares a variable key containing ‘ISBN:’ plus the ISBN-10 of the book.

Result

Here is how the Pin It button appears in Encore:

If you use the Pin It button, it results in the creation of a pin like this, which can be found on my (testing!) board Catalog records from @SenateHouseLib:

Problems

I think this is a satisfactory start: comments, improvements and criticism welcome (but especially improvements).

First problem: AddThis doesn’t seem to support passing a description for the pinned item. To make sharing as “frictionless” as possible I wanted to add part of the page title as a description; for example, Senate House Libraries — Love is a dog from hell : poems, 1974-1977 / Charles Bukowski would be fine, and the Pinterest user can edit this during pinning. I added this manually to my pin above. Based on the syntax for the other options above it should be pi:pinit:description=”description”, but that doesn’t work.

Second problem: the Amazon image service doesn’t support ISBN-13s, only ISBN-10s. However, the Encore catalogue will use the first ISBN that appears in the catalogue record, which might be an ISBN-13. Converting from ISBN-13 to ISBN-10 is not a complete solution: although you could then pin the item, you won’t see the jacket image in the catalogue in the first place.
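For reference, the conversion itself is straightforward for 978-prefixed ISBN-13s: drop the prefix and the ISBN-13 check digit, then recompute the ISBN-10 check digit. A sketch (not part of our production code):

```javascript
// Convert a 978-prefixed ISBN-13 to an ISBN-10 by dropping the '978'
// prefix and ISBN-13 check digit, then recomputing the ISBN-10 check
// digit (weighted sum mod 11; a check value of 10 is written 'X').
function isbn13to10(isbn13) {
  if (!/^978\d{10}$/.test(isbn13)) {
    return null; // only the 978 'Bookland' range maps back to ISBN-10
  }
  var core = isbn13.substring(3, 12); // the nine significant digits
  var sum = 0;
  for (var i = 0; i < 9; i++) {
    sum += (10 - i) * parseInt(core.charAt(i), 10);
  }
  var check = (11 - (sum % 11)) % 11;
  return core + (check === 10 ? 'X' : String(check));
}
```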

Photo credit

Mnemosyne-Atlas boards photographed by Flickr user dzsil, license CC BY-SA.

References

Rampley, M. (1999). ‘Archives of memory: Walter Benjamin’s Arcades project and Aby Warburg’s Mnemosyne Atlas’, in Coles, A. (ed.) The optic of Walter Benjamin. London: Black Dog, pp. 94-119.

Grouse about your next-generation catalogue – LibCamp@Brunel

A journey to the wild wild west (of London)

On Saturday 28th January I attended LibCamp@Brunel, a library unconference generously hosted by the library at Brunel University in Uxbridge. I’d not been this far west in London as a destination before and on arriving I was pleased to recognise the tube station at Uxbridge as one of Charles Holden’s designs, which I took as a good omen for the day.

At the opening introduction and pitching, I pitched a session about staff perception versus library user perception of  next-generation library catalogues. As the unconference attendees were by and large library workers, I also wanted to invite everyone to come and grouse about problems they’d had with these systems. And let’s be honest, “Grouse about your next-gen catalogue” is going to be fun.

I had modest expectations for this session but it was very well attended, so much so our allotted space was too small and we had to move somewhere roomier. As I was facilitating I couldn’t live-tweet the session and following a few requests from people who couldn’t attend I decided to expand on the points made to give you a flavour of the discussion.

Perceptions of the catalogue

For some time I’ve been trying to understand the problems readers have with the catalogue, and had wondered whether it was possible to generalise this into staff versus reader perception of Encore and next-generation systems. I hoped we could work towards this in discussion. As well as Encore, Aquabrowser, VuFind, and Summon were mentioned in discussion.

We’ve come a long way. I expected I would have to define next-generation catalogue in the session, but I was delighted when one of the graduate trainees present explained what I call next-gen was simply what she expected from a normal library catalogue. I had to give a really quick potted history of four generations of catalogue interfaces. (This is how to make your systems librarian feel old…)

I explained our experience of implementing Innovative Interfaces Encore at Senate House Library, and particularly how different I have found the perspectives of the library staff versus our readers. To be clear, my colleagues were almost entirely positive towards the new catalogue. I was pushing at an open door implementing a catalogue that offers a much better experience to readers used to using modern Web sites compared with the previous catalogue, relatively little changed since the 2000s.

However, I think it’s important to answer criticism and deal with objections as there could easily be problems I’d overlooked, and there’s a need to have these arguments as one step in bringing people with you.

Andrew, you can’t implement without feature x

In the early days pre-implementation I heard various objections to Encore along the lines of it being feature-incomplete compared with the previous catalogue. Some of my colleagues were hopeful that it would be possible to put off implementing Encore on this basis: we should wait until the next release, or the next-plus-one release, where these issues would be resolved…

It is correct that the new catalogue:

  • Doesn’t generate any left-to-right phrase indexes as our old catalogue did. Everything is indexed as keywords.
  • Doesn’t deal with classmarks for most of our multitude of classification schemes at all. At all. It doesn’t index them as classmarks and doesn’t allow you to browse by classmark.
  • Has fewer options for presenting a ‘scoped’ view of the catalogue limited to just a particular library or collection.
  • In the version we launched with, didn’t offer an advanced search with pre-limits and didn’t support boolean operators at all. (This has been added since.)

Having already done some user testing of the new catalogue I was reasonably confident none of the missing features were a show-stopper for implementation. If there were problems for some readers, we had a simple solution: allow everyone to continue using the old catalogue in parallel with the new one.

One of the Library Campers had pointed out in advance that this is an unusual approach. I explained further in discussion that this was partly by necessity, as the ‘patron’ features – the ability to log in to view your loans, place a reservation, and renew loans – were still based in the old catalogue anyway.

I was asked how we make sure readers find and use Encore. To drive reader uptake of the new catalogue I wanted to offer Encore as the default option in the places that really matter to us – on the Senate House Library homepage and on the old catalogue homepage. The latter uses some JavaScript to redirect your search depending on what options you select, but if you keep the default ‘Quick Search’ you get Encore. It was important to me that by following the path of least resistance readers would end up on the new catalogue.

I have said it before and I stand by it: if you want to buy and implement a new system you should have the courage of your convictions and implement it properly. It amazes me to see libraries that offer their new discovery interfaces as an “alternative search” that can be ignored, or that requires special effort to find and use. I do see the value in doing this during a public beta test or preview, as the British Library did with Primo (branded as Explore the British Library), but absolutely not when you’ve made it live.

As of January 2012 we see slightly more use of the new catalogue in terms of visits, ~56% of the combined total based on Google Analytics data (I said ~50% based on data from Q4 2011 in the session). I consider this a reasonable start.

In the eight months since going live with the new catalogue several types of problem have emerged with Encore.

Longer term: how staff use the catalogue

It’s surprised me how many unusual uses of the old catalogue interface our staff have built up over time, and the extent to which the catalogue has taken on functions I wouldn’t expect. For example, making use of the way classmarks are indexed to produce a list of everything with a particular classmark – particularly useful for Special Collections, where the classmark might describe which collection something is in. Or a need to produce a list that represents everything related to some sub-set of our catalogue – that is, a search strategy that you can be confident represents 100% true positives!

Much of this has been presented to me in good humour in a playful spirit of showing me how Encore can be “beaten” by a particular use case.

There are uses of the old catalogue that are simply impossible in Encore, but my answer is that, first, they don’t tend to represent realistic use cases for our readers and, second, they can be moved more or less easily to the staff client for our library system. Beyond Encore, Katharine Schopflin and Graham Seaman discussed how next-generation systems can have problems with known-item searching: an interface biased too much towards browsing and subject searching can be actively unhelpful when you have specific items in mind. I explained that I think Encore is quite good for known-item search, in particular the way it prioritises exact hits on MARC field 245 $a; my favourite examples are journals like Text and Agenda.

Generally I don’t think we should aim every discovery tool only at our most expert users, information professionals with great experience of our collections, when they have working alternatives available. In response to a question I explained that there is no staff-specific view of Encore if you sign in with a staff account. I think this is right and proper from a “dogfooding” point of view, but I confess I daydream about a catalogue flexible enough to offer different interfaces with different features, from novice to expert, as required…

Longer term: you need to sort out your metadata

It’s become a truism that because next-generation systems make better use of our bibliographic data they force us to sort out existing problems with our metadata. We’ve certainly found our fair share of these problems since launching Encore, but not all of them are fixable.

The first we’ve tried to address is the way different types of material were described in our catalogue: the combination of print monographs (er, books) and print periodicals (um, journals) into a single material type termed “printed material”. Cue amused smiles from the Library Campers! Since then we’ve split them into books and journals, as I explain in a post on our Encore blog – ‘Helping you find print journals more easily’.

The general problem is that Encore can only act on the metadata it has available, and realistically you won’t always have the time and money to do the work required to make it good. Encore does useful things like providing facets based on the geographical names in your subject headings, dates of publication, or languages. The problem comes when that data is missing or coded ‘undetermined’.

We know there are some very good items in our collection that readers cannot find through subject searching because the records describing them are poor. Graham Seaman mentioned a problem in Summon with dates being described in different ways, understandable by humans but not machines. For example, materials from the same time period might be described as ‘16th century’, ‘1500–1525’, or ‘Renaissance’, so a search on one label misses relevant items described by another.
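One way to see the date problem is that the labels are only comparable once mapped onto a machine-readable year range. A rough sketch of that idea, with invented and deliberately contestable period boundaries:

```javascript
// Illustrative only: normalising free-text period labels to year ranges.
// The named-period boundaries below are rough examples, not authority data.
const NAMED_PERIODS = {
  "renaissance": [1400, 1600],
  "16th century": [1501, 1600],
};

function periodToYears(label) {
  const key = label.trim().toLowerCase();
  if (key in NAMED_PERIODS) return NAMED_PERIODS[key];
  // Explicit ranges such as "1500-1525" or "1500–1525" (hyphen or en dash)
  const m = key.match(/^(\d{3,4})\s*[-–]\s*(\d{3,4})$/);
  if (m) return [parseInt(m[1], 10), parseInt(m[2], 10)];
  return null; // undetermined – exactly the records a date facet will miss
}

// Two periods match if their year ranges overlap at all.
function overlaps(a, b) {
  return Boolean(a && b && a[0] <= b[1] && b[0] <= a[1]);
}
```

With a mapping like this, ‘1500–1525’ and ‘Renaissance’ overlap and a date facet could retrieve both; without it, each label is just an opaque string and the items stay invisible to one another.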

These are problems that existed with our old catalogue but which the next-generation catalogue brings into sharper relief.

Towards ethnographies of the next-gen catalogue user

This is the third post in a series exploring user understanding of next-generation catalogues:

Talk

This is posted to coincide with the ChrisMash Mashed Library event organised by Gary Green in London on December 3rd. I spoke about the outcomes of an investigation into user experience and understanding of the next-gen catalogue and next steps we’re taking at Senate House Library. Not very Christmassy I admit…

‘@preater’s presentation’ on Flickr by Paul Stainthorp, license CC-BY-SA.

Slides from this talk are now available:

My slides were kept deliberately simple – it was presented in a pub on a flat screen TV! Notes are included to explain things further. Please get in touch if you want to ask anything about this.

Starting point

We implemented Encore from Innovative Interfaces in June to run alongside, and partly replace, the older WebPAC Pro catalogue, also from Innovative. Our Encore instance is here; the search I used in my talk was ‘industrial workers of the world’.

Ahead of implementation we didn’t have much idea of how library users would understand this type of catalogue, so for my master’s dissertation I investigated this using various qualitative methods:

  • Usability-test style cognitive walk-throughs, done almost as a warm-up but providing lots of interesting data. As an aside I think every library should be doing this with their catalogue – it is so quick and easy to do.
  • A semi-structured interview using repertory grid technique. This was very good for comparing what my participants really thought of each type of catalogue.

Key findings

To summarise very briefly:

A Web-like catalogue encourages Web-like behaviour

Putting readers in front of a catalogue interface that looks and behaves like a Web search engine results in behaviours closer to a Web search engine than traditional information retrieval.

By this I mean:

  • A tendency to scan and skim-read Web pages quickly, concentrating on titles.
  • A process of iterative searching based on using a few keywords and then reworking the search over again based on what’s found on the results page.
  • Trust in the relevancy ranking of the catalogue; an expectation that the catalogue should be tolerant of small errors or typos via ‘did you mean…?’ suggestions.
  • The tendency to ‘satisfice’, meaning making do with results that seem good enough for the purpose rather than searching exhaustively.
  • The view that search queries are an ongoing process, not something that should produce a single perfect set of results.

Caution: this is based on coding qualitative data from nine people and is not intended to be absolute or apply to every user. I found strongly contrasting opinions of the catalogue with an overall tendency for younger readers to take to the new interface much more easily.

The method I used was inductive, that is developed from analysis of what I observed. I really did not expect this ahead of time.
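One of the Web-like expectations above – tolerance of typos via ‘did you mean…?’ suggestions – can be sketched with a simple edit-distance suggester. Real discovery systems such as Encore use their own index-driven spelling logic; this is purely an illustration of the idea:

```javascript
// Minimal "did you mean...?" sketch using Levenshtein edit distance.
// Illustrative only – not how any particular discovery product does it.
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                    // deletion
        dp[i][j - 1] + 1,                                    // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)   // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Suggest the closest known term within a small tolerance; null if the
// term is already known or nothing is close enough.
function didYouMean(term, vocabulary, maxDistance = 2) {
  let best = null;
  let bestD = maxDistance + 1;
  for (const word of vocabulary) {
    const d = editDistance(term.toLowerCase(), word.toLowerCase());
    if (d > 0 && d < bestD) {
      best = word;
      bestD = d;
    }
  }
  return best;
}
```

The interesting design question is less the algorithm than the expectation: readers now assume the catalogue will meet them halfway on a typo, rather than silently returning zero results.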

Using our catalogue is an affective experience

I found a strongly affective or emotional response to using our catalogue, beyond what you might expect from a mere lookup tool. The response was about more than the catalogue being pleasant to use or familiar from other sites.

This was very interesting because I do not see why a library catalogue should not be a joy to use. Why should library catalogues be a painful experience where you have to “pay your dues”? Even if we changed nothing else behind the scenes and made the catalogue more attractive, you could argue this would improve things because we tend to believe more attractive things work better because they’re more enjoyable. Here I am paraphrasing from Don Norman (2004).

Next steps

Usability testing gets us so far, but as I’ve said previously in an artificial “lab” setting it does not produce natural behaviour. That’s a problem because we don’t get to see the reader’s true understanding emerge. We don’t get to see how they really behave in the library when using the catalogue.

I went fairly far in comparing systems – WebPAC Pro versus Encore – but what anchored that testing was the old catalogue. Having implemented the new catalogue and positioned it fairly aggressively as the default interface I wanted to dig deeper and better understand how the catalogue fits in to the reader’s experience of doing research at Senate House Library.

Think about the experience of library use: the reader comes in and experiences an entire “ecology”: the physical building; print book and journal collections; e-resources; the library staff; our catalogues and Web sites. I wanted to better understand how readers experience the catalogue in this context rather than just thinking about it in systems terms as a tool for looking items up that is used with a particular rate of error or success.

Towards ethnographies of the next-gen catalogue user

What we’re going to do is borrow techniques from anthropology to do ethnography in the library. This means studying and observing readers in their habitat: as they work in the library and do their research.

The outcomes I want from this are fairly “soft”, based around our staff knowing the readers better. What I want to know is: how can the library better support our readers’ use of the catalogue and improve their experience of Senate House Library? This is fundamental: without a better understanding of our readers’ use of our catalogues, we can’t start to improve what we do and provide a better service.

Properly speaking this is more a case of “borrowing ethnographic methods” than “doing ethnography”. This is OK as the methods aren’t owned by one field of social science, as Harry Wolcott (2008) says they “belong to all of us”.

Practically, what we want to do is use a battery of techniques including semi-structured interviews, observation, and close questioning, generating data from which theory can be developed as it is analysed qualitatively: a grounded theory approach. The actual work will likely be small “micro-ethnographies” done over a period of some months in the library.

Examples

In my talk I mentioned some examples of ethnographic research done in libraries; these are:

  • Investigating user understanding of the library Web site – University of North Carolina at Charlotte (Wu and Lanclos, 2011)
  • Looking at how the physical library space is used – Loughborough University (Bryant, 2009)
  • Ethnographies of subject librarians’ reference work – Hewlett Packard Library and Apple Research Library (Nardi and O’Day, 1999)
  • The ERIAL (Ethnographic Research in Illinois Academic Libraries) project which has produced various outputs and has an excellent toolkit telling you how to do it (Asher and Miller, 2011)

References

Asher, A. and Miller, S. (2011) ‘So you want to do anthropology in your library?’ Available at: http://www.erialproject.org/wp-content/uploads/2011/03/Toolkit-3.22.11.pdf

Bryant, J. (2009) ‘What are students doing in our library? Ethnography as a method of exploring library user behaviour’, Library and Information Research, 33 (103), pp. 3-9.

Nardi, B.A. and O’Day, V.L. (1999) Information ecologies. London: MIT Press.

Norman, D.A. (2004) Emotional design. New York, NY: Basic Books.

Wolcott, H.F. (2008) Ethnography: a way of seeing. 2nd edn. Plymouth: AltaMira.

Wu, S.K. and Lanclos, D. (2011) ‘Re-imagining the users’ experience: an ethnographic approach to web usability and space design’, Reference Services Review, 39 (3), pp. 369-389.