Tracking usage of QR codes by smartphone users

Recently I added QR codes to the Senate House Library catalogue, hoping to improve the experience for smartphone users. In true “dogfooding” style I have made a lot of use of them myself, but I need more data. One thing that was missing was any analytics tracking of smartphone users scanning these codes and following them into the mobile catalogue.

Tracking QR code use

I realised I could do this by adding parameters to the QR code URLs that would be picked up by Google Analytics: Analytics Help – How do I tag my links?

I tweaked the JavaScript generating my markup to insert the required parameters utm_source, utm_medium, and utm_campaign:

  • Campaign Source (utm_source): webpac
  • Campaign Medium (utm_medium): qr
  • Campaign Name (utm_campaign): mobile

The values can be whatever you want; I’ve tried to keep them short but meaningful. You can then track visitors under Traffic Sources – Sources – Campaigns in Google Analytics.
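As a minimal sketch of the tagging step, a small helper can append the three campaign parameters to any catalogue URL. The function name and example URLs here are illustrative, not taken from the actual WebPAC code:

```javascript
// Append the Google Analytics campaign parameters to a catalogue URL.
// buildTaggedUrl is an illustrative helper, not part of the WebPAC code.
function buildTaggedUrl(baseUrl) {
    var params = 'utm_source=webpac&utm_medium=qr&utm_campaign=mobile';
    // Use '?' for the first parameter, '&' if the URL already has a query string.
    var separator = baseUrl.indexOf('?') === -1 ? '?' : '&';
    return baseUrl + separator + params;
}
```

A record URL like /record=b1234567 then becomes /record=b1234567?utm_source=webpac&utm_medium=qr&utm_campaign=mobile, and visits via the QR code show up under that campaign in Google Analytics.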

Adding complexity

Being able to track use of this service is very helpful, but packing more information into the QR code makes it denser and “busier”. Though I’ve had no problems with this on my phone, it could cause problems for older smartphones with lower-resolution cameras. My quick solution is just to bump up the image size a bit, which makes the QR code easier for the phone to read.
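With the Google Chart API (which is what draws the codes here), bumping up the size is just a change to the chs parameter. The helper function and the sizes below are illustrative only:

```javascript
// Build a Google Chart API QR code image URL for a given target URL.
// qrImageUrl is an illustrative helper; the sizes below are examples only.
function qrImageUrl(targetUrl, size) {
    // chs sets the image dimensions in pixels; chl carries the encoded data.
    return 'https://chart.googleapis.com/chart?cht=qr' +
        '&chs=' + size + 'x' + size +
        '&chl=' + encodeURIComponent(targetUrl);
}

// The same data at a larger size is easier for a low-resolution camera to read.
var smallQr = qrImageUrl('http://example.org/record=b1234567', 90);
var largeQr = qrImageUrl('http://example.org/record=b1234567', 150);
```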

For comparison including a longer URL takes you from this:

With Borges / Alberto Manguel.

To this:

With Borges / Alberto Manguel.

As these QR codes are meant for a mobile phone camera to direct a Web browser to a page, I thought the URL itself need not be “cool”, bookmarkable, or even very human readable. One option is to shorten the URL as it is generated and encode that. Here is the result of shortening with our own shortening service. (Your library does have its own URL shortening service, right?)

With Borges / Alberto Manguel.

That is much nicer! Better than the original link to the mobile catalogue, even.
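The difference comes down to how many characters the QR code has to encode; a quick comparison (with made-up URLs standing in for the real ones) shows why the shortened version produces a sparser code:

```javascript
// A QR code's density grows with the length of the data it encodes.
// Both URLs here are invented for illustration.
var longUrl = 'http://catalogue.example.org/record=b1234567' +
    '?utm_source=webpac&utm_medium=qr&utm_campaign=mobile';
var shortUrl = 'http://s.example.org/abc12';

// The shortened URL gives the QR code far fewer characters to carry.
var saving = longUrl.length - shortUrl.length;
```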

How to do it

Actually achieving this result in the Millennium ‘classic catalog’ / WebPAC is more difficult. For the extra step of shortening the URL you will probably need to call your shortening service’s API first, then generate the QR code image from the shortened URL. On the WebPAC you’re going to need to do this in JavaScript.

In the WebPAC I knew I would run into problems with insecure scripts, because our shortener doesn’t have an SSL certificate yet, so this is just an example. I was able to do it using jQuery and the jquery-urlshortener.js plugin by James Robert, combined with a third-party service as a shortener.

First add jQuery and jquery-urlshortener.js to your INSERTTAG_INHEAD wwwoption. I put a local copy of jQuery on our server for testing:

<script language="JavaScript" type="text/javascript" src="/screens/qrcode.js"></script>
<script type='text/javascript' src='/screens/jquery.js'></script>
<script type='text/javascript' src='/screens/jquery.urlshortener.js'></script>

Add your API key and username to jquery-urlshortener.js.

Update qrcode.js to request a short URL from the shortening service using jQuery, and use that to generate a QR code with the Google Chart API:

function linkto_catalog_qr() {
    // Base URL of the mobile catalogue record display (value elided here).
    var qrairpacstub = "";
    var qrrecordlink = document.getElementById("recordnum").getAttribute("href");
    // Pull the record number out of the link's query string.
    var pos = qrrecordlink.indexOf("=");
    var qrrecordid = qrrecordlink.substr(pos + 1);
    var longurl = qrairpacstub + qrrecordid + '?utm_source=webpac&utm_medium=qr&utm_campaign=mobile';
    $.shortenUrl(longurl, function (short_url) {
        // Encode the shortened URL as a QR code image via the Google Chart API
        // (the 120x120 size here is illustrative).
        var qrimg = 'https://chart.googleapis.com/chart?cht=qr&chs=120x120&chl=' + encodeURIComponent(short_url);
        document.getElementById('qrcode').innerHTML = '<img src="' + qrimg + '" alt="QR code for this record" title="QR code for this record" /><br><a href="">What\'s this?</a>';
    });
}

This works as a proof of concept and is enabled on our test / staging port, for example:

With Borges / Alberto Manguel.

I am not so keen on sending thousands of requests to a third-party shortener every day and would prefer to use our own shortening service, so I’m not making this live just yet.

Thoughts on usability testing the next-gen catalogue

This is the second post in a series exploring user understanding of next-generation catalogues.

What I like about usability testing

I have always found usability testing library systems enjoyable – as a participant, facilitator, and manager – and have gotten useful things out of it. My preferred style is Steve Krug’s “lost-our-lease, going-out-of-business-sale usability testing” from Don’t make me think (2006), with about five subjects and a very focused idea of what I wanted to get out of the process: specific problems that needed user feedback to inform our judgments.

What I like best about this method is that it represents effective action you can take quickly on a shoestring. You can short-circuit the endless librarians-around-a-table discussions you can get into about Web stuff: let’s test this out rather than just talking about it! I have defended using this method with small groups, as even testing a few users tells you something about what your users are actually doing, whereas testing no one at all tells you nothing. In writing that I realised I was paraphrasing Jakob Nielsen: “Zero users give zero insights”.

We’ll likely employ this method when we rework the Senate House Library Web site next year.

What I don’t

I think there are some problems with this style of testing as a methodology, so I have been looking into other methods for investigating Encore.

My main problem is the artificial nature of the test. Putting a person in your usability “lab” with a camera recording and asking them to do various tasks does not produce a natural experience of using your library catalogue. Your methods of observing the test will alter the user’s behaviour: these are observer effects you cannot hope to control for. In my dissertation interviews I tried to temper this by focusing on subject searching, browsing, and exploration of the next-generation catalogue interface rather than asking subjects to complete tasks. I used a form of close questioning to explore participants’ understanding of Encore. This relies on asking probing questions along the lines of:

  • How?
  • Why?
  • What?
  • What if?

Ultimately this is based on a contextual inquiry approach described by Beyer and Holtzblatt in Contextual design (1998), but done with the understanding that it was taking place in an artificial environment not “the field”.

In truth the usability testing-style part of the investigation was meant as a warm-up towards comparisons between two or more catalogues using the repertory grid technique. I thought this worked reasonably well. The usability test section yielded a good deal of qualitative data and certainly worked to get participants into the right frame of mind for grid construction.

It also produced useful results about tweaks we could make so that Encore better fits readers’ expectations of a library catalogue. That is, it worked as usability testing.

However as I did the work I was aware of the artificial nature of the process affecting how my subjects behaved and their problems engaging with the process in anything like a natural way. The cognitive walkthrough style is difficult on two levels: it feels odd and a bit embarrassing to do it as a subject, but also it makes you think about what you are doing and how you should express yourself which affects your behaviour. Several participants picked up on this during their interviews and criticised it.

I’ve found our readers’ experience of the catalogue is deeply affective, and I think we need to dig deeper into that affective layer to understand the user experience. I think ethnographic methods like the contextual inquiry approach are the way to go here, and I will return to this in my next post.

Final point. I know our vendor has done their own usability testing on the Encore interface including informing changes to the current look and feel, in use on our catalogue. I have no reason to doubt its effectiveness or rigour. We could do usability testing of Encore, but I doubt we would add much beyond what the vendor already knows.


Beyer, H. and Holtzblatt, K. (1998) Contextual design. London: Morgan Kaufmann.

Krug, S. (2006) Don’t make me think. 2nd edn. Berkeley, CA: New Riders.

User feedback and problems with the next-gen catalogue

This is the first post in a series exploring user understanding of next-generation catalogues.

Our situation

We made our next-generation catalogue / discovery interface, Encore by Innovative Interfaces, live in June 2011. Since then I’ve been trying to better understand the causes of the problems readers have with it.

As a starting point I’ve been doing this through the lens of mental models theory, but I’m trying not to see every problem in terms of one particular theory just because that’s what I’m looking for. Sometimes a missing feature is just a missing feature, to paraphrase something attributed to Freud.

I’d expected some experienced users would have problems moving to a next-gen catalogue because their “bibliographic retrieval” mental model, fitted to a traditional library catalogue, would not fit so well to a “web search” style catalogue. To put this in reverse, and more fairly blame the catalogue rather than the user: I had thought the next-gen catalogue, in trying to be like a Web search engine, would cause some problems for experienced users. I’m looking at this as mental model failure or mismatch, not implying it’s people not wanting to change.

I expected problems would surface easily, as we positioned Encore aggressively, making it the default search (named Quick Search) when you visit the Senate House Library catalogue front page. This was meant as a nudge: you can select the old WebPAC catalogue, but it’s not offered as the default and choosing it takes a little effort.

Because of this my staff training for Encore focused on helping staff better explain how Encore works, with a view to building better models in the minds of readers.

User feedback

I’ve been gathering feedback reported via staff, Twitter and Facebook, and our online feedback form. Broadly they fit into these categories:

  • A general I like it (~10%) / I don’t like it or really don’t like it (~20%)
  • I can’t work out how to do x like you can in the old catalogue / It lacks feature x the previous catalogue has (~20%).
  • Suggestion for an enhancement (my personal favourite) (~9%)
  • Questions or feedback not about Encore (~40%)

The number of comments not about Encore suggests the first thing I should do is put an easy-to-find “Ask us a question” and a “Report a problem with this record” link on each page! We don’t have those on the old catalogue, so I expect we’re missing out on picking up potential enquiries there.

A good chunk of the “I can’t work out how to do x” problems represent application of mental models from the old catalogue onto the new one. For example, an expectation of being able to browse based on phrase indexing of fields like title: that simply doesn’t exist in Encore, and it can be baffling if it’s what you expected. I also got some interesting comments about the look and feel of Encore as “cluttered” or “busy”, which affected the user’s perception of the catalogue’s functionality far beyond what you’d expect. Innovative have since released a new Encore skin which is subjectively much cleaner and pretty much nails that problem.

I’ve been pleasantly surprised to receive any positive comments at all, as my expectation in a customer service situation is that people are more likely to spend the time if they have something negative to say. I think Twitter helps with this: it’s much easier to say something immediately by microblogging than to marshal your thoughts and fill in a ponderous official-looking form. Overall I’m happy with a 1:2 ratio of positive to negative, and of course each negative comment is an opportunity to say something about Encore and explain it better.

By the way, some of the positive comments are wonderful such as this tweet:

Next steps

It’s useful to get any feedback about what you’re doing, but passively collecting data is not going to get us where we need to be. The question I want to answer is along the lines of: how can the library make this catalogue better support readers and improve their experience of Senate House Libraries? This is more complex than answering usability-type questions about the Encore interface versus the old WebPAC, or comparing the effects of tweaking around the edges of the Encore configuration. Not that I haven’t done plenty of that already…

To do this we need to actively gather data on our readers’ experiences with Encore and how they make use of it during information seeking. More to come on this later.