“UX for the win!” at #CityMash: open and focused coding of qualitative research data for discovery user experience

In Library Services at Imperial College London, between January and April 2015 my team completed two iterations of user experience testing of our Ex Libris Primo discovery system, with a view to redeveloping the user interface to provide an improved user experience.

For the #CityMash Mashed Library unconference, Karine Larose and I are running a workshop on the methods we used in our second iteration of testing. Rather than run a ‘show and tell’ about our approach, the workshop will provide experience of using our methods with some of our data, in a similar way to how we conducted the research ourselves. We will provide hands-on experience of these methods, attempt to demystify the approaches used, and hope to demonstrate how exciting we find the professional praxis of systems librarianship.

This blog post explains the background and provides a practical overview and some theoretical scaffolding ahead of #CityMash. What we present is just one approach and all methods are flawed; we are extremely interested in hearing comments on or objections to our methodology around discovery user experience.

Acknowledgement

We’d like to acknowledge the hard work of George Bray, Master’s student at UCL Department of Information Studies, during a work placement with our team. George designed and undertook much of this testing, based on our overall guidance, and we would not have been able to produce what we did without him.

Why we use constructivist grounded theory

The methods we chose for our user experience research were qualitative and post-positivist. They are based on ideas developed by Barney Glaser and Anselm Strauss (1967) in their classic (and arguably classical, read on…) The discovery of grounded theory. Grounded theory includes:

  • Data collection and analysis as a simultaneous process
  • Analytically constructing “codes” and categories from data itself
  • The “constant comparative method” of comparing existing and new data in an ongoing process
  • Developing theory during each stage of data collection and analysis
  • Sampling to aid building theory, rather than being representative of the population
  • In pure grounded theory, the literature review comes after the analysis

This list is paraphrased from Charmaz (2012; 2014 p. 7).

The above may sound unusual to those with experience of more quantitative methods, and the idea of the literature review coming last may sound unusual to everyone. Bear with me. If you are interested in reading more I don’t necessarily recommend Glaser & Strauss as a first step. For an introduction to grounded theory at LIS Master’s level, there is a chapter in the second edition of Alison Pickard’s Research methods in information (2013) which provides a detailed and readable outline.

Our touchstone work has been Kathy Charmaz’s Constructing grounded theory (2014), where she explains a constructivist approach to grounded theory. Core to her ideas are the acknowledgement of subjectivity and relativity in the research process, and a drive towards abstract understanding of observed phenomena within the specific circumstances of the research (Charmaz, 2008), which particularly resonated with us doing discovery research.

Charmaz is no ideologue: for her, different traditions in grounded theory represent a “constellation of methods” (2014 p. 14) rather than a binary opposition. We have drawn on elements from the empirical interpretivist grounded theory tradition, from constructivist grounded theory, and from the critical theory approaches that inform my thinking elsewhere in LIS. These are the differences as we understand them:

  • Objectivist grounded theory: theory ‘emerges’ from the data. Constructivist grounded theory: researchers construct categories from the data.
  • Objectivist: researchers develop generalizations and explanations out of context. Constructivist: researchers aim to create an interpretive understanding accounting for context.
  • Objectivist: the researcher’s voice has priority. Constructivist: the participant’s voice is integral to analysis and presentation.

What does this mean for user experience work?

You can see how a constructivist approach will focus on the voice of the user as an integral feature in understanding and presenting data. In my team (and I hope in your team) user experience work has never been informed by “the librarian knowing best”, but this approach provides a particular emphasis. In my experience the voice of the user, with her context and affective responses, is a powerful way of making the case for changes to our systems. This presentation can be extremely eye-opening even for those who work day-to-day in user-facing roles and know our users well.

We definitely did want to inductively develop theory from our data, but we wanted to be mindful of the user’s context and be interpretive, as we know our discovery system is just one part of a complex and shifting information landscape our users inhabit. We use the iterative and analytical approach of coding, and codes necessarily result from the researcher and data interacting (Charmaz, 2012). However, our focus is wherever possible on trying to analyse the data rather than describe it. Ideally this should happen from the first moments of coding; more on this below.

Fundamental to constructivist grounded theory, the resulting ideas we develop are based on our interpretation of data, and as researchers we cannot stand ‘outside’ that interpretation. What we create from the data is based on conceptualizing what we have studied and observed in user behaviours: we must stand inside and ‘own’ our analyses, which will be affected by our biases, our preconceptions, and our emotional investment in the work we do.

This is not unprofessional, but an acknowledgement of the shared humanity of the researcher and the participant, and of the value of our work experience as practitioners that allows us to critically reflect on and develop theories of practice. To balance our subjectivity as researchers, a key part of the constructivist process has been to critically reflect on our preconceptions about discovery, information literacy, and users’ behaviour and expectations of doing their research using the tools we provide.

Working with qualitative data for user experience research

We are doing analysis of qualitative data collected during interviews to investigate Primo user experience. Ahead of interviewing proper, we held planning meetings with Library Services staff drawn from all sections of the library to work through starting points: primarily, what we wanted to get from the interviewing process, and what we wanted to know by the end of this round of investigation.

Extensive notes of these workshops were taken, and used by George to provide an initial focus for our interviewing. These are not quite research questions, but areas to focus on. These were:

  • The purpose, construction, and use of search and resources
  • Presentation of information in search: what matters to the user when selecting the right result?

Following this, George and Karine developed an interview script for use by facilitators. This included general questions about information seeking as well as some specific tasks to carry out on Primo. The interview is structured and, in grounded theory, would ideally be based around open questions, helping us as researchers unpick meaning and move towards answering “why” questions in our analysis. We used a mixture of questions and specific tasks for users to complete. Our interview script is available: Primo UX Interview questions June 2015 (PDF).

In practice interviewers have different styles, and some facilitators stuck more closely to the script than others. This is not necessarily a problem; remember that as an observer you are free to suggest places where we need to run another iteration and gather more data.

Our research data comprises the audiovisual recordings and the facilitator’s notes. The notes help us understand the facilitator’s perspective on the interview and provide useful observations.

For #CityMash, we are providing a recording of the first part of an interview. The full interviews at Imperial were longer and made use of other methods drawn from web usability testing; the #CityMash data does not contain these. We gained informed consent for participant interview recordings and our written notes to be used for presentation and data analysis at #CityMash.

#CityMash technical requirements

  • You will need at least a tablet, ideally a laptop, to watch and listen to the audiovisual recordings. A smartphone screen will likely not be big enough to see what’s going on. Headphones are ideal but not essential.
  • Sharing a device with another delegate is possible. Coding together and sharing your observations and thoughts as you go in a negotiated process would provide an interesting alternative to doing this on your own.
  • You will need a way of recording your coding, memos, and any other notes. Any text editor, word processor, or pen and paper will work fine. (At Imperial College, to facilitate collaborative coding and sharing and to save time, we write directly in our staff wiki.)

Beginning the process of open coding

Charmaz’s (2014, p. 116) guidance is that during initial or open coding, we ask:

  • What is this data a study of?
  • What do the data suggest? [What do they p]ronounce? [What do they l]eave unsaid?
  • From whose point of view?
  • What theoretical category does this specific [data] indicate?

Grounded theory textbooks often give examples of coding based on narrative such as diaries or written accounts, and show example codes side-by-side with this. We are using audiovisual recordings instead, but the process is similar: listen to each statement and sentence spoken, watch the user’s behaviour as you go through the video, and code piece-by-piece. Try to “sweep” through the data fairly quickly rather than spending too much time on each code. You will get better and faster at this as you go.

For codes themselves, try starting by writing down short analytic observations about the data as you experience it. Codes should “result from what strikes you in the data” (Charmaz, 2012) and should be “short, simple, active, and analytic” (Charmaz, 2014 p. 120). Remember you’re trying to be analytical about what you see, not just record what is happening.

Charmaz’s (2014 p. 120) ‘code for coding’ is:

  • Remain open
  • Stay close to the data
  • Keep your codes simple and precise
  • Construct short codes
  • Preserve actions
  • Compare data with data
  • Move quickly through the data

Keep the facilitator’s notes alongside you and try to understand how these relate to what she saw and understood in the interview.

Don’t worry about being perfect the first time. Coding is iterative and you are allowed to go back, rework things, and make new connections between data. Initial codes are provisional, and working quickly both forces you to be spontaneous and gives more time to go back and iterate over the data again.

It is very difficult, but try to put your favourite theoretical “lens” to one side during initial coding. It’s perfectly fine to bring in these ideas later, but for open coding you are trying to spark thoughts and bring out new ideas from the data rather than apply someone else’s grand theory.
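If you are coding in a text editor rather than on paper, a simple structure keeps each open code tied to its place in the recording, with room for a memo. This is only an illustrative sketch; the codes and timestamps below are invented, not taken from our study:

```python
# Minimal sketch: recording open codes against timestamps in an interview video.
# All example data here is invented for illustration.
from dataclasses import dataclass


@dataclass
class OpenCode:
    timestamp: str   # position in the recording, e.g. "03:12"
    code: str        # short, simple, active, analytic phrase
    memo: str = ""   # optional longer reflection


codes = [
    OpenCode("01:05", "scanning result titles quickly"),
    OpenCode("02:40", "expressing frustration with filters",
             memo="Strong affective response; compare with later interviews."),
    OpenCode("03:12", "reworking search after skimming results"),
]

# The constant comparative method in miniature: re-read earlier codes
# as you add new ones, looking for similarities and contrasts.
for c in codes:
    print(f"[{c.timestamp}] {c.code}")
```

Keeping codes as structured entries like this also makes the later, focused round of coding easier, because initial codes can be sorted, compared, and grouped.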

Focused coding: refining data to begin to develop theory

Our #CityMash workshop is limited in time so we will do an initial round of open coding followed by small group discussion exploring focused coding.

Focused coding is the process of analyzing and assessing your first round of codes, and as a guide it should be a reasonably fast process. You are looking for connections and relationships between codes, and comparing them with the data and with each other. Looking at particular pairs of codes, which work better as overall analytical categories? Which give a better direction in developing an overall theory from the data?

Think about how you might create a theoretical framework later about discovery user experience to help inform changes to the system. Which codes better fit the data in allowing you to do this?

Charmaz (2014, pp. 140-151) poses the following questions to help make choices about focused coding:

  • What do you find when you compare your initial codes with data?
  • In which ways might your initial codes reveal patterns?
  • Which of these codes best account for the data?
  • Have you raised these codes to focused codes?
  • What do your comparisons between codes indicate?
  • Do your focused codes reveal gaps in the data?
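One way to make the comparison step above concrete is to map each candidate focused code to the initial codes it might subsume, then see which candidates account for more of the data. A minimal sketch, with invented codes and groupings (not our actual analysis):

```python
# Minimal sketch: comparing how well candidate focused codes account
# for a set of initial (open) codes. All examples are invented.
from collections import Counter

initial_codes = [
    "scanning result titles quickly",
    "reworking search after skimming results",
    "trusting the first page of results",
    "expressing frustration with filters",
    "ignoring advanced search options",
]

# Candidate focused codes, each mapped to the initial codes it might subsume.
focused = {
    "iterative, web-like searching": {
        "scanning result titles quickly",
        "reworking search after skimming results",
        "trusting the first page of results",
    },
    "avoiding interface complexity": {
        "expressing frustration with filters",
        "ignoring advanced search options",
    },
}

# Which candidate accounts for more of the data?
coverage = Counter({cat: len(members) for cat, members in focused.items()})
for cat, n in coverage.most_common():
    print(f"{cat}: accounts for {n} initial codes")
```

The counts are only a starting point for discussion; the analytic judgement about which categories better fit the data, and where gaps remain, is still yours.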

The results of George’s analysis of our focused coding were written up into a summary report of the things we needed to concentrate on in redeveloping our Primo interface. The systems team is currently working on Primo back-end configuration and front-end design to fulfil this, and these findings will be the subject of an upcoming blog post.

#CityMash slides

Our slides from our #CityMash talk are also available.

References

Charmaz, K. (2008) ‘Constructionism and the grounded theory method’, in Holstein, J.A. & Gubrium, J.F. (eds.) Handbook of constructionist research. New York, NY: Guilford Press, pp. 397-412.

Charmaz, K. (2012) ‘The power and potential of grounded theory’, Medical Sociology Online, 6(3), pp. 2-15. Available at: http://www.medicalsociologyonline.org/resources/Vol6Iss3/MSo-600x_The-Power-and-Potential-Grounded-Theory_Charmaz.pdf (Accessed: 11 June 2015).

Charmaz, K. (2014) Constructing grounded theory. 2nd edn. London: Sage.

Glaser, B.G. & Strauss, A.L. (1967) The discovery of grounded theory. Chicago, IL: de Gruyter.

Pickard, A.J. (2013) Research methods in information. 2nd edn. London: Facet.

Towards ethnographies of the next-gen catalogue user

This is the third post in a series exploring user understanding of next-generation catalogues.

Talk

This is posted to coincide with the ChrisMash Mashed Library event organised by Gary Green in London on December 3rd. I spoke about the outcomes of an investigation into user experience and understanding of the next-gen catalogue, and next steps we’re taking at Senate House Library. Not very Christmassy, I admit…

‘@preater’s presentation’ on Flickr by Paul Stainthorp, license CC-BY-SA.

Slides from this talk are now available.

My slides were kept deliberately simple – it was presented in a pub on a flat screen TV! Notes are included to explain things further. Please get in touch if you want to ask anything about this.

Starting point

We implemented Encore from Innovative Interfaces in June to run alongside and partly replace the older WebPAC Pro catalogue, also from Innovative. Our Encore instance is here; the search I used in my talk was ‘industrial workers of the world’.

Ahead of implementing we didn’t have much idea about how library users would understand this type of catalogue, so for my master’s dissertation I had a look at this using various qualitative methods:

  • Usability-test style cognitive walk-throughs, done almost as a warm-up but providing lots of interesting data. As an aside, I think every library should be doing this with their catalogue – it is so quick and easy to do.
  • A semi-structured interview using Repertory Grid technique. This was very good for comparing what my participants really thought of each type of catalogue.

Key findings

To summarise very briefly:

A Web-like catalogue encourages Web-like behaviour

Putting readers in front of a catalogue interface that looks and behaves like a Web search engine results in behaviours closer to a Web search engine than traditional information retrieval.

By this I mean:

  • A tendency to scan and skim-read Web pages quickly, concentrating on titles.
  • A process of iterative searching based on using a few keywords and then reworking the search over again based on what’s found on the results page.
  • Trust in the relevancy ranking of the catalogue; an expectation that the catalogue should be tolerant of small errors or typos via ‘did you mean…?’ suggestions.
  • The tendency to ‘satisfice’, meaning making do with results that seem good enough for the purpose rather than searching exhaustively.
  • The view that search queries are an ongoing process, not something that should produce a single perfect set of results.

Caution! This is based on coding qualitative data from nine people and is not intended to be absolute or apply to every user. I found strongly contrasting opinions of the catalogue with a tendency for younger readers to take to the new interface much more easily.

The method I used was inductive, that is, developed from analysis of what I observed: I really did not expect this ahead of time.

Using our catalogue is an affective experience

I found there was a strongly affective or emotional response to use of our catalogue beyond what you’d think you might get from using a mere lookup tool. The response was about more than just the catalogue being pleasant to use or familiar from other sites.

This was very interesting because I do not see why a library catalogue should not be a joy to use. Why should library catalogues be a painful experience where you have to “pay your dues”? Even if we changed nothing else behind the scenes and made the catalogue more attractive, you could argue this would improve things because we tend to believe more attractive things work better because they’re more enjoyable. Here I am paraphrasing from Don Norman (2004).

Next steps

Usability testing gets us so far, but as I’ve said previously in an artificial “lab” setting it does not produce natural behaviour. That’s a problem because we don’t get to see the reader’s true understanding emerge. We don’t get to see how they really behave in the library when using the catalogue.

I went fairly far in comparing systems – WebPAC Pro versus Encore – but what anchored that testing was the old catalogue. Having implemented the new catalogue and positioned it fairly aggressively as the default interface I wanted to dig deeper and better understand how the catalogue fits in to the reader’s experience of doing research at Senate House Library.

Think about the experience of library use: the reader comes in and experiences an entire “ecology”: the physical building; print book and journal collections; e-resources; the library staff; our catalogues and Web sites. I wanted to better understand how readers experience the catalogue in this context rather than just thinking about it in systems terms as a tool for looking items up that is used with a particular rate of error or success.

Towards ethnographies of the next-gen catalogue user

What we’re going to do is borrow techniques from anthropology to do ethnography in the library. This means studying and observing readers in their habitat: as they work in the library and do their research.

The outcomes I want from this are fairly “soft”, based around our staff knowing the readers better. What I want to know is: how can the library better support our readers’ use of the catalogue and improve their experience of Senate House Library? This is fundamental: I think without better understanding our readers’ use of our catalogues, we can’t start to improve what we do and provide a better service.

Properly speaking this is more a case of “borrowing ethnographic methods” than “doing ethnography”. This is OK as the methods aren’t owned by one field of social science; as Harry Wolcott (2008) says, they “belong to all of us”.

Practically, what we want to do is use a battery of techniques including semi-structured interviews, observation, and close questioning to generate data that will allow development of theory from that data as it is analysed qualitatively. This is a grounded theory approach. The actual work will likely be small “micro-ethnographies” done over a period of some months in the library.

Examples

In my talk I mentioned some examples of ethnographic research done in libraries; these are:

  • Investigating user understanding of the library Web site – University of North Carolina at Charlotte (Wu and Lanclos, 2011)
  • Looking at how the physical library space is used – Loughborough University (Bryant, 2009)
  • Ethnographies of subject librarians’ reference work – Hewlett Packard Library and Apple Research Library (Nardi and O’Day, 1999)
  • The ERIAL (Ethnographic Research in Illinois Academic Libraries) project which has produced various outputs and has an excellent toolkit telling you how to do it (Asher and Miller, 2011)

References

Asher, A. and Miller, S. (2011) ‘So you want to do anthropology in your library?’ Available at: http://www.erialproject.org/wp-content/uploads/2011/03/Toolkit-3.22.11.pdf

Bryant, J. (2009) ‘What are students doing in our library? Ethnography as a method of exploring library user behaviour’, Library and Information Research, 33 (103), pp. 3-9.

Nardi, B.A. and O’Day, V.L. (1999) Information ecologies. London: MIT Press.

Norman, D.A. (2004) Emotional design. New York, NY: Basic Books.

Wolcott, H.F. (2008) Ethnography: a way of seeing. 2nd edn. Plymouth: AltaMira.

Wu, S.K. and Lanclos, D. (2011) ‘Re-imagining the users’ experience: an ethnographic approach to web usability and space design’, Reference Services Review, 39 (3), pp. 369-389.

Thoughts on usability testing the next-gen catalogue

This is the second post in a series exploring user understanding of next-generation catalogues.

What I like about usability testing

I have always found usability testing library systems enjoyable – as a participant, facilitator, and manager – and got useful things out of it. My preferred style is Steve Krug’s “Lost our lease, going-out-of-business-sale usability testing” from Don’t make me think (2006), with about five subjects and a very focused idea about what I wanted to get out of the process. By that I mean specific problems that needed user feedback to inform our judgments.

What I like best about this method is it represents effective action you can take quickly on a shoestring. You can short-circuit the endless librarians-around-a-table discussions you can get into about Web stuff: let’s test this out rather than just talking about it! I have defended using this method with small groups, as even testing a few users tells you something about what your users are actually doing whereas testing no-one tells you nothing at all. In writing that I realised I was paraphrasing Jakob Nielsen, “Zero users give zero insights”.

We’ll likely employ this method when we rework the Senate House Library Web site next year.

What I don’t

I think there are some problems with this style of testing as a methodology, so I have been looking into other methods for investigating Encore.

My main problem is the artificial nature of the test. Putting a person in your usability “lab” with a camera recording and asking them to do various tasks does not produce a natural experience of using your library catalogue. Your methods of observing the test will alter the user’s behaviour: these are observer effects you cannot hope to control for. In my dissertation interviews I tried to temper this by focusing on subject searching, browsing, and exploration of the next-generation catalogue interface rather than asking subjects to complete tasks. I used a form of close questioning to explore participants’ understanding of Encore. This relies on asking probing questions along the lines of:

  • How?
  • Why?
  • What?
  • What if?

Ultimately this is based on a contextual inquiry approach described by Beyer and Holtzblatt in Contextual design (1998), but done with the understanding that it was taking place in an artificial environment not “the field”.

In truth the usability testing-style part of the investigation was meant as a warm-up towards comparisons between two or more catalogues using the repertory grid technique. I thought this worked reasonably well. The usability test section yielded a good deal of qualitative data and certainly worked to get participants into the right frame of mind for grid construction.

It also produced useful results about tweaks we could make to improve Encore as a better fit to readers’ expectations of a library catalogue. That is, it worked as usability testing.

However, as I did the work I was aware of the artificial nature of the process affecting how my subjects behaved, and of their problems engaging with the process in anything like a natural way. The cognitive walkthrough style is difficult on two levels: it feels odd and a bit embarrassing to do as a subject, but it also makes you think about what you are doing and how you should express yourself, which affects your behaviour. Several participants picked up on this during their interviews and criticised it.

I’ve found our readers’ experience of the catalogue is deeply affective, and think we need to dig deeper into that affective layer to understand the user experience. I think ethnographic methods like the contextual inquiry approach are the way to go here, and will return to this in my next post.

Final point. I know our vendor has done their own usability testing on the Encore interface including informing changes to the current look and feel, in use on our catalogue. I have no reason to doubt its effectiveness or rigour. We could do usability testing of Encore, but I doubt we would add much beyond what the vendor already knows.

References

Beyer, H. and Holtzblatt, K. (1998) Contextual design. London: Morgan Kaufmann.

Krug, S. (2006) Don’t make me think. 2nd edn. Berkeley, CA: New Riders.

User feedback and problems with the next-gen catalogue

This is the first post in a series exploring user understanding of next-generation catalogues.

Our situation

We made our next-generation catalogue/discovery interface, Encore by Innovative Interfaces, live in June 2011. Since then I’ve been trying to better understand the causes of the problems readers have with it.

As a starting point I’ve been doing this through the lens of mental models theory, but I’m trying not to see every problem in terms of one particular theory just because that’s what I’m looking for. Sometimes a missing feature is just a missing feature, to paraphrase something attributed to Freud.

I’d expected some experienced users would find problems moving to a next-gen catalogue because their “bibliographic retrieval” mental model, fitted to a traditional library catalogue, would not fit so well to a “web search” style catalogue. To view this in reverse and, much more fairly, blame the catalogue rather than the user: I had thought the next-gen catalogue, in trying to be like a Web search engine, would cause some problems for experienced users. I’m looking at this as mental models failure or mismatch, not implying it’s people not wanting to change.

I expected problems would surface easily as we positioned Encore aggressively, making it the default search (named Quick Search) when you visit the Senate House Library catalogue front page. This was meant as a nudge: you can select the old WebPAC catalogue but you’re not offered it as the default and it’s a little bit of effort to choose it.

Because of this my staff training for Encore focused on helping staff better explain how Encore works, with a view to building better models in the minds of readers.

User feedback

I’ve been gathering feedback reported via staff, Twitter and Facebook, and our online feedback form. Broadly, comments fit into these categories:

  • A general “I like it” (~10%) / “I don’t like it” or “really don’t like it” (~20%)
  • “I can’t work out how to do x like you can in the old catalogue” / “It lacks feature x the previous catalogue has” (~20%)
  • Suggestion for an enhancement (my personal favourite) (~9%)
  • Questions or feedback not about Encore (~40%)
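If you keep a simple log of categorised comments, tallies and ratios like these take only a few lines to compute. A minimal sketch, using invented counts roughly matching the proportions above (not our real comment log):

```python
# Minimal sketch: tallying categorised feedback comments.
# The counts below are invented for illustration.
from collections import Counter

comments = (
    ["like it"] * 10
    + ["don't like it"] * 20
    + ["missing feature"] * 20
    + ["enhancement suggestion"] * 9
    + ["not about Encore"] * 40
)

tally = Counter(comments)
total = len(comments)
for category, n in tally.most_common():
    print(f"{category}: {n} ({100 * n / total:.0f}%)")

# Ratio of broadly positive to broadly negative comments
ratio = tally["like it"] / tally["don't like it"]
print(f"positive:negative ratio = 1:{1 / ratio:.0f}")
```

With these invented counts the ratio comes out at 1:2, the same proportion of positive to negative comments discussed below.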

The number of comments not about Encore suggests the first thing I should do is put an easy-to-find “Ask us a question” and a “Report a problem with this record” link on each page! We don’t have that on the old catalogue so I expect we’re missing out on picking up potential enquiries there.

A good chunk of the problems related to “I can’t work out how to do x” represent the application of mental models from the old catalogue to the new one. For example, an expectation of being able to browse based on phrase indexing of fields like title. This simply doesn’t exist in Encore, and it can be baffling if it is what you expected. I also got some interesting comments about the look and feel of Encore as “cluttered” or “busy”, which affected the user’s perception of the catalogue functionality way beyond what you’d expect. Innovative have since released a new Encore skin which is subjectively much cleaner and pretty much nails that problem.

I’ve been pleasantly surprised to receive positive comments at all, as my expectation in a customer service situation is that people are more likely to spend the time if they want to say something negative. I think Twitter helps with this as it’s much easier to say something immediately by microblogging than to marshal your thoughts and fill in a ponderous official-looking form. Overall I’m happy to get a 1:2 ratio of positive to negative, and of course each negative comment is an opportunity to say something about Encore and better explain it.

By the way, some of the positive comments are wonderful such as this tweet:

Next steps

It’s useful to get any feedback about what you’re doing, but passively collecting data is not going to get us where we need to be. The question I want to answer is along the lines of: how can the library make this catalogue better support readers and improve their experience of Senate House Libraries? This is going to be more complex than answering usability-type questions about the Encore interface versus the old WebPAC, or tweaking around the edges of the Encore configuration. Not to say I haven’t done plenty of that already…

To do this we need to actively gather data on our readers’ experiences with Encore and how they make use of it during information seeking. More to come on this later.