In Library Services at Imperial College London, between January and April 2015 my team completed two iterations of user experience testing of our Ex Libris Primo discovery system, with a view to redeveloping the user interface to provide an improved user experience.
For the #CityMash Mashed Library unconference, Karine Larose and I are running a workshop on the methods we used in our second iteration of testing. Rather than run a ‘show and tell’ about our approach, the workshop will give delegates hands-on experience of these methods using some of our data, in a similar way to how we conducted the research ourselves. We aim to demystify the approaches used, and hope to demonstrate how exciting we find the professional praxis of systems librarianship.
This blog post explains the background and provides a practical overview and some theoretical scaffolding ahead of #CityMash. What we present is just one approach and all methods are flawed; we are extremely interested in hearing comments on or objections to our methodology around discovery user experience.
We’d like to acknowledge the hard work of George Bray, Master’s student at UCL Department of Information Studies, during a work placement with our team. George designed and undertook much of this testing during his work placement, based on our overall guidance, and we would not have been able to produce what we did without him with us.
Why we use constructivist grounded theory
The methods we chose for our user experience research were qualitative and post-positivist. They are based on ideas developed by Barney Glaser and Anselm Strauss (1967) in their classic (and arguably classical, read on…) The discovery of grounded theory. Grounded theory includes:
- Data collection and analysis as a simultaneous process
- Analytically constructing “codes” and categories from data itself
- The “constant comparative method” of comparing existing and new data in an ongoing process
- Developing theory during each stage of data collection and analysis
- Sampling to aid building theory, rather than being representative of the population
- In pure grounded theory, the literature review comes after the analysis
This list is paraphrased from Charmaz (2012; 2014 p. 7).
The above may sound unusual to those with experience of more quantitative methods, and the idea of the literature review coming last may sound unusual to everyone. Bear with me. If you are interested in reading more, I don’t necessarily recommend Glaser & Strauss as a first step. For an introduction to grounded theory at LIS Master’s level, there is a chapter in the second edition of Alison Pickard’s Research methods in information (2013) which provides a detailed and readable outline.
Our touchstone work has been Kathy Charmaz’s Constructing grounded theory (2014) where she explains a constructivist approach to grounded theory. Core to her ideas are the acknowledgement of subjectivity and relativity in the research process, and a drive towards abstract understanding of observed phenomena within the specific circumstances of the research (Charmaz, 2008) which particularly resonated with us doing discovery research.
Charmaz is no ideologue: for her, different traditions in grounded theory represent a “constellation of methods” (2014 p. 14) rather than a binary opposition. We have drawn on elements from the empirical interpretivist grounded theory tradition, from constructivist grounded theory, and from the critical theory approaches that inform my thinking elsewhere in LIS. These are the differences as we understand them:
| Objectivist grounded theory | Constructivist grounded theory |
| --- | --- |
| Theory ‘emerges’ from the data | Researchers construct categories from the data |
| Researchers develop generalizations and explanations out of context | Researchers aim to create an interpretive understanding accounting for context |
| The researcher’s voice has priority | The participant’s voice is integral to analysis and presentation |
What does this mean for user experience work?
You can see how a constructivist approach will focus on the voice of the user as an integral feature in understanding and presenting data. In my team (and I hope in yours) user experience work has never been informed by the librarian ‘knowing best’, but this approach provides a particular emphasis. In my experience the voice of the user, seen in her context and with her affective responses, is a powerful way of making the case for changes to our systems. This presentation can be extremely eye-opening even for those who work day-to-day in user-facing roles and know our users well.
We definitely did want to inductively develop theory from our data, but we wanted to be mindful of the user’s context and be interpretive, as we know our discovery system is just one part of a complex and shifting information landscape our users inhabit. We use the iterative and analytical approach of coding, and codes necessarily result from the researcher and data interacting (Charmaz, 2012). However, our focus is wherever possible on analysing the data rather than describing it. Ideally this should happen from the first moments of coding; more on this below.
Fundamental to constructivist grounded theory is that the ideas we develop are based on our interpretation of data, and as researchers we cannot stand ‘outside’ that interpretation. What we create from the data is based on conceptualizing what we have studied and observed in user behaviours: we must stand inside and ‘own’ our analyses, which will be affected by our biases, our preconceptions, and our emotional investment in the work we do.
This is not unprofessional, but an acknowledgement of the shared humanity of the researcher and the participant, and of the value of our work experience as practitioners that allows us to critically reflect on and develop theories of practice. To balance our subjectivity as researchers, a key part of the constructivist process has been to critically reflect on our preconceptions about discovery, information literacy, and users’ behaviour and expectations of doing their research using the tools we provide.
Working with qualitative data for user experience research
We are doing analysis of qualitative data collected during interviews to investigate Primo user experience. Ahead of interviewing proper, we held planning meetings with Library Services staff drawn from all sections of the library to work through starting points: primarily, what we wanted to get from the interviewing process, and what we wanted to know by the end of this round of investigation.
Extensive notes of these workshops were taken, and used by George to provide an initial focus for our interviewing. These are not quite research questions, but areas to focus on. These were:
- The purpose, construction, and use of search and resources
- Presentation of information in search: what matters to the user when selecting the right result?
Following this, George and Karine developed an interview script for use by facilitators. This included general questions about information seeking as well as some specific tasks to carry out on Primo. The interview is structured and, in grounded theory, would ideally be based around open questions, helping us as researchers unpick meaning and move towards answering ‘why’ questions in our analysis. We used a mixture of questions and specific tasks for users to complete. Our interview script is available: Primo UX Interview questions June 2015 (PDF).
In practice interviewers have different styles, and some facilitators stuck more closely to the script than others. This is not necessarily a problem: remember that as an observer you are free to suggest places where we need to run another iteration and gather more data.
Our research data comprises the audiovisual recordings and the facilitator’s notes. The notes help us understand the facilitator’s perspective on the interview and provide useful observations.
For #CityMash, we are providing a recording of the first part of an interview. The full interviews at Imperial were longer and made use of other methods drawn from web usability testing; the #CityMash data does not contain these. We gained informed consent for participant interview recordings and our written notes to be used for presentation and data analysis at #CityMash.
#CityMash technical requirements
- You will need at least a tablet, ideally a laptop, to watch and listen to the audiovisual recordings. A smartphone screen will likely not be big enough to see what’s going on. Headphones are ideal but not strictly necessary.
- Sharing a device with another delegate is possible. Coding together and sharing your observations and thoughts as you go in a negotiated process would provide an interesting alternative to doing this on your own.
- You will need a way of recording your coding and writing memos and any other notes. Any text editor, word processor, or pen and paper will work fine. (At Imperial College to facilitate collaborative coding, sharing, and to save time, we just write directly in our staff wiki.)
Beginning the process of open coding
Charmaz’s (2014, p. 116) guidance is that during initial or open coding, we ask:
- What is this data a study of?
- What do the data suggest? [What do they] pronounce? [What do they] leave unsaid?
- From whose point of view?
- What theoretical category does this specific [data] indicate?
Grounded theory textbooks often give examples of coding based on narrative sources such as diaries or written accounts, with example codes shown side-by-side. We are using audiovisual recordings instead, but the process is similar: listen to each statement the participant makes and watch their behaviour as you go through the video, coding piece by piece. Try to “sweep” through the data fairly quickly rather than spending too much time on each code. You will get better and faster at this as you go.
For codes themselves, try starting by writing down short analytic observations about the data as you experience it. Codes should “result from what strikes you in the data” (Charmaz, 2012) and should be “short, simple, active, and analytic” (Charmaz, 2014 p. 120). Remember you’re trying to be analytical about what you see, not just record what is happening.
Charmaz’s (2014 p. 120) ‘code for coding’ is:
- Remain open
- Stay close to the data
- Keep your codes simple and precise
- Construct short codes
- Preserve actions
- Compare data with data
- Move quickly through the data
Keep the facilitator’s notes alongside you and try to understand how these relate to what she saw and understood in the interview.
Don’t worry about being perfect the first time. Coding is iterative and you are allowed to go back, rework things, and make new connections between data. Initial codes are provisional, and working quickly both forces you to be spontaneous and gives you more time to go back and iterate over the data again.
It is very difficult, but try to put your favourite theoretical “lens” to one side during initial coding. It’s perfectly fine to bring in these ideas later, but for open coding you are trying to spark thoughts and bring out new ideas from the data rather than apply someone else’s grand theory.
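If you are coding on a laptop, the record-keeping itself can be as simple or as structured as you like. Purely as an illustration (no tooling is part of grounded theory, and every name, label, and filename below is hypothetical), a minimal Python sketch of recording timestamped open codes against a recording might look like:

```python
from dataclasses import dataclass, field

@dataclass
class OpenCode:
    timestamp: str  # position in the recording, e.g. "03:12"
    quote: str      # what the participant said or did
    code: str       # short, simple, active, analytic label

@dataclass
class CodingSession:
    recording: str
    codes: list[OpenCode] = field(default_factory=list)

    def add(self, timestamp: str, quote: str, code: str) -> None:
        # initial codes are provisional: append freely, revise later
        self.codes.append(OpenCode(timestamp, quote, code))

# hypothetical example entries for one interview recording
session = CodingSession("interview-01.mp4")
session.add("02:45", "I usually just Google it first", "bypassing library search")
session.add("04:10", "scrolls past facets without looking", "overlooking refinement options")
```

A text file, spreadsheet, or (as we do at Imperial) a wiki page works just as well; the point is only that each code stays anchored to the moment in the data it came from, so you can return to it during later iterations.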
Focused coding: refining data to begin to develop theory
Our #CityMash workshop is limited in time so we will do an initial round of open coding followed by small group discussion exploring focused coding.
Focused coding is the process of analyzing and assessing your first round of codes, and as a guide it should be a reasonably fast process. You are looking for connections and relationships between codes, and comparing them with the data and with each other. Looking at particular pairs of codes, which work better as overall analytical categories? Which give a better direction in developing an overall theory from the data?
Think about how you might create a theoretical framework later about discovery user experience to help inform changes to the system. Which codes better fit the data in allowing you to do this?
Charmaz (2014, pp. 140-151) poses the following questions to help make choices about focused coding:
- What do you find when you compare your initial codes with data?
- In which ways might your initial codes reveal patterns?
- Which of these codes best account for the data?
- Have you raised these codes to focused codes?
- What do your comparisons between codes indicate?
- Do your focused codes reveal gaps in the data?
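To make the comparison step concrete: one very crude starting point is simply to see which initial codes recur across the data. Frequency is only a weak proxy (the real work of focused coding is analytic comparison, not counting), but as an illustration, with hypothetical code labels:

```python
from collections import Counter

# hypothetical initial codes gathered across a first coding pass
initial_codes = [
    "bypassing library search",
    "overlooking refinement options",
    "bypassing library search",
    "judging relevance by title alone",
    "overlooking refinement options",
    "bypassing library search",
]

# tally how often each initial code appears
frequency = Counter(initial_codes)

# candidate focused codes: those that recur across the data,
# to be assessed analytically rather than accepted on count alone
focused_candidates = [code for code, n in frequency.most_common() if n > 1]
```

A recurring code is only a candidate: it still has to earn its place as a focused code by accounting for the data better than its neighbours when you compare codes with codes and codes with data.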
The results of George’s analysis of our focused coding were written up into a summary report of the things we needed to concentrate on in redeveloping our Primo interface. The systems team is currently working on Primo back-end configuration and front-end design to fulfil this, and these findings will be the subject of an upcoming blog post.
Our slides from our #CityMash talk are also available.
Charmaz, K. (2008) ‘Constructionism and the grounded theory method’, in Holstein, J.A. & Gubrium, J.F. (eds.), Handbook of constructionist research. New York, NY: Guilford Press, pp. 397-412.
Charmaz, K. (2012) ‘The power and potential of grounded theory’, Medical Sociology Online, 6(3), pp. 2-15. Available at: http://www.medicalsociologyonline.org/resources/Vol6Iss3/MSo-600x_The-Power-and-Potential-Grounded-Theory_Charmaz.pdf (Accessed: 11 June 2015).
Charmaz, K. (2014) Constructing grounded theory. 2nd edn. London: Sage.
Glaser, B.G. & Strauss, A.L. (1967) The discovery of grounded theory. Chicago, IL: de Gruyter.
Pickard, A.J. (2013) Research methods in information. 2nd edn. London: Facet.