Interview compositions

Qualitative interviews are in essence a series of questions and answers, ping-ponged between the interviewer and the interviewee. Done well, they’re a thoughtful sequence of prompts, asked in a sensible order to elicit rich, informative responses.

The work that goes into preparing an interview script is key to conducting a good interview. But how do researchers design their interview scripts? Let’s explore the designs of a couple of HCI/UX interview scripts.

[Diagram: an interview script consists of topics, which in turn consist of questions]

An interview script, also known as a discussion guide, consists of a list of questions. These questions are usually grouped by topic, and ordered in a specific sequence. Together they show what the interviewer is planning to cover in the interview. The reality of how the interview actually unfolds may of course differ; the script only outlines the interviewer’s intentions.
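
To make that structure concrete, here’s a minimal sketch in Python of how a script could be modelled. The class names and fields are mine, invented purely for illustration:

```python
# A minimal model of the script → topic → question structure.
# All names here are hypothetical, not from any of the papers discussed.
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    open_ended: bool  # True for open questions, False for closed ones

@dataclass
class Topic:
    name: str
    questions: list[Question] = field(default_factory=list)

@dataclass
class Script:
    title: str
    topics: list[Topic] = field(default_factory=list)

    def all_questions(self) -> list[Question]:
        # Flatten topics into the planned question sequence.
        return [q for t in self.topics for q in t.questions]
```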

Unlike interview transcripts, which I explored previously, interview scripts are more often shared publicly – for example in the appendices of academic publications. They’re not as comprehensive as transcripts, but they should tell us something about how researchers plan their interviews.

Developing a notation

I’m thinking of interview scripts as compositions. But unlike musical compositions, interview scripts have – as far as I am aware – no established notation. Let’s attempt to create a notation so that we can visualise, compare, and contrast scripts.

To get a picture of a full interview script, we need to understand its questions. There are several ways to classify questions; let’s start with the very basics. At a high level, we can distinguish between two types of questions:

Open questions: these ask for a freeform answer – the interviewee can answer as briefly or extensively as they’d like, in their own words. Open questions encourage depth. Example: ‘what does a typical Monday look like for you?’ We’ll depict this type of question with a vertical line.

[Notation: a small black vertical line]

Closed questions: these ask for a specific answer – yes or no, a single word or number, an option from a defined list, etc. Closed questions can help collect survey-like data, and they can help the interviewer decide which questions to ask next. Example: ‘do you own a mobile phone?’ We’ll depict this type of question with a horizontal line.

[Notation: a small black horizontal line]

We can now use this notation to visualise interview scripts as paths. If an interview script contains only open questions, we’d expect a long vertical path. If an interview script contains only closed questions, we’d expect a long horizontal path. Here are the paths of six interview scripts:
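
As a rough sketch of the idea, here’s how such a path could be computed in Python: open questions step down the page, closed questions step right. The sample flags are hypothetical, not taken from any real script:

```python
# Turn a sequence of open/closed flags into path coordinates.
def path(open_flags: list[bool]) -> list[tuple[int, int]]:
    x = y = 0
    points = [(x, y)]
    for is_open in open_flags:
        if is_open:
            y -= 1  # open question: vertical step
        else:
            x += 1  # closed question: horizontal step
        points.append((x, y))
    return points

flags = [True, True, False, True, False, False]  # hypothetical script
print(path(flags))
# [(0, 0), (0, -1), (0, -2), (1, -2), (1, -3), (2, -3), (3, -3)]
print(f"{sum(flags) / len(flags):.0%} open")  # 50% open
```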

[Figure: paths of six interview scripts from published academic papers – Zeng et al. (2017), Lau et al. (2018), Sambasivan et al. (2018), Holstein et al. (2019), Tahaei et al. (2021), and Tang (2021) – ranging from 100% open questions (a long vertical line) to 36% open questions (a meandering, mostly horizontal path)]

The trajectories of these scripts are quite different. All of them begin with a few open questions, but beyond that their use of open and closed questions varies.

Let’s zoom in and refine our approach a bit. Rather than looking at the script as one continuous event, let’s view the different topics covered within a script as distinct paths. The script by Lau et al. (2018) consists of 82 interview questions across 11 topics. If we look at those topics separately, we get:

[Figure: 11 smaller, staircase-like topic paths]

Next, let’s add some more detail about the nature of the question. Open questions often contain one of the classic ‘question words’, like who, what, where, when, why, and how (5WH). Closed questions, on the other hand, often start with a verb: can, do, are, etc. Let’s annotate a few of Lau et al.’s topic paths with this information:
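
As a hedged sketch, this first-word heuristic is easy to automate. The word lists below are my own guesses, not a definitive classifier:

```python
# Annotate a question by its first word: 5WH words suggest an open
# question, leading verbs suggest a closed one. Word lists are assumptions.
WH_WORDS = {"who", "what", "where", "when", "why", "how"}
LEADING_VERBS = {"can", "could", "do", "does", "did", "are", "is",
                 "was", "were", "have", "has", "would", "will"}

def first_word_tag(question: str) -> str:
    first = question.lower().split()[0].strip("?,")
    if first in WH_WORDS:
        return f"open? ({first.upper()})"
    if first in LEADING_VERBS:
        return f"closed? ({first.upper()})"
    return "unclassified"

print(first_word_tag("What does a typical Monday look like for you?"))
# open? (WHAT)
print(first_word_tag("Do you own a mobile phone?"))
# closed? (DO)
```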

[Figure: topic paths annotated with 5WH words or verbs next to each question line; red symbols indicate the use of Spradley question types]

I’ve snuck in a third and final question categorisation too. Spradley (1979) outlines several different ethnographic question types; we’ll highlight five key ones. To simplify things, I’ve altered and grouped some of these question types (a rough tagging sketch follows the list).

Tour questions: ask for a tour, either in a physical space or in someone’s head. This may also involve asking the interviewee to draw the tour. Example: ‘what does a typical Monday look like for you?’ Spradley distinguishes between various types of ‘tour’ questions (grand tours, mini tours, typical tours, guided tours, etc.); we’ll lump all of them into the same category.

Example questions: ask for one or more examples of something. Example: ‘can you give me an example of an app you use regularly?’

Experience questions: ask for the interviewee’s recollection of an event or occurrence. These often elicit particularly memorable events. Example: ‘could you tell me about experiences you’ve had with receiving unwanted emails?’

Language questions: ask for definitions, words, or phrases. Example: ‘what does ‘offline’ mean to you?’

Contrast questions: ask how two or more things differ. Example: ‘how does commuting by train compare to commuting by bicycle?’
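
As promised, here’s a very rough Python sketch of keyword-based tagging for these five types. Real tagging needs human judgement; the phrase cues below are illustrative assumptions only:

```python
# Map each Spradley-derived type to a few (hypothetical) phrase cues.
TYPE_CUES = {
    "tour":       ["typical day", "typical monday", "walk me through"],
    "example":    ["an example", "examples of"],
    "experience": ["experiences you", "recall", "tell me about a time"],
    "language":   ["mean to you", "word for", "how would you describe"],
    "contrast":   ["compare", "difference between", "differ from"],
}

def spradley_tags(question: str) -> list[str]:
    q = question.lower()
    return [t for t, cues in TYPE_CUES.items() if any(c in q for c in cues)]

print(spradley_tags("Can you give me an example of an app you use regularly?"))
# ['example']
print(spradley_tags("How does commuting by train compare to commuting by bicycle?"))
# ['contrast']
```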

Exploration

Now we have a basic notation covering three question classifications: open/closedness, question words, and question types. Let’s use it to visualise a few interview scripts:

[Figure: three mostly vertical topic paths, followed by a fourth, mostly horizontal path, annotated with question words]

Sambasivan et al. (2018): 33 questions across 5 topics. The script contains a tour question at the beginning: “What’s a typical day like in your life?”. It primarily contains open questions, with the exception of the fourth topic, which narrows in on people’s specific behaviours, e.g. “Do you ever hide stuff from people around you on your phone or online?”. Note that most of these closed questions do have additional “tell me more” prompts, but for this analysis I’ve only included the first thing asked in each question (see limitations).

[Figure: five similar staircase-like paths, the first two containing several tour and experience questions]

Holstein et al. (2019): 38 questions across 6 topics. Especially at the beginning, the script relies heavily on tour questions (e.g. “how does your team typically make those decisions?”) and experience questions (e.g. “[…] recall times you or your team have discovered fairness issues in one of your products?”). Holstein et al. repeat versions of the same questions throughout the interview. For the sake of consistency I’ve been strict and treated questions such as “Are there any (other) areas where you see room for improvement?” as closed questions – technically they are, even though they likely evoke a freeform response from the interviewee. Tricky.

[Figure: four mostly vertical topic paths, followed by a fifth, horizontal-only path]

Tang (2021): 25 questions across 6 topics (defined by me; the original script does not divide the questions into topics). Tang employs a range of question types, including a tour question, experience questions, an example question, and a contrast question. The closed questions in the second half of the script are further examples of questions that are technically closed, but may in practice be interpreted as open questions by interviewees. For example: “Are there any differences in online meetings because more people are attending remotely [as a result of COVID]?”

Initial thoughts

So, is this useful? To be decided. The exercise of dissecting interview scripts and classifying questions has certainly made me reflect on interview practices. The notation highlights some potential patterns. For example, the scripts I looked at fairly consistently included tour questions at the start of the interview. Some interviewers seem to start with open questions and then go through a series of closed questions towards the end of the interview.

I’m fascinated by the number of closed questions in some of the scripts I came across, having always been taught that interviews should primarily focus on open questions. When I presented bits of these explorations at a Danish UX meeting, someone remarked that this may be the result of different research traditions coming together in HCI – I suspect they’re right.

The notation helps paint a high level picture of the design of the interview script, but it feels like there is much more to uncover. And there are definite limitations too.

Limitations

So many!

Most importantly, all three question classifications depend heavily on how the researchers phrase their interview questions. For example, Holstein et al. (2019) ask “Are there any (other) factors that make it difficult to make these kinds of changes?”. This is a closed question without a 5WH question word. Presumably, however, they would like the interviewee to expand on what these factors are. If so, the question could be rephrased as “What factors make it difficult to make these kinds of changes?”, which is an open question with a question word. And had they asked “Can you give examples of factors that make it difficult to make these kinds of changes?”, it would even count as an example question. The subtleties of language. Should the notation convey the (presumed) intention of the interviewer, or rigidly reflect the semantics of the script? I’ve opted for the latter, but it feels imperfect.

Next steps

I plan to continue experimenting with this notation, to see if it helps me understand others’ approaches to interviewing better. I purposely kept the notation simple, so that it’s possible to make quick sketches of the designs of other types of interviews as well: short interrogation-like political interviews, in-depth podcast interviews, etc.

[Photo: a notebook with pen-drawn sketches of interview paths, including from interviewers Emily Maitlis, Christiane Amanpour, and Anna Sale. Highlighted in red are statements, which are neither open nor closed questions – particularly prominent in the Maitlis interview]

Looking beyond research interviews has instantly revealed a missing element in the notation: many interviewers rely on statements (e.g. “You were born in London”) rather than questions, so I’ve added a diagonal line for those. And, as expected, lots of closed questions in the more holding-people-to-account-type interviews: “Will you admit to…”, “Do you agree to…”, “Can the public trust that…”.
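
As a sketch, the diagonal move slots straight into the earlier path function; the three string labels are my own encoding:

```python
# Extended path: open = down, closed = right, statement = diagonal.
def path(moves: list[str]) -> list[tuple[int, int]]:
    x = y = 0
    points = [(x, y)]
    for move in moves:
        if move == "open":
            y -= 1           # vertical step
        elif move == "closed":
            x += 1           # horizontal step
        elif move == "statement":
            x += 1
            y -= 1           # diagonal step for statements
        points.append((x, y))
    return points

print(path(["statement", "closed", "open"]))
# [(0, 0), (1, -1), (2, -1), (2, -2)]
```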

I imagine I’ll uncover more gaps and limitations as I continue to use this notation. It may develop and evolve into something else. I’m also still exploring other ways of classifying and visualising interview questions, so I’m very open to any suggestions and references you may have.

Work-in-progress.


Notes

  • For this exploration I selected a few interview scripts from highly cited papers at HCI conferences and journals. All have been cited between 80 and 800 times – a flawed indicator of ‘this must be considered good research’. Coincidentally, most of these scripts focus on the theme of privacy.
  • Many interview scripts contain multiple questions or prompts per numbered question. For example: a closed question followed by “Tell me more” once the interviewee has answered. For this analysis I’ve only considered the first question. A better approach may be to split these out and treat them as a closed question followed by an open question.

References

Holstein, K., Wortman Vaughan, J., Daumé III, H., Dudik, M., & Wallach, H. (2019). Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems

Lau, J., Zimmerman, B., & Schaub, F. (2018). Alexa, are you listening? Privacy perceptions, concerns and privacy-seeking behaviors with smart speakers. In Proceedings of the ACM on Human-Computer Interaction, 2 (CSCW)

Sambasivan, N., Checkley, G., Batool, A., Ahmed, N., Nemer, D., Gaytán-Lugo, L. S., Matthews, T., Consolvo, S., & Churchill, E. (2018). “Privacy is not for me, it’s for those rich women”: Performative Privacy Practices on Mobile Phones by Women in South Asia. In Fourteenth Symposium on Usable Privacy and Security (SOUPS 2018)

Spradley, J. P. (1979). The ethnographic interview. Waveland Press

Tahaei, M., Frik, A., & Vaniea, K. (2021). Privacy champions in software teams: Understanding their motivations, strategies, and challenges. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems

Tang, J. (2021). Understanding the telework experience of people with disabilities. In Proceedings of the ACM on Human-Computer Interaction, 5 (CSCW1)

Zeng, E., Mare, S., & Roesner, F. (2017). End user security and privacy concerns with smart homes. In Thirteenth Symposium on Usable Privacy and Security (SOUPS 2017)