Unflattening practice

[Image: 2D vector illustration of an origami research participant's upper body, much like our view of research participants in moderated video calls]

The research process is in many ways at odds with the demands of industry in a capitalist society. A glance at current UX research practice shows a wide array of compromises made to deliver rapid ‘useful enough’ insights.

In the name of efficiency, many UX researchers have accepted – or even championed – the flattening of our research practice. In just the last few years, several technological developments have shifted what we consider to be ‘standard practice’, impacting the fidelity of our methods and data.

A couple of examples:

From in-person research to remote research

While remote research sessions have been around for years, COVID fast-tracked them becoming the norm. The efficiency gains are significant: global reach at the press of a button, tools that enable reliable and easy recording, a reduced need for travel and high-tech labs. The costs are significant too: a loss of contextual information, limited non-verbal cues, lower levels of rapport. Moderated research, flattened.

From manual transcription to auto-generated transcription

Previously, a large chunk of analysis time was taken up by the manual transcription of recorded research sessions. The kind of work now frequently described as tedious. It was. And also invaluable for deeper reflection and analysis. Whether handwritten or outsourced to a professional transcription service, the resulting transcript was richer than the auto-generated transcripts provided by current video calling and research analysis tools, which are devoid of humanness: pauses, hesitation, emphasis, emotion, and non-verbal communication have all been lost. Qualitative data, flattened.

From fully human to LLM-supported analysis and synthesis

More recently, we are seeing a shift towards LLM-supported analysis and synthesis. Various dedicated qualitative research tools offer automated summaries, clustering, tagging, insights, and reporting. General purpose LLMs offer all of that and more, if instructed well.

“Practice is tautological: I learned to write in order to write,
and in order to write better, I have to keep writing,
which is the only way to get better at writing.”

Kate Wagner in The Own Wood Woodshed

Many of the analysis and synthesis features emphasise time-saving and efficiency. While this can be great, we should be conscious that such a time reduction also means a reduction in practice. Letting data mentally simmer, compost, and distill is practice: practice in listening, noticing, connecting, thinking, storytelling. If we do not practise deep thinking as much as we used to, what do we risk? Skills, flattened?

(don’t get me started on how mainstream research tools enable methodological flattening by leaving out essential functionality such as randomisation and hierarchical tagging taxonomies)

For each of the above points you can (rightly) argue that this is not how it has to be: you can still do in-person research, you can still replay sessions and create richer transcripts, you can still manually analyse and synthesise. Is that how it works in practice, though? Or have the technological changes also shifted organisational expectations? In reality, have travel budgets shrunk, timelines compressed, and layoffs hit?

Unflattening

I am – as of now – sceptical about several aspects of generative AI, including its environmental impact and humans’ ability to distinguish comprehensible word sequences from sense. However, it’s too easy to be dismissive. It is plausible that we are experiencing, or will be experiencing, paradigm-shifting developments.

Assuming we are living through a paradigm-shift, how come our first response is to use this technology to flatten our practice further? Auto-generated screeners, auto-generated discussion guides, auto-generated themes, auto-generated summaries, auto-generated dashboards, auto-generated reports, auto-generated slides. Synthetic ‘users’.

“save time”, “saving time”, “reduction in time spent”, “decreasing the time to insights”, “save us so much time”, “time savings”, “create time”, “do more with less time”, “find answers quicker”, “work quickly”, “faster”, “do everything a little faster”, “move faster”, “moving so much faster”

A sample of quotes from speakers talking about AI’s time-saving benefits at the AI and UXR day during the Learners Research Week

Efficiency as the be-all and end-all. It feels unimaginative: a desire for minimum viable effort, not maximum viable insight. Can we not embrace these developments to reverse the flattening? Better methods, richer data, stronger collaborations, sharper thinking?

Imagine the possibilities if we focused on support for the things that are particularly hard to do. The things that will make us better practitioners. The doors that may open if we could employ technology to make knowledge from adjacent disciplines more accessible. The translation work AI could do to make the exchange of insights between academia and industry easier. Think of the depth next-generation video diary studies could have if we successfully combined computer vision and personalised dynamic follow-ups. Think of the value interactive session practice and scenario planning could add to developing our moderation skills.

Where is the ambition to enrich our research practice?