Data Is More than Numbers
Low-code platforms & other end-user programming or simplified-implementation approaches don't remove the need for task analysis & other deeper UX work, & thus don't guarantee great usability. But they can free up budget for better UX work.
Systematically gathered qualitative data is a dependable method of understanding what users need, why problems occur, & how to solve them.

“Not everything that can be counted counts, & not everything that counts can be counted.” ~ (Attributed to) Albert Einstein
Qualitative Data Isn’t Just Opinions
A fairly common objection to qualitative UX research (especially from statistically literate audiences) is that small sample sizes yield anecdotal evidence or a few people’s subjective assessments, rather than data proper. Many UXers who work in domains such as healthcare, natural science, or even just “data-driven” organizations may find it difficult to build buy-in to conduct small-n research in the first place; even when they can run the studies, it’s often hard to establish credibility for the recommendations that come out of the findings.
Common objections include:
- Comparisons between design options in studies with 5 or 10 users aren’t statistically significant (which is true)
- Small sample sizes mean that we cannot confidently generalize things like time on task or success rates from a small study (also true)
- Since we aren’t measuring things, our interpretations are inherently subjective (indeed a potential hazard, but one that proper methods & good researchers account for)
While some of these objections are true (& are why we don’t recommend reporting numbers from qualitative studies), it’s a big jump to assert that qualitative research is anecdotal or lacks rigor. Qualitative research is simply a different mode of investigation.
Qualitative Research Is Rigorous & Systematic
Rigor in quantitative research is generally seen as comprising a few major attributes:
- Validity ~ Is the thing we’re measuring a good representation of the thing we care about? Can our conclusions generalize beyond this experiment?
- Reliability ~ If we repeat the research, will we get similar results?
- Objectivity ~ Do we have a way of ensuring that our observations aren’t clouded by our biases?
These characteristics are relatively straightforward for quantitative research, but are not easy to establish for most studies with small sample sizes.
Social scientists Yvonna Lincoln & Egon Guba created a parallel set of characteristics for qualitative research that have become a standard way of assessing rigor:
- Credibility ~ Did we accurately describe what we observed?
- Transferability ~ Are our conclusions applicable in other contexts?
- Dependability ~ Are our findings consistent & repeatable?
- Confirmability ~ Did we avoid bias in our analysis?
We can satisfy those criteria by being systematic. Being systematic is what makes the data we collect data, rather than anecdotes that happen by chance. If the CEO hears from a friend that the company’s app looks outdated, that’s an anecdote ~ there was no systematic process behind that observation; it happened by chance & is only one person’s subjective opinion. If a UX researcher systematically recruits 5 participants & several of them struggle to understand the branded terms in the navigation, that is data.
Small Sample Sizes Are Fine, Depending on What You’re Looking At
But, you may be saying, what about those small sample sizes? Don’t they have an inherent sensitivity to outliers? Maybe a problem you observe is real but rare, & you might overstate its importance because of the small sample.
Once again, we can point back to the robust theoretical framework we have in UX, full of evidence-based principles about how users sense, think, behave, & interact with technology. If we observe even one person having a problem that is an exemplar of a known principle, we can be reasonably confident that it is a real problem. Of course, we still won’t be able to say precisely how many people will encounter it.
If the number of people affected by a problem is a real factor that we need to consider (e.g., if the problem will be expensive to fix & will take a lot of resources), then yes, we may need to do some form of quantitative experiment to figure that out. On the other hand, it is often cheaper (& more sensible) to simply fix the design problem without quantifying just how bad it is, if we’ve identified it early in the design process.
That is one of the main reasons we have consistently recommended small-sample studies, done early (& repeated across several iterations of a design): they are a relatively inexpensive way to find & address major usability issues that we would otherwise learn about from angry customers after shipping the product without testing. It would be a waste of time & resources to confirm a major design flaw with many participants, especially if we’re working on a fast-moving Agile team.
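As a back-of-the-envelope illustration (not from the article), here is a minimal sketch of why a handful of participants can still surface real problems. Assuming each participant independently has some probability p of hitting a given issue, the chance that a study with n participants observes the issue at least once is 1 - (1 - p)^n, which climbs quickly even for modest n. The probabilities below are illustrative values, not empirical frequencies.

```python
# Back-of-the-envelope sketch (not from the article): probability that a
# usability study with n participants observes a given problem at least once,
# assuming each participant independently hits the problem with probability p.
def detection_probability(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    for n in (5, 10):
        for p in (0.1, 0.3, 0.5):
            print(f"n={n:2d}, p={p:.2f} -> chance of seeing the problem at least once: "
                  f"{detection_probability(p, n):.0%}")
```

Even under this simplistic independence assumption, a small study is usually enough to surface problems that affect a meaningful share of users ~ which is the point of testing early, not of measuring precisely.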
Empathy & Humanity Aren’t Easily Counted, but They Count
Last, but definitely not least, qualitative research allows us to build a real, empathetic understanding of users as human beings. When we view human interactions with technology primarily through the lens of metrics such as engagement, bounce rate, or time on task, we aren’t very concerned with users’ well-being. (It might be in the back of our minds, but it certainly isn’t a primary consideration.) The tech industry is just beginning to reckon with the ethics of what we do & to realize that how we design our products has a real impact on the lives of many, many human beings.
Moderated qualitative research requires that we engage with other humans (& even unmoderated studies still involve observing people). We typically need to build some form of rapport to get participants comfortable with expressing their inner thought process. We often discover that they experience the world differently than we do ~ in ways both small & subtle, & huge & overt. These studies provide the opportunity to empathize with them.
I don’t want to overstate the power of qualitative research here. It will not automatically generate empathy for users ~ I’ve certainly witnessed teams laughing while watching users struggle. Doing qualitative research will not fix ethical problems baked into a business model. Qualitative research certainly will not replace the critical need for just & inclusive hiring practices for your team, to ensure that decisions are made by people with a variety of backgrounds & lived experiences.
On the other hand, I also don’t want to undersell the value of the empathy built through this sort of research ~ for example, simply through noticing how frustrated one user gets & hearing them casually question whether they are stupid because they couldn’t figure out a confusing design. That (unfortunately commonplace) reaction tells me that the problem is real & that fixing it needs to be a priority, even if I don’t have a huge sample size.
Qualitative research is rigorous & systematic, but it has different goals than quantitative measurement. It illuminates a problem space with data about human experience ~ expectations, mental models, pain points, confusions, needs, goals, & preferences. Sample sizes are typically smaller than for quantitative experiments, because the goal isn’t to suggest that our sample of participants represents the whole population proportionally; instead, we’re looking to find problems, identify needs, & improve designs.
UX research is a mixed-methods discipline because these two approaches are complementary: measuring how much & understanding why can both help us build better products, which is the main goal of any UX research.
“Good writing does not succeed or fail on the strength of its ability to persuade. It succeeds or fails on the strength of its ability to engage you, to make you think, to give you a glimpse into someone else's head ~ even if in the end you conclude that someone else’s head is not a place you’d really like to be. I’ve called these pieces adventures, because that’s what they are intended to be. Enjoy yourself.” ~ Curated Excerpt From: Malcolm Gladwell. “What the Dog Saw: And Other Adventures.” Apple Books.
Curated via Nielsen Norman Group. Thanks for reading, cheers! (with a glass of wine & a book, of course)
2018 Vaudoisey-Creusefond Auxey-Duresses
Producer: Domaine Vaudoisey-Creusefond, Côte de Beaune, Burgundy, France
"The nose of this wine evokes raspberry & red cherry alongside a distant cloud of smokiness. The palate is slender, but carries that red-fruited aromatic purity. Fine, almost subtle tannins weave a slight web while freshness reigns. It's a light but very evocative wine with a lovely echo of fullness, freshness & red fruit. The wine will also work slightly chilled." ~ 90 Points ~ Wine Enthusiast
What the Dog Saw: And Other Adventures
In this brilliant and provocative book, covering everything from criminology to ketchup, job interviews to dog training, Malcolm Gladwell shows how the most ordinary subjects can illuminate the most extraordinary things about us and our world.
Looking under the surface of the seemingly mundane, he explores the underdogs, the overlooked, the curious, the miraculous and the disastrous, and reveals how everyone and everything contains an incredible story.
What the Dog Saw is Gladwell at his very best ~ asking questions and finding surprising answers.