Looking out for biases in usability testing

Reflections on researcher and participant blind spots for more reliable findings


While talking to recruiters, I've noticed that more and more product teams are looking for UX writers who can help prepare test scripts for usability testing.

Since this requirement might become more common in the future, I thought it would be helpful to summarize some of the biases we need to look out for when conducting usability tests.


Researcher bias

Let's focus on researcher bias and see possible ways to avoid it.

Confirmation bias: the researcher uses answers and insights from the test to confirm a predetermined hypothesis.

Culture bias: the researcher interprets test results based on their background, expectations, values and preferences.

Wording bias: the researcher frames questions in a way that prompts participants to give a specific response. For example, leading questions could be inadvertently used to guide users towards the desired answer.

Here are some tips to prevent these biases from influencing the test results:

  • Know what biases exist. Note down your assumptions regarding the test and its results.
  • Ask neutral, open-ended questions. Be clear and use simple words. Pay attention to terms and structures that might suggest the answer. For example, ask "What was the task like?" instead of "How DIFFICULT was the task?"
  • Involve colleagues from varied backgrounds who are familiar with different research methods. They can help you spot inconsistencies and issues you might have missed.
  • As an additional data point, conduct competitor audits and find out what users like and dislike about similar brands.

Participant bias

Users become more self-aware when they know they're being watched, and might adjust their behavior accordingly (Hawthorne effect).

"It feels like these researchers are observing and analyzing my every move. I'll need to do my best for the test to succeed."

Users know that if the researcher asks them to complete a certain action, it must in fact be possible to complete. This might affect the way they approach the task at hand.

"Since you've asked me to do it… it means it can be done."

When users perceive that a question matters to the researcher, they'll try to offer an opinion anyway, even if they don't know much about the topic.

"Since you've asked me about it… it must be important."

Instead of giving a genuine answer, users will say what they believe is the 'right' answer (social desirability bias).

"I'll tell you what I think you want to hear."

Some ways to avoid participant bias include:

  • Clarifying that there are no right or wrong answers. What the researcher cares about is understanding what users actually think and feel.
  • Never showing frustration or disapproval toward users and their answers. The researcher wants to be friendly and create a safe environment where participants feel they can be truthful. There will be no repercussions if they share negative feedback on our product.

I think it's also important to understand where user nervousness may come from.

  • Is it because they're being observed?
  • Because they're unfamiliar with the product?
  • Because they're uncomfortable providing negative feedback?

With these aspects in mind, we need to try to address questions and objections upfront while creating a safe space for users to be honest: no judgment, no negative repercussions (regardless of the feedback) and no disappointed reactions to the participants' answers. We won't always get a chance to talk to every participant directly to reassure them, but we can anticipate some of the hiccups by being aware of our own biases as researchers.

What I've found is that, once I've narrowed down my participant list to the most relevant candidates, I'm usually left with people who are passionate about the product and actively use it. They take the test very seriously and usually point out interesting aspects I hadn't thought about in the first place.

But these same users may hope to be involved in future studies or to receive some kind of benefit from the brand, and they don't want to hurt their chances by sharing negative feedback.

Setting the tone upfront really helps when it comes to minimizing these biases.



I'm Elisa, an Italian content designer and translator at heart who believes good design is service. This is where I document my life in UX and writing.


Let’s talk words

Get in touch on LinkedIn to talk about all things UX writing, content design and localization.
