# Unmoderated Testing

# Phase: 🛠️ Problem Solving
Focus: Test


Time commitment: Varies widely by test method, but generally less than moderated testing
Difficulty: Moderate
Materials needed: Goals for outcome, users to test, testing mechanism/platform
Who should participate: User experience designers, product/project owners
Best for: Gaining in-depth insights about a solution's effectiveness that cannot be gathered through passive/quantitative methods but may not require the rigor of moderated testing

# About this tool

Unmoderated testing isn't a tool in itself, but rather a collection of tools — any type of testing that doesn't rely on active moderation by an interviewer. A variety of these are already featured in this toolbox (and please open an issue or make a PR if your favorite has been excluded so far; this toolbox is a living document!). Common examples include surveys, card sorting, tree testing, first-click tests, and A/B tests.

It may be worth considering unmoderated testing as a "middle ground" of rigor between data gathering using passive analytics and insight gathering using moderated testing.

Unmoderated testing has a variety of pros:

  • It's inexpensive in terms of recruitment costs, time spent, and effort to capture results
  • It scales to a variety of study sizes with relative ease
  • It's more difficult to introduce bias in either your questions or the interpretation of your results

However, unmoderated testing also has its fair share of cons:

  • It's rarely appropriate for qualitative lines of questioning, since you can't ask personalized follow-up questions
  • If you rely on standard user testing platforms like usertesting.com or Userzoom, your base of participants skews toward individuals who take tests rapid-fire in order to get paid, which can heavily influence your results
  • You miss the ability to read between the lines with nonverbal or contextual cues
  • Unless your test is extremely well designed, unclear test questions or tasks can introduce noise without your ever detecting there's a problem (such problems surface much more quickly in moderated testing!)

For this reason, unmoderated testing is generally much better suited to user testing (validating a solution by asking people whether it addresses their pain points, or investigating whether it enables them to complete a task at hand without friction) than to usability testing (evaluating how useful and usable a solution is by watching and discussing how a user interacts with it). While you can approach both with unmoderated means, usability tests are less reliable in an unmoderated context simply because people often say one thing but actually mean another.

In any case, whether you're conducting a moderated or unmoderated test, you'll want to keep a few things in mind:

  • Make sure to expressly define the metrics you'll measure before writing a test plan or test questions. Is the key metric successful completion rate, time spent on a specific task, or a quantitative or qualitative measure of satisfaction?
  • Make sure you're considering both "extremes and mainstreams" (ordinary users and edge cases), both when you're writing your test plan and questions and when you're recruiting participants.
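To make the first point concrete, here is a minimal sketch of how the metrics mentioned above (completion rate and time on task) might be computed from exported results. The record format is hypothetical — real platforms export their own schemas — but the calculations are the standard ones:

```python
from statistics import median

# Hypothetical per-participant results from an unmoderated test:
# whether each participant completed the task, and how long they took.
results = [
    {"completed": True,  "seconds_on_task": 42},
    {"completed": True,  "seconds_on_task": 61},
    {"completed": False, "seconds_on_task": 120},
    {"completed": True,  "seconds_on_task": 55},
]

# Successful completion rate: fraction of participants who finished the task.
completion_rate = sum(r["completed"] for r in results) / len(results)

# Time on task: median is usually preferred over mean, since task
# durations tend to be skewed by a few very slow sessions.
median_time = median(r["seconds_on_task"] for r in results if r["completed"])

print(f"Completion rate: {completion_rate:.0%}")
print(f"Median time on task (successful runs): {median_time}s")
```

Defining these calculations up front, before writing the test plan, keeps the questions and tasks aligned with what you actually intend to measure.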