# A/B test

# Phase: 🛠️ Problem Solving
Focus: Test

# In brief

- **Time commitment:** A few hours to a day to create the test; testing time depends on the number of participants and the testing mechanism
- **Difficulty:** Easy
- **Materials needed:** Two (or more) versions of an artifact to be tested, test plan, users, interviewer/notetaker/notetaking tools (if moderated), testing mechanism/platform (if unmoderated)
- **Who should participate:** User experience designers, visual designers, product/project owners
- **Best for:** A quick way to choose between two similar options when both are at a comparable, reasonably detailed level of fidelity

# About this tool

A/B testing compares two different versions of a design. You can use it at any stage of the design process once you have a prototype robust enough to get your point across, whether that's a wireframe, a low-fidelity prototype, or a slightly different version of an already-live web page.

In this type of test, you create two different prototypes and test each version on a different set of users. You might vary something like header or button copy, the layout of the same information, the arrangement of navigational elements, or the position of a key piece of text. You then design a test plan with tasks that measure the performance of your variable: if you're testing the placement of navigational elements, for example, ask users to interact with those elements in a way that measures how easily they can find them.
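One practical question is how to split users between the versions. A common approach (not prescribed by this tool, and the experiment name below is hypothetical) is deterministic hash-based bucketing, so each user always sees the same variant. A minimal Python sketch:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "nav-placement") -> str:
    """Bucket a user into variant A or B, stable across visits.

    Hashing the experiment name together with the user id splits
    traffic roughly 50/50 and keeps each user's assignment fixed.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_variant("user-1234"))  # always the same letter for this user
```

Keying the hash on both the experiment name and the user id means the same user can land in different buckets across different experiments, which avoids systematic overlap between tests.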

Because you're comparing apples to apples and test tasks are usually very simple, A/B testing is a good fit for unmoderated testing. However, if you want to understand why users behaved the way they did during the task, consider moderated testing, in which the initial task prompt is followed by an interview.
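When an unmoderated test yields a simple pass/fail metric per user (say, task completion), you can check whether the gap between versions is larger than chance would explain. Below is a minimal sketch using a standard two-proportion z-test; the completion counts are illustrative, not from any real study:

```python
import math

def two_proportion_z_test(successes_a: int, n_a: int,
                          successes_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for a difference between two success rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)   # pooled rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))           # two-sided p
    return z, p_value

# Illustrative numbers: 41/100 users completed the task with layout A,
# 57/100 with layout B.
z, p = two_proportion_z_test(41, 100, 57, 100)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ≈ -2.26, p ≈ 0.024
```

A p-value below your chosen threshold (0.05 is conventional) suggests the difference between variants is unlikely to be noise.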

# What if you need to test more than one thing?

Testing variations on more than one section or feature of a product (multivariate testing) is possible if the number of variations is small, but it's more complicated to set up than a regular A/B test, and because traffic is split across more combinations, the results are more prone to statistical noise: testing two versions each of two features already requires four cells, halving the sample available to each comparison. Unless time or budget strongly favors a single multivariate test, it's often easier, for both planning and interpretation, to split your test plan into several A/B tests.

However, if you want to test more than two versions of the same thing, such as three options for a button color, an "A/B/C" test is still relatively easy to run, provided you have a large enough sample size.
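As a rough guide to what "large enough" means, the standard two-proportion sample-size formula gives a per-variant estimate; with three variants, each arm still needs roughly this many users. A sketch with illustrative baseline and target rates:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_base: float, p_target: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Rough per-variant sample size to detect p_base -> p_target."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_beta = NormalDist().inv_cdf(power)            # desired power
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    n = (z_alpha + z_beta) ** 2 * variance / (p_base - p_target) ** 2
    return math.ceil(n)

# E.g. detecting a lift from a 10% to a 13% completion rate:
print(sample_size_per_variant(0.10, 0.13))  # ~1,772 users per variant
```

The takeaway is that each added variant multiplies the total sample you need, which is why A/B/C tests work best on high-traffic pages or with an ample participant pool.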