# Tree Testing

# Phase: 🛠️ Problem solving
Focus: Test

IN BRIEF

Time commitment: 30 minutes to 1 hour per user
Difficulty: Easy to moderate
Materials needed: Tree of hierarchical menu items, tasks for testers to execute, users, location (physical or virtual), interviewer/notetaker/notetaking tools (if moderated), testing mechanism/platform (if unmoderated)
Who should participate: User experience designers, information architects, product/project owners
Best for: Evaluating a product's navigational hierarchy

# About this tool

Tree testing is a quick and reasonably easy way to evaluate the information architecture of a product or service, particularly its navigational hierarchy. It's an excellent companion to card sorting: it tests whether the topical priorities you discovered in a card sort have been implemented in a way that's intuitive to the user. It's also a good sanity check before you invest in designing page layouts or even navigational menus, because it's quick and inexpensive (in terms of effort) to run. To execute a tree test ...

  1. Present your user with a tree of information that represents how your product's menu or navigational structure will be laid out. For thoroughness, consider testing several tree versions (varying labels and/or the positioning of items) on different user groups, so you can evaluate the effectiveness of labels and positions independently. Your trees could take the form of spreadsheets, paper prototypes, or screens in a testing tool like Treejack or UserZoom.
  2. Give the user a list of navigational tasks to carry out using the tree they've been given. Depending on your test goals, these tasks could evaluate aspects such as:
    • Whether users can find the most important item in the navigation (and whether their judgment about this agrees with yours)
    • Whether new categories or items fit logically into the rest of the existing tree
    • Whether users notice duplicate items that should be weeded out
    • Whether users mistake one item for another
    • Whether users point out miscategorized items that could plausibly belong in multiple categories
    • Whether different versions of a question focusing on different aspects of a task yield the same navigational result
  3. If you're working in a moderated format, follow up with interview questions on why users took the actions they did, and what they were thinking as they did so.
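To make the steps above concrete, here is a minimal sketch of how a navigation tree and one task's results might be represented and scored. All labels, paths, and metric definitions below are hypothetical illustrations, not part of the method's specification:

```python
# A navigation tree as nested dicts (labels are made-up examples).
tree = {
    "Home": {
        "Products": {"Laptops": {}, "Accessories": {}},
        "Support": {"Contact Us": {}, "FAQ": {}},
    }
}

def paths(node, prefix=()):
    """Yield every root-to-leaf path -- handy for a test-plan spreadsheet."""
    for label, children in node.items():
        current = prefix + (label,)
        if children:
            yield from paths(children, current)
        else:
            yield current

# Hypothetical recorded click paths for one task ("find the FAQ"):
correct = ("Home", "Support", "FAQ")
attempts = [
    ("Home", "Support", "FAQ"),                      # direct success
    ("Home", "Products", "Home", "Support", "FAQ"),  # backtracked, then succeeded
    ("Home", "Products", "Accessories"),             # failure
]

def score(correct, attempts):
    """Two common tree-testing metrics: success rate and directness.

    Success: the participant ended on the correct node.
    Directness: among successes, the share who never backtracked,
    i.e. whose click path equals the correct path exactly.
    """
    successes = [a for a in attempts if a[-1] == correct[-1]]
    direct = [a for a in successes if a == correct]
    success_rate = len(successes) / len(attempts)
    directness = len(direct) / len(successes) if successes else 0.0
    return success_rate, directness
```

Tools like Treejack report similar per-task metrics automatically; the definitions above are one plausible formulation, not any particular tool's exact methodology.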