
Benchmark Creation Protocol

Benchmarking establishes where your product stands today so you can measure progress over time. This exercise sets initial metrics for usability, satisfaction, and other UX qualities, creating the baseline against which later improvements are measured.

Duration: 3 hours
Group Size: 12-20
Category: Research
Difficulty: Easy
Energy: Medium

Objectives

Participants will:

  • Select UX metrics relevant to their product.

  • Design standardized tasks that yield reliable data.

  • Build repeatable measurement processes.

  • Capture baseline measurements for future comparison.

  • Plan a cadence for regular benchmarking and analysis.

Outcomes


  • Baseline metrics for tracking progress.

  • Standard process for future comparisons.

  • Clear view of current UX performance.

  • Basis for measuring design ROI.

Step-by-Step Instructions


  1. Metric Selection (30 minutes): Define what success means for your product. Choose key metrics: task success rate, time on task, error rate, satisfaction. Use standardized instruments (SUS, NPS, CSAT) when you can; define custom measures only where no standard fits.

  2. Task Definition (30 minutes): Pick tasks that represent real use. Choose tasks unlikely to change in upcoming releases, so results stay comparable over time. Define clear success criteria for each task. Write realistic task scenarios.

  3. Protocol Development (45 minutes): Write a standardized test script. Define the data collection procedure. Set participant recruiting criteria. Decide between remote and in-person testing.

  4. Pilot Testing (45 minutes): Run the benchmark with 3-5 pilot participants. Refine tasks and procedures based on what you observe. Verify that every metric can be collected cleanly. Finalize the protocol.

  5. Documentation (30 minutes): Document the whole process. Create analysis templates. Plan how you'll compare results. Schedule future benchmarks.
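Step 1 mentions the System Usability Scale. Its published scoring rule (odd items contribute the answer minus 1, even items contribute 5 minus the answer, and the sum is scaled by 2.5) can be sketched in Python; the function name here is illustrative, not part of any tool this protocol prescribes:

```python
def sus_score(responses):
    """Score one participant's SUS questionnaire.

    `responses` is a list of ten answers on a 1-5 scale, in the
    standard question order (odd-numbered items are positively
    worded, even-numbered items negatively worded).
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten answers, each from 1 to 5")
    # Odd items (index 0, 2, ...) contribute (answer - 1); even items
    # contribute (5 - answer); the total is scaled by 2.5 to 0-100.
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# Example: a fairly positive participant
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 2]))  # 82.5
```

Scoring each participant individually (rather than averaging raw answers) keeps the per-person scores available for spread and outlier analysis later.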


Facilitator Tips

Consistency is key: use the exact same process each time. Document everything so others can repeat it. Standardized metrics make results comparable across runs. Decide how often to benchmark (quarterly, or once per release) and stick to that cadence.

Consider these metrics:

  • Task success rate.

  • Time on task.

  • Error rate.

  • System Usability Scale (SUS).

  • Net Promoter Score (NPS).

  • Custom satisfaction measures.
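The behavioral metrics in this list roll up from raw session records in a straightforward way, and NPS has a fixed published formula (percent of promoters scoring 9-10 minus percent of detractors scoring 0-6). A minimal sketch, assuming session records shaped like the example below (the field names are invented for illustration):

```python
from statistics import mean

# One record per participant-task attempt; field names are illustrative.
sessions = [
    {"task": "checkout", "success": True,  "seconds": 74,  "errors": 0},
    {"task": "checkout", "success": True,  "seconds": 102, "errors": 1},
    {"task": "checkout", "success": False, "seconds": 180, "errors": 3},
]

def task_metrics(records):
    """Aggregate task success rate, mean time on task, and mean errors."""
    return {
        "success_rate": mean(1 if r["success"] else 0 for r in records),
        "mean_seconds": mean(r["seconds"] for r in records),
        "mean_errors": mean(r["errors"] for r in records),
    }

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

print(task_metrics(sessions))
print(nps([10, 9, 8, 7, 6, 3]))
```

Keeping one record per attempt, rather than pre-aggregated totals, lets you recompute any metric later without rerunning the benchmark.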


In my experience, getting the task scenarios just right takes the most iterations. Don't be afraid to adjust them after the pilot.

Pre-Work

For Facilitators

  • Review participant profiles and expectations
  • Prepare all materials and supplies
  • Test technology and room setup

For Participants

  • Complete pre-session survey
  • Review background materials
  • Prepare examples or case studies


Materials Required


  • Stopwatch or timer software.

  • Screen recording software.

  • Survey tool for questionnaires.

  • Spreadsheet templates for analysis.

  • Task scenario scripts.

  • Standard questionnaires (SUS, etc.).

  • Data collection forms.

  • Baseline report template.
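The analysis spreadsheet above might, at its simplest, compute per-metric change against the baseline. A hedged sketch of that comparison (the metric names and values are invented for illustration):

```python
def compare_to_baseline(baseline, current):
    """Percent change per metric; positive means the number went up.

    Whether 'up' is good depends on the metric: time on task and
    error rate should fall, success rate and SUS should rise.
    """
    return {
        name: 100 * (current[name] - baseline[name]) / baseline[name]
        for name in baseline
        if name in current and baseline[name] != 0
    }

# Hypothetical baseline and follow-up benchmark results.
baseline = {"success_rate": 0.60, "mean_seconds": 120.0, "sus": 68.0}
current  = {"success_rate": 0.75, "mean_seconds": 95.0,  "sus": 74.5}
print(compare_to_baseline(baseline, current))
```

With small participant counts, treat these deltas as directional signals rather than precise effect sizes.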


