A Practical Guide to Planning UX Research Effectively
- Robinson Marroquin

- Nov 14
- 10 min read

1. Overview:
Running a good usability study means following a clear process. A solid plan keeps the research focused and ensures the results are useful for design decisions.
Types of Usability Tests:
User Acceptance Testing (UAT)
UAT checks if the product does what it was meant to do. It focuses on whether the experience matches user and business needs.
Quality Assurance (QA)
QA testing looks for bugs and checks if everything works as expected. For example, if a button doesn’t lead to the right page, the UX team reports it to developers to fix before launch.
Accessibility Evaluation
Accessibility testing makes sure the product meets accessibility standards. It can be part of QA testing and helps ensure all users can access and use the product comfortably.
Research Plan Structure:
Research plans help teams understand users’ problems so they can design better solutions. A good plan keeps the study organized, clear, and aligned with business goals.
Introduction
Title: A short phrase describing the study focus
Author: Name, role, and contact information
Stakeholders: Key people involved and their roles
Date: Update each time the plan changes
Project background: Why this research is being done
Research goals: What problems the study aims to solve and how findings will guide design decisions
Research Questions
What do you want to learn?
Aim for around five questions, and no more than ten.
Key Performance Indicators (KPIs)
Decide how you’ll measure success.
Examples: time on task, navigation vs. search usage, error rates, drop-off rates, conversion rates, or System Usability Scale (SUS) scores.
Methodology
Explain how data will be collected and analyzed.
Include enough detail so other researchers could repeat the study or build on it.
Participants
Describe who will take part and why they were chosen.
If including specific groups (e.g., users with different abilities), explain their role in the study.
Script
Include the questions and tasks participants will go through during the study.
Writing the Introduction:
The introduction sets the tone for the research. It usually includes:
Project background to explain why the study is happening
Research goals to clarify what the team wants to learn
Research questions to focus the study
Together, these elements help:
Define the main research goals
Identify who’s affected by the design
Clarify deliverables
Ensure research data is accurate and actionable
Example:
Introduction:
Title: Creating CoffeeHouse Ordering App
Author: Ali, UX Researcher, ali@coffeehouse.design
Stakeholders: CoffeeHouse customers; Gael Esparza – CTO; Linda Yamamoto – VP of Design
Date: 12-14-2020
Project background: We’re building a CoffeeHouse app to make group coffee orders faster. Some customers order for multiple people, and doing it one by one takes too long.
Research goals: Find out if collaborative ordering actually saves time.
Research Questions:
How long does it take for 4–5 people to place a group order?
What steps do users follow when ordering as a group compared to ordering alone?
Why the Example Works:
The title, author, stakeholders, and date give clear context and keep everyone aligned.
The project background explains the “why” without unnecessary detail.
The research goals clearly define what the team wants to learn.
The research questions are actionable, specific, neutral, and linked to either qualitative or quantitative methods.
Before finalizing your plan, ask yourself:
Does it have a clear title, author, stakeholder list, and updated date?
Is the background short but informative?
Do the research goals explain what you want to learn and why?
Are the research questions actionable, specific, unbiased, and tied to measurable methods?
2. KPIs – Key Performance Indicators:
Imagine your manager asks, “How did your research study go?” Your answer should be backed up by measurable results. That’s where Key Performance Indicators (KPIs) come in. KPIs are metrics that show how well a product or prototype performs during usability testing.
Choosing the right KPIs helps you prove the impact of your design decisions and communicate findings clearly to your team and stakeholders.
Common UX KPIs:
1. Time on Task
This measures how long it takes a user to complete a task, like filling out a form or making a purchase.
How to measure: Start a timer when the user begins the task, and stop it when they finish.
Why it matters: Shorter times usually mean the design is easier to use.
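Averaging time on task across participants is simple arithmetic. Here is a minimal sketch, using made-up session timestamps purely for illustration:

```python
# Sketch: averaging time on task from recorded session timestamps.
# The session data below is hypothetical, for illustration only.
sessions = [
    {"start": 0.0, "end": 42.5},  # durations in seconds
    {"start": 0.0, "end": 38.1},
    {"start": 0.0, "end": 55.0},
]

durations = [s["end"] - s["start"] for s in sessions]
avg_time_on_task = sum(durations) / len(durations)
print(f"Average time on task: {avg_time_on_task:.1f} s")
```

Comparing this average before and after a design change is a straightforward way to show whether the change helped.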
2. Navigation vs. Search Usage
This shows how many people use navigation menus compared to using the search bar.
How to measure: Count clicks or taps on navigation elements vs. search queries.
Why it matters: It reveals how users prefer to find content, helping you balance navigation and search options.
3. User Error Rates
This identifies where users make mistakes, like clicking the wrong icon or forgetting to check a box.
How to measure: Track errors during usability sessions.
Why it matters: Errors highlight confusing areas in the design that need improvement.
4. Drop-Off Rates
This tracks how many users abandon a task before completing it.
How to measure: Count the number of participants who don’t reach the goal.
Why it matters: High drop-off rates can indicate frustration, confusion, or a complicated flow.
5. Conversion Rates
This measures the percentage of users who complete a desired action, like finishing a checkout flow.
How to measure: Divide the number of users who complete the action by the total number of participants.
Why it matters: Higher conversion rates usually mean your design is effective.
6. System Usability Scale (SUS)
SUS is a quick survey with 10 statements where users rate how easy a product is to use.
How to measure: Ask users to agree or disagree with statements like “I thought the app was easy to use.”
Why it matters: SUS gives you a clear usability score that can be compared over time.
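SUS has a standard scoring rule: odd-numbered (positively worded) items contribute their rating minus 1, even-numbered (negatively worded) items contribute 5 minus their rating, and the summed contributions are multiplied by 2.5 to give a 0–100 score. A small sketch with a hypothetical participant's ratings:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 ratings.

    Odd-numbered items are positively worded (contribution = rating - 1);
    even-numbered items are negatively worded (contribution = 5 - rating).
    The summed contributions are scaled by 2.5 to give a 0-100 score.
    """
    assert len(responses) == 10, "SUS uses exactly ten statements"
    total = 0
    for i, rating in enumerate(responses, start=1):
        total += (rating - 1) if i % 2 == 1 else (5 - rating)
    return total * 2.5

# Hypothetical ratings from one participant:
print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # prints 90.0
```

Scores above roughly 68 are commonly treated as above-average usability, which makes SUS useful for tracking a product over time.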
7. Net Promoter Score (NPS)
NPS measures how likely users are to recommend your product.
How to measure: Ask “How likely are you to recommend this product to a friend?” on a 0–10 scale.
Promoters: 9–10 (likely to recommend)
Passives: 7–8 (satisfied but not enthusiastic)
Detractors: 0–6 (may discourage others)
Subtract the percentage of detractors from promoters to get the NPS.
Why it matters: A positive NPS shows that users are satisfied with your product experience.
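The NPS calculation above is easy to express in code. A minimal sketch, using hypothetical ratings:

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical 0-10 ratings from ten participants:
print(nps([10, 9, 9, 8, 7, 7, 6, 10, 5, 9]))  # prints 30.0
```

Note that passives (7–8) count toward the total but toward neither group, so NPS can range from −100 to +100.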
Choosing the Right KPIs:
Not every KPI fits every project. Pick the ones that match your research goals and will give you the clearest insights.
Example:
Introduction
Title: Creating CoffeeHouse Ordering App
Author: Ali, UX Researcher, ali@coffeehouse.design
Stakeholders: CoffeeHouse customers; Gael Esparza – CTO; Linda Yamamoto – VP of Design
Date: 12-14-2020
Project background: We’re creating a CoffeeHouse app to make group orders faster and more efficient.
Research goals: Find out if collaborative ordering actually saves time.
Research questions:
How long does it take for 4–5 people to place a group order?
What steps do users follow when ordering as a group compared to ordering alone?
Selected KPIs
Time on Task
User Error Rates
Conversion Rates
These KPIs align perfectly with the goals of the study:
Time on task shows whether group ordering is faster than individual ordering.
User error rates highlight where users get stuck or confused.
Conversion rates reveal how many people actually finish the group checkout process.
By tracking these together, you get a full picture of how the experience performs. For instance, if time on task is high, error rates are high, and conversion rates are low, it’s a strong sign that the process is confusing and needs redesign.
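Tracking the three selected KPIs together can be as simple as summarizing per-session records. A sketch with invented session logs, for illustration only:

```python
# Sketch: summarizing the three selected KPIs from hypothetical session logs.
sessions = [
    {"seconds": 95,  "errors": 0, "completed": True},
    {"seconds": 140, "errors": 2, "completed": True},
    {"seconds": 210, "errors": 4, "completed": False},
    {"seconds": 120, "errors": 1, "completed": True},
    {"seconds": 180, "errors": 3, "completed": False},
]

n = len(sessions)
avg_time = sum(s["seconds"] for s in sessions) / n       # time on task
avg_errors = sum(s["errors"] for s in sessions) / n      # user error rate
conversion = 100 * sum(s["completed"] for s in sessions) / n  # conversion rate

print(f"Avg time on task: {avg_time:.0f} s")
print(f"Avg errors per session: {avg_errors:.1f}")
print(f"Conversion rate: {conversion:.0f}%")
```

Reading the three numbers side by side makes the pattern described above visible at a glance: high times plus high errors plus low conversion points to a confusing flow.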
Self-Check Questions for KPI Selection:
When reviewing your research plan, ask:
Do these KPIs clearly measure progress toward research goals?
Do they reveal meaningful insights into user behavior?
Can they be turned into actionable feedback for design improvements?
If you can answer “yes” to all three, your KPIs are well chosen. If not, revisit your goals and adjust the KPIs accordingly.
Key Takeaways:
KPIs help you turn observations into measurable insights. Choosing the right KPIs gives you a clear way to evaluate design performance, communicate results, and guide future improvements. The more accurately you track how users interact with your product, the more confident you can be in your design decisions.
3. Methodology:
The methodology explains how you’ll run your research, collect data, and analyze the results. A well-defined methodology ensures your study is structured, transparent, and easy to repeat.
Understanding Research Methods:
There are two main types of research methods:
Primary research: Data you collect yourself.
Examples: interviews, surveys, usability studies, competitive audits.
Secondary research: Information gathered from existing sources.
Examples: industry reports, analytics data, published studies.
You’ll also choose between qualitative and quantitative methods:
Qualitative research focuses on understanding people’s experiences and reasons behind their actions.
Quantitative research focuses on measurable data like numbers, percentages, or time.
For UX, qualitative research is especially useful during the early stages of a project to learn about users’ needs, challenges, and behaviors before creating or refining designs.
Usability Studies:
Usability studies are one of the most common primary, qualitative research methods. They involve testing your design with real users to see how well it works in practice. During a usability study, participants complete specific tasks while researchers observe their behavior, listen to their feedback, and take notes. The sessions are usually recorded so the team can review them later for deeper insights.
Example:
Introduction
Title: Creating CoffeeHouse Ordering App
Author: Ali, UX Researcher, ali@coffeehouse.design
Stakeholders: CoffeeHouse customers; Gael Esparza – CTO; Linda Yamamoto – VP of Design
Date: 12-14-2020
Project background: We’re creating a CoffeeHouse app to make group coffee orders faster and smoother.
Research goals: Understand if collaborative ordering actually saves time.
Research questions:
How long does it take for 4–5 people to make a collaborative group order?
What steps do users follow when ordering as a group compared to ordering alone?
KPIs: Time on task, User error rates, Conversion rates
Methodology
Type of study: Unmoderated usability study
Location: United States, remote (participants complete the study from home)
Date: March 1–5, 2021
Session length: 5–10 minutes, followed by a SUS questionnaire
Participants
People who place group coffee orders at least twice a month, either for work or social events
Mix of genders: 2 male, 2 female, 1 non-binary
Age range: 20–75
Includes 1 participant using assistive technologies (keyboard and screen reader)
Incentive: $10 CoffeeHouse gift card
Why This Methodology Works:
This example clearly explains:
Type of study: Unmoderated, meaning participants go through tasks on their own without a facilitator.
Location: Remote testing, which is common for usability studies.
Dates and length: Helps keep scheduling clear and manageable.
Participant details: Age, gender, behavior patterns, and accessibility considerations give context to the results.
Incentives: Encourages participation and shows respect for their time.
The more clearly you define these details, the easier it is for others to understand, replicate, or build on your study in the future.
Self-Check Questions for Methodology & Participants:
When reviewing your methodology and participants section, ask yourself:
Have I described the type of research, location, dates, and length clearly?
Is it clear whether the study is moderated or unmoderated?
Have I explained who the participants are, why they were chosen, and what makes them relevant?
Did I include incentives and accessibility considerations?
If you can answer “yes” to all of these, your methodology is well structured.
Key Takeaways:
A strong methodology sets the foundation for meaningful research. It gives your study structure, makes your findings more credible, and helps others replicate or trust your process. Clearly describing your participants ensures that the study captures a realistic and inclusive range of perspectives.
4. Script:
A script is the discussion guide for your usability study. It includes the tasks, interview questions, and follow-ups you’ll use with participants. A clear script keeps sessions consistent, helps reduce bias, and ensures you collect meaningful insights. Scripts are based on your research questions and KPIs, so every question and task should directly support your study goals.
Why Scripts Matter:
A well-written script helps you:
Understand what users are trying to do and how they think
Keep sessions structured and unbiased
Ask the same questions across participants for reliable data
Focus on observing behavior rather than improvising mid-session
Avoiding Common Biases:
When writing your script, it’s crucial to keep your questions neutral. Here are five common biases to watch for:
1. Confirmation Bias
What it is: Focusing on evidence that supports what you already believe.
How to avoid:
Test with 5–8 participants to get a variety of perspectives.
Pay attention to feedback that challenges your assumptions.
2. Leading Questions
What it is: Questions that push participants toward a specific answer.
Example:
❌ “Is the product easy to find under the blue tab?”
✅ “How did you find the product you wanted to buy?”
How to avoid:
Use open-ended questions.
Ask participants to think aloud as they complete tasks.
Avoid agreeing or reacting strongly to their answers.
3. Friendliness Bias
What it is: Participants agreeing with you to keep things pleasant.
How to avoid:
Emphasize honesty at the start.
Keep a neutral, curious tone throughout the session.
4. Social Desirability Bias
What it is: Participants giving answers they think are “socially acceptable” rather than honest.
How to avoid:
Run 1:1 interviews to make participants comfortable sharing real opinions.
Remind them their feedback is confidential.
5. Hawthorne Effect
What it is: People change their behavior when they know they’re being observed.
How to avoid:
Create a relaxed environment.
Make small talk before starting.
Reassure participants there are no “right” or “wrong” answers.
Example:
Introduction:
Title: Creating CoffeeHouse Ordering App
Author: Ali, UX Researcher, ali@coffeehouse.design
Stakeholders: CoffeeHouse customers; Gael Esparza – CTO; Linda Yamamoto – VP of Design
Date: 12-14-2020
Project background: We’re building a CoffeeHouse app to make group coffee orders faster.
Research goals: See if collaborative ordering saves users time.
Research questions:
How long does it take for 4–5 people to make a group order?
What steps do users follow when ordering as a group vs. individually?
KPIs: Time on task, User error rates, Conversion rates
Script:
Session Intro
Ask for consent to record audio and video.
Explain there are no right or wrong answers.
Encourage participants to ask questions.
Explain that their feedback will help improve the product.
Warm-Up Questions
Do you live near many coffee shops?
Do you have a favorite coffee shop?
How many times a week do you order coffee?
Do you usually order just for yourself or for a group?
Can you describe a typical day for you?
Tasks and Follow-Ups
Prompt 1:
Task: Open the CoffeeHouse app and customize a drink order for yourself.
Follow-up:
How easy or difficult was it to customize your drink?
What worked well? What was confusing?
Prompt 2:
Task: Imagine I asked you to “start a new group order.” What would you do?
Follow-up:
Try it now.
Did anything feel unclear?
Prompt 3:
Task: From the group order screen, add your custom drink and multiple other drinks, then proceed to checkout.
Follow-up:
How was the process of adding multiple drinks?
What was easy or challenging?
Prompt 4:
Task: Complete the checkout for the group order.
Follow-up:
How did the payment process feel?
How do you feel about the time it took?
Prompt 5:
Task: Share your overall thoughts on the CoffeeHouse app.
Follow-up:
What did you like?
What didn’t work well for you?
Why This Script Works:
Starts with an introduction to set expectations and build comfort.
Warm-up questions gather useful context and ease participants into the session.
Tasks are open-ended so participants can show their natural behavior.
Follow-up questions are neutral and designed to uncover reasons behind their actions, not lead them to specific answers.
The script is aligned with KPIs, making it easier to analyze results later.
Self-Check Questions for Scripts:
When reviewing your script, ask yourself:
Does it start with a clear, professional introduction?
Are all questions unbiased and consistent across sessions?
Do tasks and follow-ups encourage open discussion?
Does the script connect directly to your research goals and KPIs?
If you can answer “yes” to these, your script is ready to use.
Key Takeaways:
A well-structured script ensures your usability sessions are consistent and insightful. Neutral, open-ended questions help reveal real user behavior, while thoughtful tasks give you the data you need to make informed design decisions. With a good script, your study becomes focused, repeatable, and far more valuable.
Sources:
The content in this article is adapted and summarized from the Google UX Design Certificate (UX Research modules), offered through Coursera.