Survey Best Practices

How to write survey questions that get honest answers

Most survey questions are accidentally designed to get lies. Learn the psychology and techniques behind writing questions that elicit truthful, actionable feedback from your users.

Ask Users Team
Product & Research
October 26, 2025
12 min read

Here's an uncomfortable truth: most surveys are designed to get the answers you want to hear, not the answers you need to hear. It's rarely intentional, but the way you phrase a question can completely bias the response. The good news? Once you understand the psychology behind honest answers, you can write questions that cut through politeness and social pressure to get real, actionable insights.

Why most survey questions fail

Before we dive into what works, let's talk about what doesn't. Understanding why questions fail is just as important as knowing how to write good ones.

Bad questions lead to:

  • False positives: People saying they like something when they don't
  • Vague responses: Answers that sound good but tell you nothing
  • Survey abandonment: Confusing questions make people give up
  • Misleading data: You make decisions based on garbage input

The problem: Your users want to be helpful. They'll try to answer even badly worded questions. But the answers won't reflect reality—they'll reflect what they think you want to hear or what makes them look good.

The psychology of honest answers

People lie in surveys. Not maliciously, but unconsciously. Understanding these psychological biases is the first step to writing questions that overcome them.

Social desirability bias

People want to present themselves in the best possible light. They'll overreport good behaviors (exercising, reading) and underreport bad ones (junk food, procrastination).

The fix: Normalize potentially embarrassing answers and make responses anonymous.

Acquiescence bias

People tend to agree with statements regardless of content. They'll say "yes" more often than "no" when uncertain.

The fix: Mix positive and negative framing. Avoid yes/no questions when you need nuanced feedback.

Leading and loaded questions

Questions that suggest a "correct" answer bias responses before the person even thinks about their real opinion.

The fix: Use neutral language and balanced response options.

The anatomy of a great survey question

A well-crafted survey question has five key characteristics:

1. Specific and clear: No room for misinterpretation

2. Unbiased and neutral: Doesn't push toward any answer

3. Single-focused: Asks one thing at a time

4. Appropriate scale: Response options match the question

5. Contextually relevant: Makes sense in the moment it's asked

Good vs. bad: Real examples that show the difference

Let's get practical. Here are real-world examples of bad questions and how to fix them.

Example 1: The leading question

Bad: "How much do you love our new dashboard design?"

Why it fails: Assumes they love it. What if they hate it? This question makes negative responses feel wrong.

Good: "What's your opinion of the new dashboard design?"

Why it works: Neutral framing. No assumption about their feelings. Opens the door for honest feedback.

Even better: "How would you rate the usability of the new dashboard?" (Scale: Very difficult - Very easy)

Why this is best: Specific aspect (usability), neutral language, clear scale with labeled endpoints.

Example 2: The double-barreled disaster

Bad: "How satisfied are you with our customer service and product quality?"

Why it fails: What if someone loves your product but hates your support? They can't answer honestly because you're asking two questions disguised as one.

Good: Split into two questions:

  • "How satisfied are you with our customer service?" (Scale: Very dissatisfied - Very satisfied)
  • "How satisfied are you with our product quality?" (Scale: Very dissatisfied - Very satisfied)

Why it works: Each question addresses one specific aspect. You get actionable data about what needs improvement.

Example 3: The vague time-waster

Bad: "How was your experience?"

Why it fails: Too broad. Experience with what? When? People will give vague answers to vague questions.

Good: "How easy was it to find the feature you were looking for?"

Why it works: Specific aspect (findability), specific context (the current session), measurable (easy/difficult scale).

Example 4: The jargon trap

Bad: "How would you rate our API's RESTful architecture implementation?"

Why it fails: Unless your audience is exclusively developers, most people won't understand this. Even developers might interpret it differently.

Good: "How easy is it to integrate our service with your application?"

Why it works: Focuses on the outcome (ease of integration) rather than technical implementation details.

Example 5: The loaded question

Bad: "Would you prefer the cheap, basic plan or the premium, feature-rich plan?"

Why it fails: Loading one option with negative words (cheap, basic) and the other with positive (premium, feature-rich) obviously biases the response.

Good: "Which plan best fits your needs?" (Options: Starter plan, Professional plan, Enterprise plan)

Why it works: Neutral language, focus on fit rather than quality judgment.

Example 6: The answer scale mismatch

Bad: "How often do you use our product?" (Options: Yes / No)

Why it fails: The question asks about frequency, but the answers are binary. Total mismatch.

Good: "How often do you use our product?" (Options: Daily / Weekly / Monthly / Rarely / Never)

Why it works: Answer options actually match what the question asks. You get useful frequency data.

The 7 deadly sins of survey question writing

Avoid these common mistakes and your survey quality will skyrocket:

1. Leading questions

Questions that push respondents toward a particular answer.

Don't: "Don't you think our new feature is innovative?"
Do: "What's your opinion of the new feature?"

2. Double-barreled questions

Asking about two things in one question.

Don't: "Is our product fast and reliable?"
Do: Ask about speed and reliability separately

3. Loaded questions

Questions with emotionally charged words or assumptions.

Don't: "How frustrating is our complicated checkout process?"
Do: "How would you rate the checkout process?" (Very difficult - Very easy)

4. Vague questions

Questions that could mean different things to different people.

Don't: "Do you use our product regularly?"
Do: "How often do you use our product?" (with specific frequency options)

5. Jargon and technical language

Using terminology your audience might not understand.

Don't: "How's our omnichannel experience?"
Do: "How easy is it to switch between using our mobile app, website, and support chat?"

6. Assuming knowledge

Asking about things users might not know about.

Don't: "How do you rate our new algorithm?"
Do: "How relevant are the search results you see?" (Very irrelevant - Very relevant)

7. Inappropriate response scales

Scales that don't match the question or lack balance.

Don't: A "1-5" scale with only four labels: "Excellent, Good, Fair, Poor" (where's the fifth point?)
Do: Use balanced, labeled scales: "Excellent, Good, Neutral, Poor, Very Poor"

Choosing the right question type

Different questions serve different purposes. Here's when to use each type:


Rating scales (1-5 or 1-10)

Best for: Measuring satisfaction, agreement, or intensity of feeling

Example: "How satisfied are you with the onboarding process?" (1 = Very dissatisfied, 5 = Very satisfied)

Pro tip: Always label the endpoints. Don't make people guess what "1" means.

Multiple choice

Best for: When you know the possible answers and want quantifiable data

Example: "Which feature do you use most often?" (List of features)

Pro tip: Always include "Other" with a text field unless you're 100% certain you've covered all options.

Open-ended text

Best for: When you need rich, qualitative insights or don't know what answers to expect

Example: "What's the biggest challenge you face with our product?"

Pro tip: Use sparingly. People hate typing on surveys, especially on mobile. Make it optional when possible.

Yes/No questions

Best for: Clear binary choices or screening questions

Example: "Have you used the export feature?"

Pro tip: Often "Yes / No / Not sure" is better than pure binary. People appreciate the honesty option.

Likert scales (Strongly agree → Strongly disagree)

Best for: Measuring agreement with statements

Example: "The product solves my problem effectively." (Strongly disagree - Strongly agree)

Pro tip: Use 5 or 7 points. Odd numbers give people a neutral middle option.

Writing for different survey goals

The questions you ask depend on what you're trying to learn:

For NPS (Net Promoter Score) surveys

Stick to the standard: "How likely are you to recommend [product] to a friend or colleague?" (0-10 scale)

Then follow up with: "What's the main reason for your score?"

This combination gives you the metric AND the context to understand it.
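
If you compute the score yourself rather than relying on a survey tool, the standard NPS arithmetic is: respondents answering 9-10 are promoters, 0-6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors. A minimal Python sketch (the sample scores below are invented for illustration):

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    `scores` is a list of 0-10 answers to the standard NPS question.
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical responses: 40% promoters - 20% detractors -> NPS of 20.0
print(nps([10, 9, 9, 8, 8, 7, 7, 3, 9, 2]))
```

Note that 7s and 8s (passives) count toward the total but neither add to nor subtract from the score.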

For feature feedback

Focus on specific, actionable aspects:

  • "How easy was this feature to use?" (Scale)
  • "Did this feature solve your problem?" (Yes / Partially / No)
  • "What would make this feature more useful?" (Open text, optional)

For usability testing

Ask about task completion and friction points:

  • "Were you able to complete what you set out to do?" (Yes / No)
  • "How easy was it to [specific task]?" (Scale)
  • "What, if anything, was confusing or unclear?" (Open text)

For customer satisfaction (CSAT)

Keep it simple and specific to the interaction (a quick scoring sketch follows the list):

  • "How satisfied were you with [specific experience]?" (Scale: Very dissatisfied - Very satisfied)
  • "What could we have done better?" (Open text, optional)

The power of question order

The sequence of your questions matters more than you think:

Start broad, get specific

Begin with general impressions before diving into specifics. This prevents your specific questions from priming respondents and biasing their answers to the general ones.

Good order:

  1. "Overall, how satisfied are you with our product?"
  2. "How satisfied are you with the dashboard?"
  3. "How easy is it to find the export button?"

Save sensitive questions for later

Build trust with easy questions first. Demographics and personal questions work better at the end.

Group related questions together

Don't jump randomly between topics. It's confusing and increases abandonment.

Making questions work on mobile

Over 60% of surveys are completed on mobile devices. Your questions need to work on small screens:

  • Keep question text short: 10-15 words maximum
  • Use tappable buttons instead of dropdowns: Easier to tap than select
  • Minimize typing: Every character is a burden on mobile
  • Use sliders sparingly: They're fiddly on touch screens
  • Test on actual phones: What looks good on desktop might be terrible on mobile

Testing your questions before launch

Don't skip this step. Testing saves you from collecting useless data:

Cognitive walkthrough

Ask 5 people to read each question out loud and explain what they think it's asking. You'll be shocked by the misinterpretations.

Pilot test

Send your survey to a small group (20-30 people) before full launch. Look for:

  • Drop-off points (where people quit; quantified in the sketch after this list)
  • Questions everyone skips
  • Unexpected answer patterns
  • Confused responses in open text fields
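
If your survey tool exports per-question response counts, drop-off is straightforward to quantify: compare each question's answer count with the previous question's. A rough sketch (the question labels, counts, and 20% flag threshold are all hypothetical):

```python
# Hypothetical per-question answer counts from a 25-person pilot, in survey order.
counts = {
    "Q1 overall satisfaction": 25,
    "Q2 dashboard satisfaction": 24,
    "Q3 open-ended feedback": 14,  # big drop: investigate this question
    "Q4 use case": 13,
}

previous = None
for question, n in counts.items():
    if previous is not None:
        drop = 100 * (previous - n) / previous
        if drop > 20:  # flag anything losing more than a fifth of respondents
            print(f"{question}: lost {drop:.0f}% of respondents")
    previous = n
```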

The "would I answer this?" test

Read each question and honestly ask: Would I personally take time to answer this? If not, cut it or improve it.

Advanced techniques for honest responses

Once you've mastered the basics, try these strategies:

Normalize negative answers

Instead of: "Do you exercise regularly?"
Try: "Many people find it challenging to exercise regularly. How often do you exercise?"

The preamble makes it psychologically safe to admit the "bad" behavior.

Use "other people" framing

Instead of: "Do you find our product confusing?"
Try: "Some users find certain features unclear. Have you experienced any confusion?"

People are more honest when they don't feel singled out.

Ask about specific behaviors, not attitudes

Instead of: "Do you value security?"
Try: "How often do you change your passwords?"

Behavior is more honest than self-reported values.

Provide context for scales

Instead of: "Rate our customer service" (1-10)
Try: "Compared to other companies you interact with, how would you rate our customer service?" (Much worse - Much better)

Relative scales give you more meaningful data than absolute ones.

Common mistakes even experienced researchers make

Making everything required

The problem: People abandon surveys when forced to answer questions they can't or don't want to answer.

The fix: Only require questions you absolutely need. Make follow-ups optional.

Not testing different phrasings

The problem: You assume your first draft is perfect.

The fix: A/B test question wording. Small changes can dramatically affect response quality.
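
One lightweight way to compare two phrasings is a two-proportion z-test on whatever quality signal you track, such as the share of respondents who gave a usable, non-blank answer. A sketch using only the Python standard library; the counts are invented:

```python
import math

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    """Two-sided z-test for a difference between two response proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-tailed p-value
    return z, p_value

# Hypothetical: phrasing A got 62/150 usable answers, phrasing B got 89/150.
z, p = two_proportion_z(62, 150, 89, 150)
print(f"z = {z:.2f}, p = {p:.4f}")  # small p suggests the phrasings really differ
```

With samples this small, treat the result as directional evidence, not proof.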

Asking questions you can't act on

The problem: You're just curious, but it wastes respondent time.

The fix: Before including any question, ask: "What will I do with this data?" No clear answer? Cut it.

Using different scales within one survey

The problem: Switching between 1-5, 1-10, and agreement scales confuses respondents.

The fix: Pick one scale type and stick with it throughout the survey.

Your question-writing checklist

Before launching your survey, run each question through this checklist:

☑ Is the question clear and specific?

☑ Is the language neutral and unbiased?

☑ Does it ask only one thing at a time?

☑ Do the answer options match the question?

☑ Will my target audience understand all the words?

☑ Can I take action based on the possible answers?

☑ Would I personally answer this question honestly?

☑ Does it work well on mobile devices?

If you can't check all boxes, revise the question.

Real-world example: Transforming a terrible survey

Let's see how these principles work in practice. Here's a real (anonymized) survey we recently reviewed:

Original survey (what not to do)

  1. "How awesome is our product?" (Scale 1-10)
  2. "Do you like the features and design?" (Yes/No)
  3. "Would you recommend us to others because we're the best?" (Yes/No)
  4. "What's your age, income, and job title?"

Problems: Leading language, double-barreled questions, assumes product is "the best," asks for sensitive info upfront.

Revised survey (how to do it right)

  1. "How well does our product meet your needs?" (Scale: Not at all - Extremely well)
  2. "How satisfied are you with the feature set?" (Scale: Very dissatisfied - Very satisfied)
  3. "How satisfied are you with the design?" (Scale: Very dissatisfied - Very satisfied)
  4. "How likely are you to recommend our product to a colleague?" (0-10 NPS scale)
  5. "What's the main reason for your score?" (Open text)
  6. "What's your primary use case?" (Multiple choice with "Other" option)

Improvements: Neutral language, single-focused questions, standard NPS format, removed unnecessary demographic questions, asks about use case instead (more actionable).

Tools and resources

Writing great questions is easier with the right tools:

Ask Users provides a survey builder with built-in question templates that follow these best practices. You can also:

  • Preview exactly how questions appear on mobile
  • A/B test different question phrasings
  • See real-time response quality metrics
  • Get suggested improvements for common question problems

Ready to collect better feedback?

Ask Users helps you create surveys with questions designed to get honest, actionable responses. Start with proven templates or build your own from scratch—with real-time preview on every device.


Your action plan

Here's how to immediately improve your survey questions:

This week:

  • Review your current survey questions using the checklist above
  • Identify and fix any double-barreled questions
  • Remove or rephrase leading questions
  • Test your survey on mobile

Next week:

  • Pilot test with 20 people
  • Analyze completion rates and question-level drop-offs
  • Revise based on feedback
  • Launch improved version

Ongoing:

  • Monitor response quality (specific vs. vague answers)
  • A/B test different question phrasings
  • Keep a swipe file of great questions you encounter
  • Regularly review and update your surveys

Final thoughts

Great survey questions aren't about fancy wording or complex psychological tricks. They're about respecting your respondents enough to make questions clear, neutral, and worth their time to answer.

The difference between "How awesome is our product?" and "How well does our product meet your needs?" might seem subtle. But one gets you useless praise, while the other gets you actionable truth.

Your users are willing to give you honest feedback. Your job is to write questions that make honesty easy.

Frequently asked questions

How many questions should a survey have?

Quality over quantity. A 5-question survey with excellent questions beats a 50-question survey every time. Keep surveys under 5 minutes (roughly 8-10 questions). For quick pulse checks, 1-3 questions is ideal.

Should I use odd or even number rating scales?

Odd numbers (1-5, 1-7) give people a neutral middle option. Even numbers (1-4, 1-6) force them to lean positive or negative. Use odd if you want to allow "no strong opinion." Use even if you need to force a direction.

What's the ideal length for question text?

Aim for 10-15 words maximum. If you need more context, use a brief intro text before the question itself. Mobile users especially appreciate brevity.

Should I randomize answer options?

Yes, for lists of equal options (like features). No, for scales or ordered options (like frequency or agreement scales). Randomization prevents order bias in choice lists.
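
If you build surveys programmatically, the rule is simple to encode: shuffle unordered choice lists per respondent, but never touch ordered scales. A small sketch (the option lists are hypothetical):

```python
import random

def present_options(options: list[str], ordered: bool) -> list[str]:
    """Return options in display order: shuffled per respondent unless the
    list is an ordered scale (frequency, agreement, ratings)."""
    if ordered:
        return options        # scales keep their natural order
    shuffled = options[:]     # copy so the master list stays untouched
    random.shuffle(shuffled)
    return shuffled

features = ["Dashboards", "Exports", "Alerts", "API access"]   # equal options
frequency = ["Daily", "Weekly", "Monthly", "Rarely", "Never"]  # ordered scale

print(present_options(features, ordered=False))  # varies per respondent
print(present_options(frequency, ordered=True))  # always the same order
```

If the list includes an "Other" option, pin it to the end rather than shuffling it in.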

How do I handle "I don't know" or "Not applicable" options?

Always include them when relevant. Forcing people to choose when they genuinely don't know or the question doesn't apply creates bad data. Make these options visually distinct (like "N/A") so people don't choose them out of laziness.

Is it okay to use emojis in survey questions?

Emojis work great for rating scales (😞 to 😄) as long as they're clear and accessible. Avoid them in the question text itself—they can seem unprofessional depending on your audience. Always test with your specific demographic first.
