
How to Write Surveys People Actually Take

The goal of all honest surveys is to get information that accurately depicts reality (e.g., to find out what your customers or employees really think). That’s the gold at the end of the survey.

Many factors influence accuracy, including how representative the sample is and what sorts of biases the respondents bring to the table. A big factor is the writing itself. Flaws that undermine writing elsewhere, like lapses in logic or ambiguous wording, undermine surveys as well.

Surveys need to be:

  1. Unambiguous
  2. Easy
  3. Empathetic
  4. Balanced
  5. Logical

1—Unambiguous

Part of the problem is that we all come from different perspectives and experiences, so meaning varies from person to person. Survey language has to cut across all demographics and life experiences.

We have to use simple words tied to universally recognized examples. We have to be neutral in our word choice—neither formal nor casual—and use simple phrasing and grammar rather than jargon or slang.

No: “How recurrently do you imbibe alcoholic intoxicants?”

No: “How much do you get your drink on?”

Yes: “How often do you drink alcohol?”

Any language that isn’t stripped and crystal clear can be ambiguous or just plain confusing—

  • Double negatives are instantly confusing. You can’t not go back and reread most double negatives.
  • Leading questions cause trouble because they push respondents in one direction or another. They skip steps: the first question below assumes the respondent likes at least one of the options. Respondents end up endorsing something without even realizing it.

No: “Which of these do you like best?”

Yes: 1) “Do you like any of these?” 2) “Which do you like best?”

  • Open-ended questions, even when they’re clearly worded, can lead to ambiguous responses, and they depend on researcher interpretation. Closed-ended questions narrow responses to a single focus.

2—Easy

The only thing we want respondents to focus on is their answers to our questions. Therefore, we need to make getting through the survey easy for them. Which means we have to do the hard work.

Like television commercials, surveys have only a limited time to grab someone’s attention. We need to make an impact quickly and then hold that attention throughout the survey.

For starters, surveys can’t be too long. A survey of 200 questions isn’t going to keep anyone’s attention. In fact, it won’t get anyone’s attention, not after they see how long it is.

Respondents want to feel like they’re moving quickly through the survey. Researchers want that, too, because it means respondents are more likely to actually finish and submit answers for the entire survey.

Also, the questions and answers themselves can’t be too long. They need to get to the point and stay there. Asking really long questions or multi-part questions requires work from the reader. It’s not a press conference, and the reader isn’t a seasoned politician practiced in answering multi-part questions. Short questions and short answers are key.

3—Empathetic

Do we have an extra half-hour in our day to take a survey for some company we bought something from once? We might, but probably not. Do we have five minutes for the same company? Probably.

Creating good surveys requires a little shoe-on-the-other-foot action. Sure, you know what information you want from respondents, but are you asking for it in a way that makes them want to respond? Would you respond?

And are you asking it where they’re likely to see it? Whether a researcher thinks a particular social media site is useful or not doesn’t matter—it’s whether potential respondents think it’s useful. Because if they think it’s useful and are there, researchers need to be there, too.

4—Balanced

It’s actually not that hard to skew a survey, intentionally or otherwise. Surveys that don’t have balance in their answer choices will automatically yield skewed results. It’s not that different from asking leading questions; it’s just done on the answer side rather than the question side.

No: “How often do you watch television? A) One hour per day, B) Two hours per day, C) Three hours per day…”

Yes: “How often do you watch television? A) Never, B) Less than one hour per day, C) One hour per day, D) Two hours per day…”

The first set of answers assumes that everyone watches TV for at least an hour a day. In fact, it becomes a self-fulfilling prophecy: respondents will likely choose “one hour per day” even if they watch less than that. The second set accounts for people who don’t have a television or don’t watch it every day. It’s broader and more comprehensive.
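If you happen to build surveys in code, the same balance check is easy to express. Here’s a minimal sketch in Python, using a made-up dictionary structure rather than any real survey platform’s API, that contrasts the skewed scale with the balanced one and checks whether a scale leaves room for people who don’t watch at all.

  # Hypothetical question structure (not any particular survey tool's API)
  # contrasting a skewed answer scale with a balanced one.

  skewed_tv_question = {
      "text": "How often do you watch television?",
      "options": ["One hour per day", "Two hours per day", "Three hours per day"],
  }

  balanced_tv_question = {
      "text": "How often do you watch television?",
      "options": [
          "Never",                       # room for people who don't watch at all
          "Less than one hour per day",  # room for light viewers
          "One hour per day",
          "Two hours per day",
          "Three or more hours per day", # open top bucket catches heavy viewers
      ],
  }

  def covers_non_viewers(question):
      """Sanity check: does the scale leave room for a zero or near-zero answer?"""
      return any(opt.lower().startswith(("never", "less than"))
                 for opt in question["options"])

  print(covers_non_viewers(skewed_tv_question))    # False: every option assumes viewing
  print(covers_non_viewers(balanced_tv_question))  # True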

Of course, I’d be remiss not to mention surveys designed specifically to get skewed results—usually to back up one side of an argument. In the examples above, the first question could very well serve to boost numbers on how much TV Americans watch.

5—Logical

Surveys have to proceed in a logical fashion from question to question, for clarity. And if a survey tells a story or feels like it’s going in a defined direction, it’s much more likely to capture and keep respondents’ attention.

For example, we wouldn’t want to ask someone how the food is in Madrid until we know whether they’ve been to Madrid. And for further clarification, we’d probably want to know if the respondent is a sophisticated foodie, a vegan or someone who eats fast food every day.

Think of it as general-moving-toward-specific. We need baseline information before we can get to the information we really want. In the example above, our end goal may be to discover which city American foodies prefer—Madrid or Paris. But before we can get to which city has better food, we need to filter the pool of respondents down to American foodies only.
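In software, this general-to-specific flow usually shows up as skip logic: a screening question decides which follow-ups a respondent ever sees. Here’s a minimal sketch in Python, again using a hypothetical in-memory structure rather than any real survey tool’s API, where the Madrid screener gates the food question.

  # Hypothetical in-memory survey with skip logic (not a real platform's API).
  survey = [
      {
          "id": "visited_madrid",
          "text": "Have you ever been to Madrid?",
          "options": ["Yes", "No"],
          "skip_if": None,  # everyone sees the screener
      },
      {
          "id": "madrid_food",
          "text": "How would you rate the food you had in Madrid?",
          "options": ["Poor", "Fair", "Good", "Excellent"],
          # Skip the specific question for anyone screened out by the general one.
          "skip_if": lambda answers: answers.get("visited_madrid") != "Yes",
      },
  ]

  def questions_for(answers):
      """Yield only the question text this respondent should actually see."""
      for question in survey:
          skip = question["skip_if"]
          if skip is None or not skip(answers):
              yield question["text"]

  # Someone who has never been to Madrid never sees the food question.
  print(list(questions_for({"visited_madrid": "No"})))
  # Someone who has been to Madrid sees both.
  print(list(questions_for({"visited_madrid": "Yes"})))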

And just like with leading questions, question order can influence responses. If we’re trying to find out which vice people would rather give up—coffee or alcohol—we probably don’t want to start with a series of questions about how good one or the other is. Whichever one we’ve got them thinking about is the one they’ll be less likely to give up.

Getting to the Gold

There’s a pot of gold (data) waiting for us on the other side of surveys, but we have to reach it first. We have to engage respondents and keep them engaged. We have to understand that we’re all different, and surveys have to cut through those differences.

Flaws undermine writing in surveys just as they do anywhere else, making it harder for readers to understand the message. In surveys, that misunderstanding turns into inaccurate data, which then turns into misguided strategic decisions.

To get rid of ambiguity and actually reach the gold in the pot, we have to strip survey language down to its essentials and build surveys the right way. We need to make them unambiguous, easy, empathetic, balanced and logical.