
What Are Surveys Again?

We know what surveys are and what they’re for, right? Don’t we?

Actually, there seems to be a lot of confusion out there about surveys. What they are, how they’re done, how valid they are, et cetera. Given how much survey techniques are changing these days, I thought a review was in order.

Here’s a nuts-and-bolts overview that hopefully informs and/or reminds us what surveys are all about…

What Are They, Anyway?

There are a lot of different types of surveys, but they’re all just questionnaires, basically. They’re a means of gaining information that we can use for whatever purpose we want or need.

In politics alone, there are many commonly used types of surveys. Benchmark polls show where things stand at the beginning of the campaign—they’re the baseline for subsequent polls. Brushfire polls are little testers for checking in and trying out new ideas on the public. Tracking polls follow trends over the course of a campaign. Push polls masquerade as surveys but actually aim to influence the people they reach rather than measure their opinions.

There are also a number of ways we can ask these questions, and a number of ways we can reach our audience (more on those later). But in broad terms, surveys fall into four categories: cross-sectional, longitudinal, unscientific and scientific.

Cross-Sectional Surveys

These target a single group at a single point in time. They’re the one-hit wonders—the Vanilla Ices of the survey world. If you want to find out how a company handled customer service over the holidays, go for the cross-sectional survey.

Longitudinal Surveys

These target groups over time. They’re the old, grizzled veterans—the Rolling Stones of the survey world. They’re not a snapshot but a movie, looking at a group regularly over an extended period. Use one if, say, you wanted to see how a company handled customer service over the holidays every year from 2003 to 2013.

Longitudinal surveys include cohort, panel and trend studies. Cohort studies follow a specific group over the years, though not necessarily the exact same people each time. Panel studies survey the exact same people within the same group over time. Trend studies focus on changes within a broader population over time, sampling different people each round. So a trend study might survey each year’s new hires every year, a cohort study might repeatedly sample people who were new hires in 2010 and a panel study might survey the exact same 2010 hires over and over.
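For the code-minded, here’s a toy sketch in Python of how each design picks its respondents for a given survey wave. The roster, IDs and sample sizes are all made up for illustration.

```python
import random

random.seed(0)
# Fabricated roster: hire year -> employee IDs (toy data).
employees = {year: [f"{year}-{i:03d}" for i in range(50)]
             for year in range(2010, 2014)}

def trend_wave(year, k=5):
    # Trend study: a fresh sample of that year's new hires, every year.
    return random.sample(employees[year], k)

cohort_2010 = employees[2010]
def cohort_wave(k=5):
    # Cohort study: always draws from the people hired in 2010,
    # but each wave may reach different members of that cohort.
    return random.sample(cohort_2010, k)

panel_2010 = random.sample(employees[2010], 5)
def panel_wave():
    # Panel study: the exact same five people, every wave.
    return panel_2010
```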

Unscientific Surveys

These are not based on scientific methods, not to be too obvious. They’re the Pennys (Big Bang Theory) of the survey world. They include straw polls and honor-system polls—informal probes into current trends.

Straw polls might be the first surveys conducted in the United States. The name refers to gauging which way the wind blows by throwing stalks of straw into the air. They first arose in the newspaper industry, where local papers would conduct informal surveys.

These days they mostly appear when people gather in groups and want to find out what the group thinks about something. Political groups or unions use them at meetings, for example.

Honor-system polls are the least scientific and probably the least accurate type of survey. They tend to throw the rules out the window: they can run online for an indeterminate time, and they can allow or even encourage respondents to participate more than once. If you’ve ever voted for NBA All-Stars or the NFL Pro Bowl, you know what I’m talking about—the more votes a player gets, the more likely he is to get in, so there’s incentive to vote repeatedly for your favorite players and essentially rig the vote.

Scientific Surveys

These are based on scientific methods, again not to be too obvious. These are the Leonards (Big Bang Theory) of the survey world—they’re the most scientific, accurate types of surveys. They usually focus on identifying representative population samples and writing questions that don’t influence the answers. They take a little more explaining.

When done correctly, scientific polls employ a good deal of demographic analysis and thoughtful methodology to ensure proper representation in the survey. They also involve analysis of language, socio-economics and culture to prevent leading questions or bias.
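To make “proper representation” a little more concrete, here’s a minimal Python sketch of one common technique, proportional stratified sampling. The urban/rural split below is a made-up example, not any standard polling dataset.

```python
import random
from collections import defaultdict

def stratified_sample(people, stratum_of, n):
    """Sample n people so each stratum (age group, region, etc.)
    shows up in roughly its population proportion."""
    strata = defaultdict(list)
    for person in people:
        strata[stratum_of(person)].append(person)
    sample = []
    for members in strata.values():
        # Proportional allocation; rounding can leave the total
        # off by one or two.
        k = round(n * len(members) / len(people))
        sample.extend(random.sample(members, min(k, len(members))))
    return sample

# Hypothetical population: 70% urban, 30% rural.
population = [{"id": i, "area": "urban" if i < 700 else "rural"}
              for i in range(1000)]
survey = stratified_sample(population, lambda p: p["area"], 100)
# survey now holds roughly 70 urban and 30 rural respondents.
```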

Yet even without cultural or socio-economic concerns, researchers are faced with several potential pitfalls: wording of questions, coverage bias, nonresponse bias and response bias.

Wording of Questions

“The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way,” says the Pew Research Center. “Even small wording differences can substantially affect the answer people provide.”

Pew points to one of its own surveys as an example of how wording can affect respondents’ answers. In 2003, Pew asked participants whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” and 68% said they would. But they also asked participants whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule even if it meant that U.S. forces might suffer thousands of casualties,” and only 43% said they would.

For most people, war means casualties, but casualties might not be at the front of their minds when they take the survey. Mentioning casualties reminds respondents of that cost and tempers their enthusiasm for military action. So even something as basic as the amount of information in the question can influence answers.

Coverage Bias

This is unrepresentative sampling that results from flawed or simply incomplete methodology. A good recent example is the effect mobile phones have had on phone surveys.

Back in the day, you either reached someone on the phone at home or on the phone at work. But mobile phones have changed that completely. Many people these days don’t even have landlines at home, and many use their mobile phones for work. In fact, more and more people are simplifying their lives by making their mobile phone their one and only phone.

For researchers, mobile phones have thrown a huge monkey wrench into the works. With many more ways to reach someone via phone, getting a representative sample has become much more complicated. Do you call their home phone, if they have one? Their mobile? Work? And when do you call? During the day for the work phone, at night for the home and mobile?

In fact, avoiding coverage bias has gotten harder not only because of mobile phones but also because of email, online surveys, social media, et cetera. There are many more channels through which to reach participants these days (more on that later).
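To see why coverage matters, here’s a small Python simulation with invented numbers: if mobile-only households hold different views than landline households, a landline-only survey misses them entirely and the estimate drifts away from the truth.

```python
import random

random.seed(1)
# Toy population (made-up numbers): 60% are mobile-only and support a
# proposal at 70%; the landline-reachable 40% support it at 40%.
population = ([("mobile", random.random() < 0.70) for _ in range(6000)]
              + [("landline", random.random() < 0.40) for _ in range(4000)])

true_support = sum(supports for _, supports in population) / len(population)

# A landline-only phone survey never reaches the mobile-only majority.
reachable = [supports for phone, supports in population if phone == "landline"]
surveyed = random.sample(reachable, 1000)
estimate = sum(surveyed) / len(surveyed)

print(f"true support: {true_support:.1%}, landline-only estimate: {estimate:.1%}")
# Prints roughly 58% vs. 40%: the survey badly undercounts support.
```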

Nonresponse Bias

Quite simply, researchers don’t reach everyone, and not everyone they reach responds. It’s possible they could contact half of a population and, by chance, hear from only one side of an argument, skewing the survey.

(Generally speaking for scientific surveys, researchers talk about a 3% margin of error for a survey of 1,000 people and 1% margin for a survey of 10,000 people.)
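Those figures line up with the standard back-of-the-envelope formula for a simple random sample, z * sqrt(p(1 - p)/n). Here’s a quick Python sanity check at the worst case, p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n,
    assuming the worst-case proportion p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=1,000:  ±{margin_of_error(1_000):.1%}")   # ≈ ±3.1%
print(f"n=10,000: ±{margin_of_error(10_000):.1%}")  # ≈ ±1.0%
```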

Nonresponse bias is curious. For example, some people distrust surveys and won’t ever take one. They represent a population subset in and of themselves and exist in every demographic and location. Yet their views will never reach pollsters.

In 1992, political polls all but guaranteed a Labour Party victory over the incumbent Conservatives in Britain’s general election. Yet when the results came in, the Conservatives had kept control.

In what’s now referred to as the Shy Tory Factor, many Conservative voters were unwilling to admit in pre-election surveys that they planned to vote Conservative because their party was falling out of fashion. The political polls were therefore skewed through nonresponse bias.

Response Bias

This one comes from the respondents. Maybe they just want to get to the end of the survey and rush their answers. Maybe they don’t answer truthfully, so they can protect their true thoughts, avoid embarrassment, go along with the crowd or even push their own agenda. Whatever the reason, they don’t answer honestly, which skews the survey.

Response bias is also the Jekyll and Hyde of the biases. Honest researchers looking for informative data (Dr. Jekyll) are trying to get accurate pictures and are hindered by response bias. Meanwhile, dishonest researchers pushing an agenda (Mr. Hyde) use response bias to instead influence the results of the survey.

In political polling, results can actually influence the rest of the population. It’s in politicos’ interest to skew results for their candidates, because favorable numbers can generate genuine enthusiasm and sway elections. It’s cynical, for sure, but it’s a real thing.

And it doesn’t have to be obviously cynical; it can be subtle. Wording of questions and even answers can greatly affect responses. For example, people aren’t likely to jump to answers deliberately worded in extreme language, especially if they don’t have strong feelings either way on a topic.

And, of course, respondents can play the game as well. They may go straight for the extreme answer to boost their side of the issue. Even though they don’t feel as strongly as the answer suggests, they may overcompensate to help the team.

Origins

“Rulers have used census surveys of the population for thousands of years,” reports the Prairie Research Institute. “During the Middle Ages, respondents to such surveys typically consisted of authorities such as the clergy or nobles who reported the numbers and living conditions of their parishioners or serfs.”

These were the first attempts to gauge large population sentiment after human societies grew from small extended-family villages into cities and nation-states. Basically, we couldn’t just walk over to our neighbor’s or have a quick village meeting to see what people were thinking. We turned to representatives—a practice that continued until the end of the 19th century.

(I’ve actually written about this before. So that I don’t repeat myself, here’s The Origins of Surveys.)

How They’ve Changed (Briefly)

George Gallup essentially kickstarted the era of scientific surveys when he introduced carefully selected samplings from demographically diverse groups. He famously predicted Franklin D. Roosevelt’s 1936 victory over Alf Landon, the same election the Literary Digest’s massive but unrepresentative straw poll called wrong.

(There’s more about the Literary Digest, George Gallup and “Dewey Defeats Truman” in The Origins of Surveys.)

Lots of things have changed since then, however. Today’s surveys employ advanced techniques and a multitude of communication channels.

Methods have gotten increasingly scientific, employing advanced statistical analysis and modeling, plus lots of other things I won’t try to write about because statisticians already know them and the rest of us won’t understand them.

Perhaps even more significantly, the communication channels for surveys have changed dramatically. It used to be the phone and mail and that was it, but in the last 20 years we’ve added a bunch more.

Mail

It’s still around, obviously, but mail is expensive for large surveys, and a lot of it doesn’t find respondents because it’s mashed in with the rest of the junk mail that we usually throw away.

Phone

The telephone is also still around and can be a primary tool for researchers, especially because it provides quick access. However, as discussed above, mobile phones have fundamentally changed the phone survey.

Email

Email is cost-effective for large-scale surveys, although spam is an issue. When researchers first started using email, they must have been ecstatic with their shiny new toy. Since then, however, spam has corroded that toy, making it more difficult to reach respondents (spam folders) or to get them to trust an anonymous email and whatever links are in it. Still, email remains very common for surveys.

Social Media

The new kid on the block, social media is also the cool kid on the block. It’s the anti-cold-call—surveys go out to people already following the organization, so those people are more likely to trust and participate in the survey. Researchers are more likely to get responses, and those responses can happen in near-real time because social is so fast.

Multimodal Delivery

Because we’re all communicating on numerous channels these days, researchers have to go to all of those channels to find us. Multimodal is the latest trend in surveys, enabling researchers to send surveys out on multiple communication channels—phone (mobile or otherwise), text, email, social media.

Surveys, In a Nutshell

There’s obviously a lot more to surveys than what I cover here, but I just wanted to provide an overview so those new to surveys have a chance to acquaint themselves and those not so new to surveys can refresh their memories.

It’s a good time to do either, actually, because the survey world is changing rapidly. New communication technologies alone have dramatically changed the rules of the game. And there’s no telling what’s on the horizon.