CHAPTER 9 | Surveys

An Excerpt from Just Enough Research 2024

A SURVEY IS A METHOD of collecting data from a predefined group of people using standardized questions. The questions can be asked in person, over the phone, on paper, or online. The proliferation of online survey platforms has made it possible for anyone to create a survey in minutes.

This is not a good thing.

Surveys are the most dangerous research tool — misunderstood and misused. They frequently blend qualitative and quantitative questions; at their worst, surveys combine the potential pitfalls of both.

A lot of important decisions are made based on surveys. When faced with a choice, or a group of disparate opinions, running a survey can feel like the most efficient way to find a direction or to settle arguments (and to shirk responsibility for the outcome). Which feature should we build next? We can’t decide ourselves, so let’s run a survey. What should we call our product? We can’t decide ourselves, so let’s run a survey.

If you ever think to yourself, “Well, a survey isn’t really the right way to make this critical decision, but the CEO really wants to run one. What’s the worst that can happen?”

Brexit.

EASY FEELS RIGHT

It is too easy to run a survey. Surveys are easy to create and easy to distribute, and the results are easy to tally. And our poor human brains are biased toward information that feels easy for us to process, regardless of reality. This ease makes survey results feel true and valid, no matter how false or misleading they are.

Surveys also shut out paths to genuine learning. Talking to real people and analyzing the results? That sounds hard. Blasting questions out to thousands of people to net a pile of quantifiable data without gross human contact? Easy!

It’s much harder to write a good survey than to conduct good qualitative user research — something like the difference between building an instrument for remote sensing and sticking your head out the window to see what the weather is like. Given a decently representative (and properly screened) research participant, you could sit down, shut up, turn on the recorder, and get useful data just by letting them talk. But if you write bad survey questions, you get bad data at scale with no chance of recovery. It doesn’t matter how many answers you get if they don’t provide a useful representation of reality.

A bad survey won’t tell you it’s bad. Bad code will have bugs. A bad interface design will fail a usability test. A bad user interview is as obvious as it is uncomfortable. But feedback from a bad survey can only come in the form of a secondary source of information contradicting your analysis of the survey results.

Most seductively, surveys yield responses that are easy to count, and counting things feels certain and objective and truthful. Even when you are counting lies. And once a statistic gets out — such as “75% of users surveyed said they love videos that autoplay on page load” — that simple “fact” will burrow into the brains of decision-makers and set up shop.

From time to time, designers write to me with questions about research. Usually these questions are more about politics than methodology. A while back this showed up in my inbox:

Direct interaction with users is prohibited by my organization, but I have been allowed to conduct a simple survey by email to identify usability issues.

Tears of sympathy and frustration streamed down my face. This is so typical, so counterproductive. The question was, of course, “What do I do about that?”

Too many organizations treat direct interaction with users and customers like a breach of protocol. I understand that there are sensitive situations, often involving personal data or early prototypes or existing customer relationships. But you can do perfectly valid user research or usability testing without ever interacting with current customers or current users, revealing company secrets, or breaching anyone's privacy.

A survey is only ever a survey. It should never be a fallback for when you can't do the right type of research, because designing a good survey is not easy. Surveys are the most difficult research method of all.

MATH FIRST

Managers shouldn’t trust a model they don’t understand.

—Thomas C. Redman, Data Driven: Profiting from Your Most Important Business Asset

Designers often find themselves up against the idea that survey data is better and more reliable than qualitative research just because the number of people it is possible to survey is so much larger than the number of people you can realistically observe or interview.

Taking small samples from large populations is a valid statistical technique for getting accurate information about the wider population. However, getting a truly representative sample requires great care. As the Pew Research Center puts it: “A survey sample is a model of the population of interest.” The more your sample differs from the population at large, the more sampling bias you are dealing with, and the less accurate the model.

So, unless you are very careful with how you sample, you can end up with a lot of bad, biased data that is totally meaningless and opaque.

If you survey enough representatives of a population, the results will probably be representative, all other things being equal. This doesn’t mean the answers to your questions will be true — simply that they will represent how that population as a whole would have answered those questions. (Maybe everyone lies about their spending habits or motivations in similar ways!)

And it’s possible to fiddle with the stats to justify your favorite definition of “enough” representatives. Bad researchers manipulate results to make it seem like the conclusions are more definitive (that is to say, statistically significant) than they are.
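
What counts as "enough" is not a matter of taste; for a simple random sample it falls out of a standard formula. As a rough illustration only (my sketch, not the book's), the margin of error on an estimated proportion shrinks with the square root of the sample size — assuming the textbook case of a truly random sample and honest answers, which are exactly the assumptions that sampling bias and lying respondents break:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a proportion estimated from a
    simple random sample of size n, at ~95% confidence (z = 1.96).
    Assumes a genuinely random sample and truthful answers."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000):
    print(f"n = {n}: +/- {margin_of_error(n):.1%}")
# n = 100: +/- 9.8%
# n = 400: +/- 4.9%
# n = 1000: +/- 3.1%
```

Quadrupling the sample only halves the margin of error, and no sample size fixes a biased sample or a badly worded question.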

Into the woods

I will now use a fantasy analogy to explain survey-sample math at a high level. A course in basic statistics is a good idea before you start surveying — or using any quantitative method — but at minimum you need to understand why most quantitative research has a higher proportion of ish than many people would like to believe.

Imagine you are studying the centaurs who live in the Foloi oak forest. You want to survey the centaurs in order to be more effective at selling them waist packs in the coming season. (How did you think centaurs carried their snacks?)

Your target population is all the centaurs in the forest. The set of centaurs you have the ability to contact is your sampling frame. (Ideally, the population of centaurs and your sampling frame are the same, but maybe your centaur mailing list is out of date.) The sample is the subset of individual centaurs whose specific data you actually collect. The goal is to be able to generalize to all the centaurs in the forest from the sample…
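
To make that vocabulary concrete, here is a toy simulation (my own illustration; the centaur numbers and preferences are invented). It shows how an out-of-date sampling frame biases the estimate even when the sample itself is drawn at random from the frame:

```python
import random

random.seed(42)

# Invented numbers: 10,000 forest centaurs; younger centaurs want
# waist packs far more often than older ones.
population = [{"age": random.randint(18, 90)} for _ in range(10_000)]
for centaur in population:
    centaur["wants_pack"] = random.random() < (0.8 if centaur["age"] < 40 else 0.2)

# The stale mailing list (sampling frame) only reaches older centaurs.
frame = [c for c in population if c["age"] >= 35]

# The sample: the centaurs whose answers you actually collect.
sample = random.sample(frame, 500)

true_rate = sum(c["wants_pack"] for c in population) / len(population)
estimate = sum(c["wants_pack"] for c in sample) / len(sample)
print(f"actual demand: {true_rate:.0%}, survey says: {estimate:.0%}")
# Prints something like: actual demand: 38%, survey says: 25%
```

The random draw from the frame is flawless; the frame itself is the problem, and no amount of additional responses will fix it.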

To keep reading — and start getting your surveys right — get your copy of Just Enough Research 2024. Only $24!