If it’s news to you, I’m sorry to have to be the one to tell you: Some people collect data with no real outcome in mind. They’re just going through the motions because someone else told them to, because it’s just “what we do” at this time of year, or — worse — because they care more about the appearance of outreach and engagement than actually doing anything with the results.
This kind of no-outcome outcome is what gives a bad name to surveys, polls, and even the occasional census. Why should anyone bother responding to questions if nobody will ever even read their answers?
Clearly, it doesn’t need to be this way — and it should never be this way. But what’s the problem? Do the people administering these studies simply not care? Are they too lazy to do anything with the responses? Or are the results just too difficult to interpret? Any of these explanations might apply, depending on the situation, but the last one is the most actionable.
After all, if you know what your report is telling you, wouldn’t you be more likely to follow up? Let’s hope so. If your answer is no, stop conducting research projects — and stop reading this post.
The basics: Start with an overview
Once you’ve finished collecting results in your research project, look at the big picture first. This is especially important when you have some guesses or hypotheses about what you’ll learn. If you’re already completely sure of the answers, why are you even asking the questions? We do research because we don’t already know everything, and because there’s a chance the outcome might be completely unexpected. To borrow a sports metaphor, it’s why we play the game.
The example data set we’ll use here is from a real research project conducted in May on a very timely topic. Over a few short days, we collected survey responses through Facebook from 146 adults in the US. The eight questions in this approximately two-minute survey asked about participants’ vaccination status or plans, their likelihood to wear a mask or socially distance over the following days, the influences on their decisions, and their comfort in engaging in certain activities without a mask. A few demographic details were also collected to enable further comparison and analysis. The first four questions were mandatory, and the questions about comfort and demographics were optional.
For an overview, my current go-to is the Omni Report. I’m embedding the raw report below in slideshow view to give you an idea of what’s covered here and how it looks without any real effort or decisions on my part. In full disclosure, I’ve hidden a few of the metadata items: language and method of participation were the same for all respondents, so they weren’t especially exciting.
Results by question type: See for yourself
As you review the big picture, some things will stand out. On a single-select Radio Button question, like the first one, a bar graph makes it easy to see at a glance that 51% of respondents are not planning to be vaccinated. Depending on your perspective, this may or may not be a surprising result, but it’s already a majority, with the potential to rise if the ‘Not vaccinated yet’ contingent ultimately decides against vaccination.
Answers to the Rating Scale questions — likelihood of wearing a mask, likelihood of maintaining social distance — aren’t as clear in simple bar graph form. Asking for answers on a 5-point scale enables you to identify an average weighted score out of five. When someone asks you “How’d we do?” on a rating scale, “Eight percent of people gave a three!” isn’t a very useful or meaningful answer. We’ll revisit rating questions later.
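A weighted score is just the average of the ratings, weighted by how many respondents chose each point on the scale. Here’s a minimal sketch in Python; the counts are invented for illustration and are not the actual survey data:

```python
# Hypothetical response counts for a 5-point Rating Scale question
# (illustrative numbers only, not taken from the actual survey).
counts = {1: 30, 2: 15, 3: 12, 4: 20, 5: 69}  # rating -> number of respondents

total = sum(counts.values())
weighted_score = sum(rating * n for rating, n in counts.items()) / total

print(f"Weighted score: {weighted_score:.2f} out of 5")  # Weighted score: 3.57 out of 5
```

A single number like 3.57 out of 5 answers “How’d we do?” far better than reciting the percentage behind any one rating.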
If you add up all of the percentages in the responses to “What influences these decisions most?” you’ll notice that they total more than 100%: the telltale sign of a multi-select Check Box question. Bar graphs are useful here, although the percentages may confuse some readers. Because the answers are presented in alphabetical order (other than ‘Other’), the graph may also look a little ragged. More on sorting to come.
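The percentages can exceed 100% because each option’s share is computed against the total number of respondents, and each respondent may select several options. A quick sketch with hypothetical picks (the answer options and counts below are invented, not the survey’s actual choices):

```python
# Hypothetical Check Box (multi-select) results: each respondent may select
# several influences, so each option's percentage is out of all respondents.
respondents = 146
picks = {  # option -> number of respondents who selected it (made up)
    "Family and friends": 80,
    "Health guidance": 55,
    "News coverage": 40,
    "Other": 12,
}
percentages = {opt: 100 * n / respondents for opt, n in picks.items()}

print(round(sum(percentages.values())))  # totals well over 100
```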
The question about comfort in performing different activities without a mask is a Rating Radio Grid question, meaning that multiple related questions are answered on the same rating scale in a condensed form. Again, there’s the question of the best way to view answers to a rating question: is it more useful to know that 68% of respondents said they were Very Comfortable spending time with friends and family without a mask, or to know the overall average? Layering on the grid format introduces another choice: is it more valuable to see the ratings for different activities side by side, or to know the overall comfort level for all of the activities combined?
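Both views come from the same grid data. Here’s a sketch with hypothetical counts (the activity names and numbers are invented; 99 of 146 respondents works out to roughly the 68% figure mentioned above):

```python
# Hypothetical Rating Radio Grid data: comfort ratings (1-5) per activity.
# Counts are invented for illustration.
grid = {
    "Time with friends and family": {1: 10, 2: 8, 3: 12, 4: 17, 5: 99},
    "Grocery shopping": {1: 25, 2: 20, 3: 30, 4: 31, 5: 40},
}

for activity, counts in grid.items():
    total = sum(counts.values())
    mean = sum(rating * n for rating, n in counts.items()) / total
    very_comfortable = 100 * counts[5] / total
    print(f"{activity}: average {mean:.2f}, "
          f"{very_comfortable:.0f}% Very Comfortable")
```

Seeing the per-activity averages side by side makes contrasts obvious, while pooling every activity’s ratings into one average answers the “overall comfort” question instead.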
No open-ended questions were included in this survey, but the Facebook comments in response to the survey post itself offer plenty of room for further analysis. We’ll set those aside for now.
Now what?
From the raw data as a whole to the variations in question types, not to mention the analytical opportunities afforded by the demographic items, there’s plenty of room to explore. Consider this a ‘choose your own adventure’ style process — because it is!
If you want to focus on rating questions — including reviewing results by Percentage, Weighted Score, Percent Favorable, or Net Intent — that’s a good way to make your data as meaningful as possible very quickly. You can also combine rating questions to formulate overall scores for your report: a composite score, scores by groups of questions, or scores by question.
If you’re interested in drilling down — including segmenting data, filtering results, and running Cross Tabs in the same report — that’s a smart way to better understand what variables may be impacting specific perceptions or behaviors.
If you just want it to look nice — including cleaning up answer options (hiding, merging, renaming, sorting), revisiting the data visualization (graph type, color palettes, etc.), and choosing whether to hide or display tables, statistics, or trend data — it’s easier than you think, and it’s probably a good move before you get ready to share.
If you want to share results — including PowerPoint downloads, dynamic links, password-protected access, and reports that can be filtered by the recipients — do it!
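As a rough illustration of how the rating views mentioned above differ, here is one common way to define them. My assumptions: “favorable” means the top two points of a 5-point scale, “unfavorable” the bottom two, and Net Intent is percent favorable minus percent unfavorable; check your reporting tool’s documentation for its exact definitions, and note the data below is invented:

```python
# Hypothetical 5-point rating distribution (not the actual survey data).
counts = {1: 30, 2: 15, 3: 12, 4: 20, 5: 69}
total = sum(counts.values())

weighted_score = sum(r * n for r, n in counts.items()) / total
pct_favorable = 100 * (counts[4] + counts[5]) / total    # top-2 box, assumed
pct_unfavorable = 100 * (counts[1] + counts[2]) / total  # bottom-2 box, assumed
net_intent = pct_favorable - pct_unfavorable             # assumed definition

print(f"Weighted score: {weighted_score:.2f}")
print(f"Percent favorable: {pct_favorable:.0f}%")
print(f"Net intent: {net_intent:+.0f}")
```

Each view compresses the same distribution differently: the weighted score preserves the middle of the scale, percent favorable ignores it, and net intent nets the two extremes against each other.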
In some simple cases, you might understand everything you need to know from the at-a-glance overview report. In most cases, though, that’s just the beginning. In the next part of this series, we’ll continue the exploration and learn more about both the results of this actual study and the tools we use to improve understanding. Stay tuned!
Want to dive deep into your data? Join a live workshop focused on building custom reports!