Last week, Google rolled out its new Google Consumer Surveys service. Finally, “statistically significant, valid results at scale” at a price anyone can afford — only $0.10 per response! But is letting the untrained masses create “scientific” polls dangerous?
I’m not a professional pollster, nor have I had training in creating polls. I’d be very curious to hear what those who do polls for a living think about the service, and I might look into that more later. I imagine some will talk it down because it does, after all, potentially undercut the expensive opinion polls out there.
I am, however, a veteran of dealing with polling data as a journalist for over 20 years. I’ve seen all types of problems with polls. The biggest problem isn’t that they weren’t done “scientifically” but rather how they posed their questions. That’s something the new Google service doesn’t fix.
Consider the poll I ran and wrote about today in Survey: Nearly 80% Trust Google As Much Or More Than A Year Ago. I asked, “Do you trust Google as much as a year ago?” That could have been posed as a purely yes/no question. Had I done that, I wouldn’t have detected that the vast majority of people actually say they trust Google the same as a year ago. The results would have been massively different, and they were: out of curiosity, I ran the question again as a yes/no.
What if I’d asked something like “Is Google Evil?” Loading the question up in that way could have given me different results.
The press already has a tough enough time dealing with “professional” polls, interpreting what they accurately show or subjecting them to proper critical analysis. Now we potentially face a flood of new polls from people with no training at all.
That didn’t happen, obviously. The saving grace for the Washington Post is that the fine print of the poll said:
This is a non-scientific user poll. Results are not statistically valid and cannot be assumed to reflect the views of Washington Post users as a group or the general population.
Given all those disclaimers, you wonder why the Post bothered to run it at all. But today, the paper could easily run a “scientific” poll using Google’s service and get data that’s designed to be reflective of the internet population in the United States, data that Google even backs as accurate with a white paper.
End result? Potentially we get a lot of bad polls that have the appearance of being accurate.
I put some of these concerns to Paul McDonald, who is the senior product manager for Google Consumer Surveys. Here’s our email exchange:
Danny: The service comes across as allowing anyone to do scientific polling. But if you’re not careful about how you ask a question, and if you don’t think carefully about the questions themselves, you’re going to get skewed results.
Paul: I think your concerns about the quality of the data from self-service survey platforms are well known in the research community, as the mantra goes “Garbage in, garbage out”, though I wouldn’t characterize it as “dangerous.”
We try to encourage survey best practices in our help content, program policies and by providing survey templates to guide new researchers. For example, leading questions, push-polling and irrelevant survey text are all prohibited.
We also use statistical methods to ensure that viewers of the survey data understand the margin of error and potential bias introduced by our sample via the error bars on the charts and sample bias table at the bottom of the report.
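Google hasn’t published the exact formulas behind those error bars, but the standard margin-of-error calculation for a sample proportion gives a feel for what such a report conveys. This is a minimal sketch, assuming a simple random sample and a 95% confidence level; the function name and numbers are illustrative, not Google’s actual method:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a sample proportion p with n respondents.

    z = 1.96 corresponds to roughly 95% confidence under a
    normal approximation; assumes a simple random sample.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 80% of 1,000 respondents answer "yes".
moe = margin_of_error(0.80, 1000)
print(f"+/- {moe * 100:.1f} percentage points")  # roughly +/- 2.5 points
```

Note what this doesn’t capture: the formula only measures sampling noise, not the bias a badly worded question introduces, which is exactly the problem discussed above.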
In the end we are providing a platform which can be used to create professional and statistically accurate surveys. We believe users who view these reports can spot questions that will lead to a biased result and will make a call on the effectiveness of the survey creator.
Danny: The polling method seems scientific in nature, but if you ask bad questions, or if you don’t know how to interpret the results, ugh. I don’t know of any other self-serve polling system this inexpensive that allows this type of scientific surveying.
Paul: I hear you, I guess my issue is with your positioning. You seem to be saying that Google Consumer Surveys is more responsible for this than any other survey platform. There are definitely things we can do, some of which we’ve already addressed, to help researchers create good surveys, but as you mentioned, even scientific surveys done by professionals can have issues. One of our value propositions is that with our platform you can hand out a link to our reports, and other people can analyze the results, data and questions themselves.
I do love that Google Consumer Surveys allows anyone to share their full data, though I’m hoping this will be improved by also adding some guidance for attribution, so that when people link back to a survey, they can see which organization ran the poll and give it proper credit.
I also love the other type of polling that the product seems especially aimed at: helping marketers gather consumer reactions to product changes, needs, desires, logos and more. Be sure to see the case studies page and the how it works page for examples of this.
As for my worries about broader opinion polls, perhaps the new service will help there by promoting more critical thinking about polls in general. Even the “scientific” polls from trained professionals often fail to measure up, especially if those professionals have deliberately constructed a poll to reach a particular conclusion. If a flood of new polls develops, all polls might get held to a higher standard.