I need your help.
I have a problem and want your opinion.
I’ve been running the Domain Name Wire Survey for seven years. It asks people for their opinions on the state of the domain industry and which service providers are best.
The survey was never scientific, but the results seemed to jibe with reality. Yet last year social media campaigns by service providers started to skew the results. This year it went to a whole new level, with some companies you’ve never heard of suddenly receiving hundreds of write-in votes.
I don’t blame these companies for asking their loyal customers to vote for them. In a way it’s quite telling; if your customers take the time to fill out a survey on your behalf then they really like you. On the other hand, I don’t think the results have any basis in reality.
I have a few options, none of which are perfect:
1. Exclude survey results from people who self-reported owning fewer than 50 domains. Many people who respond to social media pushes aren’t really the target market for this blog and own very few domains. The results look more “normal” when you remove these survey takers, which represent about half of the 1,600 respondents. However, I hate to exclude someone’s voice just because they aren’t as big as others.
2. Post the results as-is but with my commentary on why they came out the way they did.
3. Don’t post results on questions about service providers.
What would you like me to do?
Paul says
please post results, less the “spam” votes.
i trust your judgement.
Adam says
Show the before and after. 2 sets of results, one with the spam included and one without.
Marg says
Agree with the above two comments. Post the “real” results and then the possibly skewed ones. Just let us know which is which!
JS says
Perhaps next year you could partner with the various domain forums (logged in users + unique email (and of course IP)) to participate. Easier then to weed out the newbies (assuming they are the ones being “brainwashed”).
2e says
You should report it as you get it. We like to see other players than GoDaddy, Register.com, and NSI.com. Maybe someone will emerge from your dead list to the live list.
Kevin Ohashi says
I think posting both would be interesting. I don’t think you’re going to be able to stop people from trying to manipulate something if there is something to gain. It’s a game of cat and mouse.
Tia Wood says
Do away with the write-ins and hold nominations first, separate from the actual voting, imo.
S.W. says
Agree with the above sentiment of posting both results. Filtering out folks with < 50 domains might be effective for this year only, next year people will probably try to game that as well.
Andrew Allemann says
Good idea, Tia. Yet last year one of the non-write-ins got inflated.
Steve says
You could weight the results to match the sample from last year (or match it to an average of the samples from the last few years). This would allow for better comparability to previous years’ results, since the sample ‘characteristics’ would match.
As an example, using only the ‘# of domains’ variable to calculate weight factors (…although you can use multiple variables to produce the weight factors):
Past Years:
2011: 35% (<50), 55% (51-1000), 10% (1001+)
2010: 32% (<50), 52% (51-1000), 16% (1001+)
2009: 38% (<50), 48% (51-1000), 14% (1001+)
~ AVG: 35.0% (<50), 51.7% (51-1000), 13.3% (1001+) ~
This Year:
2012: 50% (<50), 26% (51-1000), 4% (1001+)
So, to make this years sample comparable to the AVG of the previous three years, the weights would be:
0.700 (<50)
1.988 (51-1000)
3.325 (1001+)
In this way, you do not 'throw out' any respondents. Instead, if indeed the number of '<50' domainers is abnormally large compared to previous years, you 'weight down' that segment (in this case by a factor of 0.700) and 'weight up' the 51-1000 and 1001+ segments by factors of 1.988 and 3.325 respectively.
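[Editor’s note: a minimal Python sketch of the weight calculation Steve describes, using his figures; the function name is hypothetical. Each segment’s weight is simply its target share divided by its observed share in this year’s sample.]

```python
def segment_weights(target, observed):
    """Weight factor per segment: target share / observed share."""
    return {seg: target[seg] / observed[seg] for seg in target}

# Three-year average shares vs. this year's sample, from Steve's figures (%).
target_2009_2011 = {"<50": 35.0, "51-1000": 51.7, "1001+": 13.3}
observed_2012 = {"<50": 50.0, "51-1000": 26.0, "1001+": 4.0}

weights = segment_weights(target_2009_2011, observed_2012)
for seg, w in weights.items():
    print(f"{seg}: {w:.3f}")
# <50: 0.700, 51-1000: 1.988, 1001+: 3.325 — matching the factors above
```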
It's normal in survey research to 'weight' the survey sample so its demographics/characteristics exactly match the population you are trying to measure. For instance, if you wanted to report how the average 'North American' felt about a topic, you might survey 1,000 people from each of Canada, the USA, and Mexico. Since the population sizes of the three countries are significantly different (34 mil, 313 mil, and 112 mil respectively), you would need to apply 'country weights' to correct the survey sample so it matched the actual North American population, in this case:
Country Weights:
0.22 (Canada)
2.05 (USA)
0.73 (Mexico)
Steve
(Market Researcher by day; Domainer by night)
Lucas says
agree with S.W. & above: raw+filtered.
Also:
“However, I hate to exclude someone’s voice just because they aren’t as big as others.”
Yes, but you are not excluding them for being small. Being small indicates they are not the “target market for this blog” (as you observed), and that is why they are excluded.
Joe says
#1 is out of the question. I’m undecided between 2 and 3, but I’d probably go with #2.