Newsletter Subject

The weird world of online, opt-in surveys

From

vox.com

Email Address

newsletter@vox.com

Sent On

Wed, Mar 6, 2024 11:02 PM

Email Preheader Text

Sites like SurveyJunkie and Swagbucks offer money in exchange for taking surveys, which doesn’t

Sites like SurveyJunkie and Swagbucks offer money in exchange for taking surveys, which doesn’t always produce the best quality data. Researchers use online opt-in surveys all the time. Should they?

Search around for ways to make a little extra money online, and you might find yourself at one of many sites that offer to pay you to take surveys. There’s Swagbucks, SurveyJunkie, InboxDollars, and KashKick, for instance. On each of these sites, users are paid small amounts of money for completing surveys, playing games, or making purchases.

The surveys on these sites are “opt-in” surveys, meaning that participants actively choose to take them, rather than researchers pulling a random sample of a population to poll, as professional pollsters do. Unsurprisingly, opt-in surveys can lead to some skewed results: Earlier this week, [Pew Research Center wrote] about its analysis of one such opt-in survey that found 20 percent of US adults under 30 believe that “The Holocaust is a myth.” Pew’s attempt to replicate this result via a random sampling of Americans found that just 3 percent of Americans under 30 agreed with an identically worded statement about the Holocaust — a percentage that was more or less the same across all age groups.

The analysis also included this incredible tidbit: “In a February 2022 survey experiment, we asked opt-in respondents if they were [licensed to operate a class SSGN (nuclear) submarine]. In the opt-in survey, 12% of adults under 30 claimed this qualification, significantly higher than the share among older respondents. In reality, the share of Americans with this type of submarine license rounds to 0%.”

Oof, right?

The [Google] results for survey sites are filled with reviews from people who are mainly concerned with whether these sites are “legitimate” or scams. But the Pew analysis points to another question: Just how good is the data collected for a survey when its participants are incentivized to speed through as many surveys as possible in order to earn cash?

The problems with opt-in surveys, explained

I dug around and, surprise! It’s complicated.

“Errors are introduced (and remediated) in the survey process at every step,” noted David Rothschild, an economist at Microsoft Research. The fact that a survey was conducted online for a small reward isn’t necessarily enough information to judge its data quality in a meaningful way.

As Pew noted in its analysis, the Holocaust denial survey used an agree/disagree format that can lead to “[acquiescence bias]” — a tendency for respondents to give an affirmative reply. This means that while the collection method might have been part of the problem, the question itself may also have led to inaccurate results.

“There are many types of opt-in online audiences; some have strong vetting to ensure the respondents are who they say they are and produce high quality responses, while others just accept whomever without any pre-response quality control,” Rothschild added.

Here’s what you need to know.

How do online survey sites work?

Although there are a couple of different models, the online survey sites we’re talking about offer small rewards in exchange for survey participation. Most say they try to “match” users to relevant surveys based on the data they collect about their users, and generally speaking, you only get paid if you qualify for a survey, pass its screening questions, and complete every required question.

These sites typically pay users in points, which translate to small dollar amounts per survey. These points often do not translate to very much money: I created an account on Swagbucks and checked a list of available surveys. They included a 20-minute survey for 119 “Swagbucks,” which translates to … $1.19. Longer surveys may offer more, while some surveys with a 10-minute time estimate offer less than a dollar. The rates I saw on SurveyJunkie were similar. On [Amazon]’s Mechanical Turk, a marketplace for work that includes survey taking, [a survey might pay less than 10 cents].
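For a sense of scale, here’s a rough back-of-the-envelope sketch of what those rates work out to per hour. It assumes points convert at a cent apiece, as the 119-Swagbucks-for-$1.19 figure above implies; actual conversion rates vary by site and reward.

```python
# Rough effective hourly rate for a paid survey.
# Assumes $0.01 per point, consistent with 119 Swagbucks ~= $1.19 above;
# real conversion rates vary by site and reward.

def effective_hourly_rate(points: int, minutes: float,
                          dollars_per_point: float = 0.01) -> float:
    """Implied hourly pay: payout divided by time spent, in hours."""
    payout = points * dollars_per_point
    return payout / (minutes / 60)

# The 20-minute, 119-point survey mentioned above:
print(f"${effective_hourly_rate(119, 20):.2f}/hour")  # -> $3.57/hour
```

At rates like that, the rational move for a survey taker is to finish as many surveys as possible, as quickly as possible, which is exactly the incentive that worries researchers.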
Why would pollsters and researchers use sites like these to collect responses?

In some applications, like election polls, opt-in surveys can perform similarly to random probability-based surveys, as Pew noted. Which is great, because they are generally much cheaper to conduct. “Lower cost survey pools are great for exploration,” said Rothschild, and for when you don’t need a very precise outcome. The results are generally faster, cheaper, and more convenient.

“Especially for research that’s being done on a close-to-shoestring budget, opt-in online surveys are a natural choice for scholars trying to study diverse aspects of social behavior,” added Thomas Gift, an associate professor of political science at University College London.

Gift and another researcher studied the potential for fraudulent responses in online opt-in studies after using an opt-in study themselves to investigate a separate question. “It was only during the fielding of the experiment that large cohorts of respondents seemed to be giving suspicious answers about their backgrounds,” he said. “So we investigated further.”

Why, and when, are online surveys prone to bogus respondents?

Researchers can use a lot of tools, including screening questions, to weed out bad responses and end up with a set of usable data. But in some instances, such as surveys of obscure beliefs or studies that need really precise data, opt-in online surveys are going to be a problem.

Pew noted a few considerations here: Based on its research over the years, online opt-in polls have a tendency to overestimate fringe beliefs (it gave the example of belief in conspiracy theories). That overrepresentation is more severe among younger respondents and among Hispanic adults, Pew noted.

Gift and his research partner hired a “nationally-recognized” marketing firm — which they left unnamed in their paper for liability reasons — to conduct a survey for them that collected respondents with experience in the Army. This firm, they said, distributed the survey to a number of sub-vendors that provided financial incentives for responses (these sub-vendors were also left anonymous).

To detect whether respondents really did have experience in the Army, Gift used screening questions embedded in the survey. Respondents were asked about saluting protocol and for specific information on their military background. [Based on their analysis of those screeners], nearly 82 percent of respondents may have pretended to be associated with the Army in order to take the survey and get paid for it. About 36 percent of those respondents passed the knowledge screening test but were identified as probably misrepresenting themselves based on their answers to the survey questions.
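To make the screening idea concrete, here’s a minimal sketch of how trap and knowledge questions can flag suspect respondents. The field names, questions, and rules below are hypothetical illustrations, not the actual instrument from Gift’s study or Pew’s surveys.

```python
# Minimal sketch of screener-based flagging. Every field name, question,
# and rule here is a hypothetical illustration, not the actual instrument
# from Gift's study or Pew's surveys.

def flag_respondent(resp: dict) -> list[str]:
    """Return a list of reasons this response looks bogus (empty if none)."""
    flags = []

    # Trap question: essentially no one holds this qualification, so a
    # "yes" suggests the respondent is agreeing without reading.
    if resp.get("has_ssgn_submarine_license"):
        flags.append("claimed near-impossible qualification")

    # Knowledge screener: a claimed Army veteran should know basic
    # saluting protocol.
    if resp.get("claims_army_service") and not resp.get("salute_protocol_correct"):
        flags.append("failed military knowledge screener")

    return flags

# Example: a respondent who claims both the submarine license and Army
# service, but gets the saluting question wrong.
suspect = {
    "has_ssgn_submarine_license": True,
    "claims_army_service": True,
    "salute_protocol_correct": False,
}
print(flag_respondent(suspect))
# -> ['claimed near-impossible qualification', 'failed military knowledge screener']
```

As Gift’s numbers suggest, screeners catch a lot but not everything: about a third of the likely impostors in his study passed the knowledge test and were identified only by inconsistencies in their substantive answers.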
There was also evidence in the survey results that some respondents were taking the survey multiple times, giving nearly identical answers and tweaking their demographic data just enough to pass as different people, presumably to get paid more than once for the same survey.

How can researchers minimize bogus responses and end up with useful data from an online survey?

Essentially, by testing the respondent. Online surveys use attention checks, IP tracking, anti-bot software, and completion-time monitoring to try to mitigate fraud. Asking respondents questions like the one Pew flagged, about having a license to drive a submarine, is a pretty good way to tell whether someone is just cruising through and answering questions as quickly as possible, or actually reading them.
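Here’s a minimal sketch of two of those automated checks: flagging “speeders” by completion time, and catching the kind of near-duplicate submissions described above. The data layout and thresholds are hypothetical illustrations, not any survey platform’s actual system.

```python
# Sketch of two automated fraud checks: completion-time flagging and
# near-duplicate detection. Data layout and thresholds are hypothetical.

from itertools import combinations

def answer_overlap(a: list[str], b: list[str]) -> float:
    """Fraction of positions where two answer lists match."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def audit(responses: list[dict], min_seconds: int = 120,
          dup_threshold: float = 0.95) -> set[tuple[str, str]]:
    """Return (respondent_id, reason) pairs for suspicious submissions."""
    flagged = set()
    for r in responses:
        # Finishing a long survey in under two minutes suggests random clicking.
        if r["seconds"] < min_seconds:
            flagged.add((r["id"], "speeder"))
    # Pairwise comparison is O(n^2): fine for a sketch, not for production.
    for r1, r2 in combinations(responses, 2):
        if answer_overlap(r1["answers"], r2["answers"]) >= dup_threshold:
            flagged.add((r1["id"], "near-duplicate"))
            flagged.add((r2["id"], "near-duplicate"))
    return flagged

responses = [
    {"id": "a1", "seconds": 95,  "answers": ["yes", "no", "5", "agree"]},
    {"id": "a2", "seconds": 610, "answers": ["yes", "no", "5", "agree"]},
    {"id": "a3", "seconds": 540, "answers": ["no", "yes", "2", "disagree"]},
]
print(audit(responses))  # flags a1 as a speeder; a1 and a2 as near-duplicates
```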
Nothing is going to catch every single bogus response, and, as Rothschild noted, some low-quality responses will slip through attention checks.

There are also other models for collecting data online, Gift noted. Opt-in volunteer surveys “aren’t without their limitations,” but they create a different set of incentives for participants, ones that don’t rely on a financial reward. Gift highlighted the work of the [Harvard Digital Lab for Social Sciences], an online platform that allows people to volunteer to participate in social science research.

While researchers might not be able to catch every single bad response, they can be transparent about how they collected their data, Rothschild noted. And it’s worth looking for that information the next time you see a shocking headline about a shocking belief held by The Youth.

—A.W. Ohlheiser, senior technology writer

[Bitcoin is ... back?] Bitcoin’s surge is fueling crypto optimism.

[Are we in the middle of an extinction panic?] How doomsday proclamations about AI echo existential anxieties of the past.

[A poster’s guide to who’s selling your data to train AI] Those Tumblr, Reddit, and WordPress posts you never thought would see the light of day? Yep, them too.

[The less-than-magical Willy Wonka event, briefly explained] The viral fiasco in Scotland that made kids cry — and prompted calls to police.

[The Supreme Court appeared lost in a massive case about free speech online] The justices look likely to reinstate Texas and Florida laws that seize control of much of the internet — but not for long.

Support our work

Vox Technology is free for all, thanks in part to financial support from our readers. Will you join them by making a gift today?

[Give]

Listen to This

[AI on Trial: Bot-Crossed Lovers] What happens when an AI chatbot takes part in a crime? Listen to the first episode of a Stay Tuned miniseries, “AI on Trial,” featuring Preet Bharara in conversation with Nita Farahany, professor of law and philosophy at Duke University. [Listen to Apple Podcasts]

This is cool

[“I WANNA GO TO PANERA BR”]
