Do research incentives actually increase participation?
By Luke Strathmann | 7 min read | Updated Feb 1, 2023
From our reading of existing literature, modest cash prepaid incentives are consistently the most effective form of incentive.
Post-payments for returning a questionnaire, or lotteries that promise a potential reward later, have also had small positive effects, or larger effects when the payment amount is significant and salient.
Studies that have paired pre- and post-survey incentives have also shown success: prepayment builds legitimacy with potential survey participants, and then follow-up contact (with additional incentives) signals why the survey is important to the user or organization.
Throughout this post, we explore the rich body of research that surfaced these conclusions, and highlight rigorous studies about which types of research incentives are most effective.
We consider types of incentives (e.g., money, gifts or prepaid cards, charity, lotteries, etc.), as well as types of research (consumer or market research, user experience (UX) research, university research, clinical research) all with one question in mind:
With countless ways to structure incentive programs to attract and engage research participants, what does rigorous academic research say about what’s best?
Table of contents
Across the board, incentives improve response rates
Prepaid incentives are more effective at increasing response rates than promised ones
Combining pre- and post-payments is also effective at increasing participation
Lotteries are a popular incentive as well, and consistently increase research participation
Lotteries often have several benefits over other rewards
Money, gifts or prepaid cards, donations to charity, or lotteries all increase study participation
Final questions for consideration
Incentives play a huge role in boosting research participation and quality
We’ve blogged about the full suite of research incentive possibilities here: “Incentive programs — 21 ways to incentivize employees, customers and participants,” and in general, they take two forms:
Monetary incentives (e.g., cash, gift cards, prepaid cards (like Visa® prepaid cards), a donation to a charity); or
Non-monetary incentives (e.g., swag, gifts, travel, merchandise, experiences, promotions, professional development).
For each of these types, payouts are generally structured as one of the following:
Guaranteed Pre-payments: e.g., offering the incentives up front as a way to recruit people into your survey, or to establish trust with participants.
Guaranteed Post-payments: e.g., as a reward for completing a survey.
Lotteries: e.g., offering a chance of winning a large payment or prize.
Across the board, incentives improve response rates
Numerous research studies and meta-analyses confirm that incentives are an effective tool to increase research study response rates, participation, and response quality.
For example, Singer and Ye (2013), in a systematic review of the use and effects of survey incentives that has been cited over 600 times, conclude that incentives increase response rates across all survey modes. They find that:
Monetary incentives increase response rates more than gifts, and prepaid incentives increase them more than promised incentives or lotteries.
They argue that there is no clear evidence on how large an incentive should be. In general, however, response rates increase with the size of the incentive, but at a declining rate.
They note that incentives had significantly greater effects in studies where the response rate without an incentive was low. In other words, it helps to target incentives at sample members who otherwise would not respond.
In another review, Göritz (2006) looked at 32 experiments that investigated the impact of incentives on response rates in web surveys, with a total sample of over 210,000 people.
Across all of these studies, material incentives promoted both survey response and retention.
After pooling the studies together, her analysis suggests that material incentives increase the odds of a person responding by 19 percent over surveys with no incentives.
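It's worth noting that a 19 percent increase in the *odds* of responding is not the same as a 19-point jump in the response rate. A minimal sketch of the conversion (the 30 percent baseline response rate is our illustrative assumption, not a figure from the study):

```python
# Sketch: translating Göritz's (2006) 19% odds increase into response-rate
# terms. The baseline rate below is a hypothetical assumption.

def apply_odds_ratio(baseline_rate: float, odds_ratio: float) -> float:
    """Return the new response rate after multiplying the odds by odds_ratio."""
    odds = baseline_rate / (1 - baseline_rate)   # convert rate to odds
    new_odds = odds * odds_ratio                 # apply the odds increase
    return new_odds / (1 + new_odds)             # convert odds back to a rate

baseline = 0.30                                  # assumed no-incentive response rate
with_incentive = apply_odds_ratio(baseline, 1.19)
print(f"{baseline:.0%} -> {with_incentive:.1%}")  # 30% -> 33.8%
```

So at a 30 percent baseline, the incentive effect works out to roughly a 4-point lift, not 19.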
Prepaid incentives are more effective at increasing response rates than promised ones
For example, in a study on increasing response rates to web surveys, Bretschi, Schaurer, and Dillman (2021) highlight several theories for why pre-payments are effective.
Social exchange theory suggests that coupling the survey invitation with pre-paid incentives increases response rates by:
Drawing attention to the request;
Establishing trust with the participants about the intention of the study; and
Triggering a sense of reciprocity.
In a chapter of the book “Survey Methodology” (2020), Professor Don Dillman, a leading expert in the field, reviewed a large body of incentives research and concluded that incentives are most effective as small monetary payments (e.g., cash) provided with the survey request (i.e., prepaid).
While the research is more mixed, he also notes that small post-survey payments, or making a donation to a charity of choice, can have positive effects.
Large post-survey payments, either guaranteed or through lotteries, are also likely to increase response rates, though the exact amount is largely context specific (e.g., What does your sample population look like? What would a meaningfully large incentive look like for that group?).
Combining pre- and post-payments is also effective at increasing participation
For example, after six waves of experiments, with more than 5,000 participants in each wave, researchers settled on a structure that was most effective: a $5 or $10 pre-survey payment, along with $20 upon receipt of a completed survey for those who didn’t originally respond.
To gain this insight, from 2014 through 2021, the Federal Housing Finance Agency (FHFA) and the Consumer Financial Protection Bureau (CFPB) mailed close to 175,000 surveys to mortgage borrowers.
At first, approximately one third of the mailed surveys were completed and returned, but that share is approaching 50 percent in recent waves.
They experimented with incentive structures that varied the amount (between $5 and $30) and the structure (pre- vs. post-payment rewards, and combined rewards).
Lotteries are a popular incentive as well, and consistently increase research participation
Lotteries were common in the Roman Empire, and appear throughout the Bible (Jesus’ garments were divided by casting lots).
An early version of keno dates to the Han dynasty and helped pay for the Great Wall of China, and today, widely popular US state lotteries use revenues to fund state budgets.
More recently, in an experimental study published in the International Journal of Market Research, Göritz and Luthe (2013) found that lotteries in commercial online panels enhance participation, and that the odds of participating with a lottery are 18% larger than without a lottery.
Their analysis suggests that for each €10 increase in prize size, the odds of participation rose by 2%. They also found that higher total lottery payouts lowered item non-response.
Their results indicate that participants responded more to the size of a single prize than to the total payout or the odds of winning, and participation was higher when the payout was raffled as one lump sum rather than split into several smaller prizes.
This suggests structuring lotteries around a few large prizes rather than, say, 100 smaller ones.
State and national lotteries run on similar logic.
For example, a 2022 New Yorker article about lotteries notes that “the difference between one-in-three-million odds and one-in-three-hundred-million odds didn’t matter, but the difference between a three-million-dollar jackpot and a three-hundred-million-dollar jackpot mattered enormously.”
In other words, if you’re able to afford a larger and more salient amount, the odds of bringing more people into your survey may increase dramatically.
Other studies have also found positive effects of lotteries. For example, in Göritz and Wolff (2007), they found that a lottery of gift certificates increased response rates in longitudinal online panel studies.
In a study by Zhang, Lonn, and Teasley (2016), they conducted a web survey experiment where they altered the salience of the lottery incentive, and found that the emails that highlighted the lottery in the subject (“Win $50 Gift Certificates for Amazon”) and body (“Don’t miss out on your chance to win!”) raised the response rate by 5 percentage points, relative to emails that emphasized the value of the survey itself.
Lotteries often have several benefits over other rewards
For example, they are easy to implement, and because there are only a few winners, costs are generally much lower than for other forms of incentives.
And the higher your participation rate, the more cost-efficient a lottery becomes compared to a per-capita reward.
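The cost dynamic is easy to see with some back-of-the-envelope numbers (all figures below are illustrative assumptions, not drawn from any study):

```python
# Sketch: a fixed lottery pool gets cheaper per respondent as participation
# grows, while a guaranteed per-capita reward scales with the sample.

LOTTERY_POOL = 500        # assumed: one $500 prize, regardless of sample size
PER_CAPITA_REWARD = 10    # assumed: guaranteed $10 payment per respondent

def per_respondent_cost(pool: float, respondents: int) -> float:
    """Cost per respondent when a fixed prize pool is spread over the sample."""
    return pool / respondents

for n in (25, 50, 100, 500):
    cost = per_respondent_cost(LOTTERY_POOL, n)
    cheaper = "lottery" if cost < PER_CAPITA_REWARD else "per-capita reward"
    print(f"{n:>4} respondents: lottery costs ${cost:,.2f} each -> {cheaper} wins")
```

Under these assumptions the break-even point is 50 respondents; beyond that, the lottery's per-respondent cost keeps falling while the guaranteed reward stays flat.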
Money, gifts or prepaid cards, donations to charity, or lotteries all increase study participation
However, no study that we know of has compared all of these incentive types against one another in the same program.
For example, to settle the question of which type of incentive is most effective, researchers would need to randomly offer different participants either a small payment, a large payment, a lottery, a prepaid card, a donation to charity, or nothing, and then compare the participation rates.
There are studies that support each of these types of incentives individually, but it’s hard to tell from the research alone, absent context of a specific research setting, whether there’s a precise formula for which type of incentive to use, and how much.
However, this type of research does exist in other realms, and researchers and policymakers have long used fixed and lottery-based incentives to encourage behavior change.
For example, proponents of behavioral incentives have shown that monetary incentives can help get people to study, exercise more, or get vaccinated (e.g., see Uri Gneezy, Stephan Meier, and Pedro Rey-Biel (2011)).
In a study of performance-based incentives for educational achievement, Levitt, List, and Sadoff (2016) compare the relative effectiveness of two incentive structures (fixed-rate and lottery).
For meeting a monthly educational goal, they offered either $50 or a 10% chance to win $500, and found that both types of incentives were similarly effective and had lasting long-term returns.
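One reason these two arms are cleanly comparable is that they carry the same expected value, so any difference in effect reflects the structure of the incentive rather than its average payout. A quick check, using the figures reported above:

```python
# The two incentive arms in Levitt, List, and Sadoff (2016):
win_probability = 0.10    # 10% chance to win in the lottery arm
prize = 500               # $500 lottery prize
fixed_reward = 50         # guaranteed $50 in the fixed-rate arm

lottery_ev = win_probability * prize   # expected value of the lottery arm
print(f"Lottery expected value: ${lottery_ev:.2f} vs. fixed reward: ${fixed_reward:.2f}")
```

Both arms pay $50 in expectation, which is what makes "similarly effective" a meaningful comparison.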
Given that research participation is a much less complex decision, we find it encouraging that incentives have been shown to be effective in so many different settings, including complex behavioral decisions.
Final questions for consideration
Through our careful review of the literature, we also think it’s important to consider the following:
How well do you know your research participants? Loyal customers or participants are likely willing to accept smaller or no payments. Targeting specific types of incentives, or different amounts, to different types of participants (e.g., new customers, or those unlikely to respond), could be a useful tool for maximizing participation and response quality.
What is the goal of your research? Are you trying to get as many people as possible? Or do you value a smaller sample, with more in depth feedback? The structure of your research can help define your incentive.
What other factors might affect your participation rate? There is a ton of research on optimal survey design, and many factors that affect participation. Even if you offer an incentive, it’s important to consider other survey features such as targeting, communication, timing/length, question design, format, mode, interviewer characteristics, and survey firm legitimacy. In some cases, these factors could affect your participation rates more than any type of incentive.
Will offering an incentive bring in the wrong type of participant? There is some concern that paying people may actually decrease data quality, because participants may rush through the study to get the payment. However, researchers looked into this in a study that included hundreds of thousands of student respondents from over 600 colleges and universities, and found little evidence that survey incentives negatively affect data quality. In fact, respondents who received an incentive actually had better data quality than those who didn’t.
Published January 25, 2023