What is a Good Response Rate for Surveys?

"How can we get our survey response rates up?"

I'm asked this question a lot.  Everyone who conducts surveys wants more people to take their surveys.  But people, for the most part, don't seem to get excited about all the surveys we have for them.  Given how easily a web survey can be created and implemented, more and more surveys are being launched.  And the more survey requests someone gets, the less likely it is that they will comply.  So response rates have been falling for years.

There is an assumption that the higher the response rate to a survey, the more accurate the results will be.  Oftentimes the first question I get about a survey is about the response rate, because people have been conditioned to connect a low response rate with inaccurate results.  That connection gets invoked most often when someone does not like the results of the survey.  A low response rate lets people dismiss findings they don't want to hear without actually having to deal with those findings.

But (shhhh) nobody really knows what a good response rate is. 

I recently, however, came across a paper that addressed this in a great way.

"How Important are High Response Rates for College Surveys?" is a paper by Kevin Fosnacht, Shimon Sarraf, Elijah Howe, and Leah K. Peck all at Indiana University Bloomington that was published in The Review of Higher Education in the winter of 2017 (I've linked to a version of the paper you can access without a membership in any particular library source).  It is one of the best treatments of this issue I've read.

OK.  Now, think about web surveys.  You get an email asking you to take a survey.  Maybe you do that (thank you!), or maybe the email sinks lower and lower in your inbox until you forget about it.  A few days later you get another email asking you to take that survey.  But you are busy brushing your cat and there is hair everywhere, and then after you clean that up there is that show you've been binge-watching, and again you don't take the survey.  A few days later, there it is again, another reminder to take that survey.  But this time you aren't terribly busy with something else and it sounds kind of interesting, so you click on the link and take that fascinating survey.

You've just gone from a non-respondent to a respondent.  If I hadn't sent you that second reminder (you knew it was me sending you that survey, didn't you?  It's kind of what I do), you would not have taken it.  There is a direct relationship between the effort researchers put into getting people to respond (number of emails, incentives offered, how long the survey stays open) and how many do respond.  As the person administering the web survey, I also know when you completed it.  So I can tell if you did it right away after that first email (low effort), or if you only did it because I emailed you three times (high effort).

We can compare the results of the early respondents with those of the later respondents.  So, what are the results when only, say, 30% of the sample has answered?  Let's add in the next wave of respondents and see if the results change when we have 50% answering.  This tells us whether the results change (and maybe get more accurate?) as a function of the response rate.

The authors of this particular study took a large database of responses from multiple institutions to the National Survey of Student Engagement (NSSE) to test this and, using the timing of the responses, simulated surveys with a range of response rates.  They could then see what happened if a survey had, say, a 10% versus a 90% response rate.  Which is very cool.
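To make the mechanics concrete, here is a minimal sketch in Python, with entirely made-up data.  This is my illustration of the general idea, not the authors' code: sort respondents by how quickly they answered, then keep only the earliest responders needed to hit each target response rate.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: each respondent has a response time (days after
# the first invitation) and an answer to one survey item (1-5 scale).
# With random data like this, early and late responders don't differ
# systematically; with real data, that's exactly the question to test.
sample_frame = 4_000       # number of people originally invited
n_respondents = 2_000      # number who eventually answered
response_days = rng.exponential(scale=4.0, size=n_respondents)
answers = rng.integers(1, 6, size=n_respondents)

# Sort respondents by speed of response: early responders come first.
answers_by_speed = answers[np.argsort(response_days)]

# Simulate surveys with different response rates by keeping only the
# earliest responders needed to reach each target rate.
for target_rate in (0.05, 0.10, 0.30, 0.50):
    n_kept = int(target_rate * sample_frame)
    subsample = answers_by_speed[:n_kept]
    print(f"response rate {target_rate:>4.0%}: "
          f"n = {n_kept:4d}, mean = {subsample.mean():.2f}")
```

With real data, the interesting question is whether those means drift as the later, harder-to-reach respondents get added in.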

And what they found was that response rate didn't really matter much.

The reliability of the results was pretty good whether you used a 5% response rate or a 75% response rate.  In fact, what mattered more was how many respondents you had, with smaller numbers being less reliable than larger numbers.  Their study showed that reliable data could be obtained with a sampling frame of 500 even when the response rate was as low as 5% to 10%.  Of course, the number of respondents you need should also be influenced by the types of analyses you want to conduct, especially if you want to look at subgroups of students (for instance, men versus women).
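To see why the respondent count drives precision, here's some back-of-the-envelope arithmetic.  This is just the standard margin-of-error calculation for a sample proportion, not anything from the paper, and it deliberately ignores nonresponse bias (which the paper addresses separately):

```python
import math

def margin_of_error(n_respondents: int, p: float = 0.5) -> float:
    """95% margin of error for a sample proportion; p = 0.5 is the
    worst case.  Depends on completed responses, not response rate."""
    return 1.96 * math.sqrt(p * (1 - p) / n_respondents)

# The same 10% response rate yields very different precision
# depending on how many respondents it produces:
for frame in (500, 5_000, 50_000):
    n = int(0.10 * frame)
    print(f"frame {frame:6d}, 10% response -> n = {n:5d}, "
          f"+/- {margin_of_error(n):.1%}")
```

Ten percent of a frame of 500 is only 50 people (a margin of error around 14 points), while 10% of 50,000 is 5,000 people (under 2 points).  Same response rate, very different reliability.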

One study should not overly influence our practices, but the authors also provide evidence of similar findings from other researchers.  I definitely recommend that you read their paper, as there are nuances in it that a blog post cannot capture.  But perhaps we are seeing a move away from the tyranny of the response rate.

 
