Survey Delivery and Participation
Choosing Your Platform
Digital
A few popular options for digital surveys are SurveyMonkey, Typeform, and Qualtrics.
| Strengths | Weaknesses |
| --- | --- |
| Allows a flexible window of availability, giving members of your target population the chance to participate on their own schedule. Incurs minimal expenses for basic survey software, with no physical supplies needed. Regarded as the most efficient way to survey a large population sample (Uhlig et al., 2014). | Tends to yield lower response rates. Requires respondents to have internet access. |
Analog
Analog surveys are administered in hard copy, i.e., on paper.
| Strengths | Weaknesses |
| --- | --- |
| Response rates are, on average, almost double those of digital surveys (Weigold et al., 2019). | Requires significant planning to coordinate the distribution and retrieval of the surveys. Potentially high costs related to printing. |
Response Burden
A major factor to consider when choosing a survey type is the response burden you place on respondents. Response burden is the toll that answering the survey takes on the respondent, whether mentally or, more practically, in time spent. The longer a respondent must spend reading, processing, and responding to questions, the greater the response burden (Rolstad et al., 2011). If participants deem the response burden too high, they may rush, skip questions, or choose not to participate, all of which can impair your data.
You can reduce the response burden by asking fewer questions, shortening individual questions, limiting the length of the responses requested, and similar methods; a rough way to gauge burden is sketched below. The overall length of your survey should be driven by your research question and your population.
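As a rough illustration (not a method from the sources cited above), the sketch below estimates completion time from word counts. The reading speed of about 230 words per minute and the per-question answering overhead are both assumptions you should tune to your own population.

```python
# Rough estimate of survey completion time as a proxy for response burden.
# Both constants below are assumptions, not values from the cited sources.

READING_WPM = 230          # assumed average silent reading speed
ANSWER_OVERHEAD_SEC = 7.5  # assumed seconds spent deciding per question

def estimated_minutes(questions: list[str]) -> float:
    """Estimate total completion time in minutes for a list of questions."""
    total_words = sum(len(q.split()) for q in questions)
    reading_min = total_words / READING_WPM
    answering_min = len(questions) * ANSWER_OVERHEAD_SEC / 60
    return reading_min + answering_min

questions = [
    "How many hours per week do you spend studying outside of class?",
    "Which of the following resources did you use this semester?",
]
print(f"Estimated completion time: {estimated_minutes(questions):.1f} min")
```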
Social Desirability Bias
All respondents bring some innate bias. In particular, if a response option on your survey is something an individual will not want to admit to (e.g., if your survey asks about cheating or a belief that is not socially acceptable), the data will likely skew toward the socially desirable answer. This can be somewhat mitigated by making the survey anonymous. However, if you wish to run a longitudinal survey, you will need unique identifiers that link each survey to the same respondent without identifying that respondent to you.
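One common way to build such identifiers, sketched below under assumed conditions, is to have each respondent supply the same self-chosen passphrase at every survey wave and derive a pseudonymous ID from it. The passphrase, salt value, and ID length here are all hypothetical choices, not a prescribed standard.

```python
import hashlib

# Hypothetical approach: each respondent enters the same self-chosen
# passphrase at every survey wave. Hashing it with a study-specific salt
# yields a stable identifier that links waves without revealing identity.
STUDY_SALT = "fall-2024-course-survey"  # assumed, study-specific constant

def pseudonymous_id(passphrase: str) -> str:
    """Derive a stable, non-reversible respondent ID from a passphrase."""
    digest = hashlib.sha256((STUDY_SALT + passphrase.strip().lower()).encode())
    return digest.hexdigest()[:12]  # shortened for readability

# The same passphrase always maps to the same ID across survey waves.
print(pseudonymous_id("blue otter 42"))
print(pseudonymous_id("Blue Otter 42"))  # identical: input is normalized
```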
Question Length
Survey questions should be short and succinct, both to minimize confusion and to reduce the overall time a participant needs to complete your survey. The more respondents must read, the more likely they are to lose track of what the question is asking. In addition, longer questions lengthen the survey, which increases the response burden (see Survey Design Guide).
With these goals in mind, the recommended length for a question is approximately 16-20 words; a quick check is sketched below. Data quality may also be improved by prefacing a group of similarly focused questions with a short introductory paragraph that provides context and improves respondent understanding (Lietz, 2010).
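If your question bank lives in a spreadsheet or script, a trivial word-count check like the one below, with an assumed cutoff at the top of the 16-20 word range, can flag questions worth shortening.

```python
# Flag questions that exceed the roughly 16-20 word guideline above.
MAX_WORDS = 20  # upper end of the recommended range

questions = [
    "How often do you attend office hours?",
    "Thinking back over the entire semester, approximately how many times "
    "did you visit the campus tutoring center for help with this course?",
]

for i, q in enumerate(questions, start=1):
    n = len(q.split())
    if n > MAX_WORDS:
        print(f"Q{i}: {n} words - consider shortening")
    else:
        print(f"Q{i}: {n} words - OK")
```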
Response Rates
Incentives
Incentives have been shown to increase survey response rates, especially among younger participants (Murdoch et al., 2014). The incentive need not be contingent on completion; one study found that an immediate, pre-issued $2 cash incentive nearly tripled the response rate compared with the promise of a gift card worth up to $10 upon survey completion (Smith et al., 2019).
Optionality
“It is better to collect fewer questionnaires with good quality responses than high numbers of questionnaires that are inaccurate or incomplete” (Boynton, 2004).
Especially in an academic setting, it is not uncommon for an incentive to be withheld until a set criterion is reached (e.g., a 100% response rate from a class before extra credit is given). If respondents feel pressured by peers to complete a feedback form they do not want to complete, the responses may be counterproductive: they may select the same option the whole way through just to meet the requirement. If you choose this route, consider adding internal-validity check questions (see guide on Analyzing Survey Results) to identify respondents whose data may not be valid; a simple screen for this pattern is sketched below.
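The source guide does not prescribe a specific check, but one simple, commonly used screen is to flag "straight-lined" responses, i.e., respondents who chose the same option for every item. The sketch below assumes response data in a pandas DataFrame with hypothetical column names.

```python
import pandas as pd

# Flag respondents who "straight-line" (select the same option for every
# item), a common sign of low-effort responding when completion is
# effectively compulsory. Data and column names below are hypothetical.
responses = pd.DataFrame({
    "respondent": ["r1", "r2", "r3"],
    "q1": [4, 3, 5],
    "q2": [4, 2, 5],
    "q3": [4, 4, 5],
})

item_cols = ["q1", "q2", "q3"]
# A row with exactly one distinct value across all items is straight-lined.
responses["straight_lined"] = responses[item_cols].nunique(axis=1) == 1

print(responses[["respondent", "straight_lined"]])
```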
Pilot Surveys
Rarely is anything flawless the first time around, and surveys are no exception. Before launching your survey to your target sample pool for final data collection, consider putting it through a trial run in the form of a pilot survey. A pilot survey is a small-scale test designed to give you information on the performance and design of the survey itself rather than to gather data for your research question. Pilot surveys can also inform the selection of closed-ended response options: their small scale simplifies the administration, coding, and analysis of open-ended questions, so you can use the insight gleaned from open-ended answers to design a better closed-ended survey for your larger population. Note that a pilot survey should always be given to a group that is representative of your sample population.
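As one illustration of how pilot data can feed closed-ended design, the sketch below tallies frequent terms in hypothetical open-ended pilot answers. The answers and stop-word list are assumptions, and a frequency count is only a starting point, not a substitute for hand-coding the responses.

```python
from collections import Counter
import re

# Tally frequent terms in open-ended pilot responses to suggest candidate
# options for a closed-ended question. Stop-word list is a minimal assumption.
STOP_WORDS = {"the", "a", "an", "i", "to", "of", "and", "it", "was", "my"}

pilot_answers = [
    "The lab reports took too much time",
    "Too much time spent on lab reports",
    "Exams felt rushed and the lab was confusing",
]

words = Counter(
    w for answer in pilot_answers
    for w in re.findall(r"[a-z']+", answer.lower())
    if w not in STOP_WORDS
)
print(words.most_common(5))  # e.g. [('lab', 3), ('time', 2), ...]
```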
References
Boynton, P. M. (2004). Administering, analyzing, and reporting your questionnaire. BMJ (Clinical Research Ed.), 328(7452), 1372–1375. https://doi.org/10.1136/bmj.328.7452.1372
Bunce, D. M., & Cole, R. S. (Eds.). (2008). Nuts and Bolts of Chemical Education Research (Vol. 976). American Chemical Society. https://doi.org/10.1021/bk-2008-0976
DeCarlo, M. (2018). Types of surveys. Scientific Inquiry in Social Work. https://pressbooks.pub/scientificinquiryinsocialwork/chapter/11-3-types-of-surveys/
Lietz, P. (2010). Research into Questionnaire Design: A Summary of the Literature. International Journal of Market Research, 52(2), 249–272. https://doi.org/10.2501/S147078530920120X
Murdoch, M., et al. (2014). Impact of different privacy conditions and incentives on survey response rate, participant representativeness, and disclosure of sensitive information: A randomized controlled trial. BMC Medical Research Methodology, 14, 90. https://doi.org/10.1186/1471-2288-14-90
Porter, S. R., Whitcomb, M. E., & Weitzer, W. H. (2004). Multiple surveys of students and survey fatigue. New Directions for Institutional Research, 2004(121), 63–73. https://doi.org/10.1002/ir.101
Rolstad, S., Adler, J., & Rydén, A. (2011). Response Burden and Questionnaire Length: Is Shorter Better? A Review and Meta-analysis. Value in Health, 14(8), 1101–1108. https://doi.org/10.1016/j.jval.2011.06.003
Smith, M. G., Witte, M., Rocha, S., & Basner, M. (2019). Effectiveness of incentives and follow-up on increasing survey response rates and participation in field studies. BMC Medical Research Methodology, 19(1), 230. https://doi.org/10.1186/s12874-019-0868-8
Uhlig, C. E., Seitz, B., Eter, N., Promesberger, J., & Busse, H. (2014). Efficiencies of Internet-Based Digital and Paper-Based Scientific Surveys and the Estimated Costs and Time for Different-Sized Cohorts. PLOS ONE, 9(10), e108441. https://doi.org/10.1371/journal.pone.0108441
Weigold, A., Weigold, I. K., & Natera, S. N. (2019). Response Rates for Surveys Completed With Paper-and-Pencil and Computers: Using Meta-Analysis to Assess Equivalence. Social Science Computer Review, 37(5), 649–668. https://doi.org/10.1177/0894439318783435
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.