Eval Rev. 2026 Jan 23:193841X261416558. doi: 10.1177/0193841X261416558. Online ahead of print.
ABSTRACT
Many evaluations of human services programs rely heavily on survey follow-up to collect outcome data from randomized controlled trials. Evidence clearinghouses will give their highest grade of evidence quality to such evaluations only if attrition is low. Declining survey response rates, particularly in the control group, are making it more difficult for evaluators to meet this clearinghouse standard. Hendra and Hill (2019) suggested that it might be better to settle for substantially lower response rates, finding that the pursuit of high target response rates adds time and money while reducing nonresponse bias little if at all. This paper reports on a new approach to boosting response rates that allows reluctant respondents to complete a shortened version of the data collection instrument. Although the method is effective at increasing the response rate and costs less than traditional approaches for boosting response rates, this paper suggests that the data collected from reluctant respondents with the new method probably did not substantially reduce nonresponse bias on impacts, a finding broadly consistent with that of Hendra and Hill. On the other hand, the method did reveal some important differences in the means of some outcomes within the study arms, a finding that could be important for descriptive surveys.
PMID:41574576 | DOI:10.1177/0193841X261416558