A blockbuster recent study considered the question of whether “nudging” interventions scale to the state and national levels and found that they did not. “Nudging at Scale: Experimental Evidence from FAFSA Completion Campaigns” conducted a study on two samples (one national, one statewide) covering more than 800,000 students and found “no evidence that different approaches to message framing, delivery, or timing, or access to one-on-one advising affected campaign efficacy.” The research has important implications for behavioral nudging as an approach but also suggests, “While scaling interventions locally is a costlier and more labor-intensive approach to scale, by maintaining a stronger connection to students as recipients, the sustenance of positive impacts could justify greater costs.” That message should be heartening to NCAN members.
Building from research that many NCAN members are familiar with around text message nudging, the authors wanted to investigate whether these types of interventions could scale up “globally” rather than “locally” (which the authors define as operating below the statewide level). The authors note, “We currently have, however, limited evidence on whether such interventions maintain such efficacy at scale and what effective pathways to scale might be (i.e., global versus local scale up). Additionally, we have little evidence regarding the specific mechanisms that explains these interventions’ efficacy.”
The study uses students sampled from the Common Application and from a “large state” application portal and randomly assigns these students to a number of different treatments. “Treated students received messages encouraging them to complete the FAFSA early to maximize the financial aid they received,” and the authors “randomly varied the messages along multiple dimensions, including their: behavioral framing; delivery channel (mail, e-mail, or text message); offer of one-on-one advising assistance; and a social nudge to encourage peers to complete the FAFSA as well.” Despite all of these variations in treatment, the finding was the same: “none of these variations ultimately mattered for student outcomes, suggesting that some other distinguishing feature(s) between prior studies and our interventions accounts for positive effects in the more localized interventions in this area.” This was true in both the national and the statewide samples.
The paper posits three possible explanations for the null finding:
“Most prior work involved a local partner with closer connections to and knowledge of treated students. Local partners may know something important about their students and such students may react differently to messages from partners they feel are specifically invested in them or their communities.”
“The global scale-up in this study implied messaging content was more generic and less personalized to students than in prior interventions, perhaps resulting in lower salience for students.”
“Current cohorts of students may have better information about FAFSA completion than did previous cohorts, so that there exist fewer students for whom nudge campaigns would make a difference on the margin.”
Interestingly, the authors note that one approach they “cannot rule out as definitively” is “making one-on-one advising available to students.” Although offering students in the Common Application sample individual advising did not have a positive effect on outcomes, the confidence interval overlaps with “impacts of a similar magnitude that more local interventions have found from text-based nudge approaches” (those studies with which most NCAN members will be familiar).
The authors suggest that “the possibility that large-scale nudging with the option of two-way advising could have a positive impact is broadly consistent with the body of research on nudge interventions in postsecondary education.” But they do note that it is infeasible to test this intervention given that it would require staffing “a sufficient number of advisors to provide meaningful two-way advising to hundreds of thousands of students.”
Given the efficacy prior research has found for text-based nudging at the local level, and the failure to find a similar effect globally, the authors hypothesize two reasons localized interventions might be more effective:
“First, if participants infer that an intervention is delivered broadly, the salience and value of the campaign for any one recipient may be diluted. In the context of our study, students presumably had a weaker connection to the Common Application and the Large State partner than they presumably did to smaller, community-based organizations that were the ostensible message sender in many prior interventions.
“Second, the messages were primarily generic and one-way, one of the limitations of a global approach to scale, so students may very quickly have concluded that they were receiving the same outreach as many other students. The lack of a more direct relationship with the sender and the generic nature of outreach could both explain our lack of impacts.”
This research speaks to, and in some ways confirms, what many NCAN members might already feel or believe: There are few, if any, silver bullets in college access and success work. This may be a matter of perspective, but it may be time for a reconfiguration of how we perceive “scale.” School-, district-, or communitywide implementations of an intervention are certainly conducted at a larger scale than, for example, classroom- or individual-level advising, while still maintaining the human connection that is the bedrock of working with students and their families. Findings from this study suggest that moving away from that connection and toward a more impersonal (but still well-meaning) intervention loses something essential to the process.