Being selected for FAFSA verification decreased students’ likelihood of immediately enrolling in college the summer or fall following high school graduation by 3.8 percentage points. For students underrepresented in postsecondary education, verification’s effect was even more deleterious, reducing the likelihood of enrollment by 5.8 percentage points.
Additionally, students who waited longer in the FAFSA cycle to file and were selected for verification saw a larger negative effect on their enrollment.
These are some of the findings from new research from Jason C. Lee, Madison Dell, Manuel S. González Canché, Alex Monday, and Amanda Klafehn.
Buckle up, readers. This one is going to have some methodology in it.
The researchers rely on a quasi-experimental study design that attempts to control for a wide array of student characteristics. By holding those characteristics constant and varying only whether a student was selected for verification, the team can estimate verification’s effect on enrollment.
Those characteristics go beyond the typical race, ethnicity, gender, and first-generation status to include family size and composition, academic characteristics (ACT, GPA, and class rank), postsecondary aspirations, and self-identified needs in college.
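The paper’s own code isn’t reproduced here, but to make the approach concrete, below is a minimal sketch of one common way to hold covariates constant: inverse-propensity weighting in Python. The DataFrame `df`, the `verified` and `enrolled` flags, and the covariate column names are hypothetical assumptions for illustration, not the authors’ actual variables or implementation.

```python
# Illustrative sketch only -- not the authors' code. Assumes a DataFrame `df`
# with a 0/1 treatment flag `verified`, a 0/1 outcome `enrolled`, and the
# kinds of covariates described above (hypothetical column names).
import statsmodels.api as sm

covariates = ["act_composite", "hs_gpa", "class_rank", "first_gen",
              "family_size", "black", "hispanic", "female"]

# 1) Model the probability of being selected for verification (propensity score).
X = sm.add_constant(df[covariates])
propensity = sm.Logit(df["verified"], X).fit(disp=0).predict(X)

# 2) Weight students so the selected and non-selected groups look alike on
#    these covariates (inverse-probability-of-treatment weights).
weights = df["verified"] / propensity + (1 - df["verified"]) / (1 - propensity)

# 3) Estimate the effect of verification on enrollment with a weighted regression.
effect_model = sm.WLS(df["enrolled"],
                      sm.add_constant(df["verified"].astype(float)),
                      weights=weights).fit()
print(effect_model.params["verified"])  # approximate gap in enrollment rates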
The study draws on data from multiple classes of Tennessee public high school graduates “who filed the FAFSA and took the ACT exam at least once.” The researchers examined those students’ postsecondary enrollment outcomes using data from the
National Student Clearinghouse.
After balancing the groups of students who were (treatment group) and weren’t (control group) selected for verification, the researchers noted that students in the treatment group were much more likely to be Black and first-generation. The treatment group
was also more likely to be on the lower end of the high school GPA distribution and had a composite ACT score that was 1.5 points lower than that of the control group.
The study then looks at the effect of being selected for verification for students who completed their FAFSA in the first, second, or third month of the FAFSA cycle and beyond. Although the overall effect of being selected for verification was the aforementioned 3.8 percentage point drop in enrollment, students who completed their FAFSAs in the first month saw their enrollment likelihood decrease by just 2.6 percentage points. Students filing in the second (-5.2 p.p.) and third (-5.3 p.p.) months saw much steeper drops, while students completing beyond the third month of the cycle saw their enrollment likelihood drop 6.4 percentage points.
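To see what that subgroup comparison looks like in practice, here is an illustrative tabulation, not the authors’ covariate-adjusted estimates, of the enrollment gap between selected and non-selected students within each filing-month group, again assuming hypothetical column names.

```python
# Illustrative sketch only: the verification "penalty" by filing-month group,
# assuming hypothetical columns `filing_month_group` (1, 2, 3, "4+"),
# `verified`, and `enrolled` in a DataFrame `df`.
by_month = (
    df.groupby(["filing_month_group", "verified"])["enrolled"]
      .mean()
      .unstack("verified")          # columns: 0 = not selected, 1 = selected
)
by_month["gap_pp"] = (by_month[1] - by_month[0]) * 100  # percentage points
print(by_month["gap_pp"])
# The paper's adjusted estimates follow this pattern:
# month 1: -2.6 p.p., month 2: -5.2, month 3: -5.3, beyond month 3: -6.4
```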
The authors hypothesize that the timing’s effect “may be attributable to the reduced time to provide the additional documentation required for verification, or it may be that students who file later are less certain about their postsecondary plans and,
thus, are more negatively impacted by another impediment in the college access pipeline.”
In a second analysis, the researchers examined not just the timing of FAFSA completion but also the student characteristics mentioned above. They created an index based on demographic characteristics and split students into quintiles according to how likely they were to enroll at all based on those characteristics. Students in the lowest two quintiles of that index were much less likely to enroll (-5.8 p.p.; -6.2 p.p.) if they were selected for verification.
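The paper doesn’t spell out the exact construction of that index, but a sketch of one plausible approach, predicting enrollment from demographic characteristics and cutting the predictions into quintiles, might look like this (column names and modeling choices are assumptions, not the authors’ method):

```python
# Illustrative sketch only: one way to build an enrollment-likelihood index and
# split students into quintiles. Column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

demo_cols = ["first_gen", "black", "hispanic", "female", "family_size"]

# Fit an enrollment model among non-selected students, then score everyone
# with that model to form the index.
control = df[df["verified"] == 0]
index_model = sm.Logit(control["enrolled"],
                       sm.add_constant(control[demo_cols])).fit(disp=0)
df["enroll_index"] = index_model.predict(sm.add_constant(df[demo_cols]))

# Quintile 1 = least likely to enroll on the index, quintile 5 = most likely.
df["index_quintile"] = pd.qcut(df["enroll_index"], q=5, labels=[1, 2, 3, 4, 5])

# Within each quintile, compare enrollment for selected vs. not-selected students.
gaps = (df.groupby(["index_quintile", "verified"])["enrolled"].mean()
          .unstack("verified"))
print((gaps[1] - gaps[0]) * 100)  # raw gaps in percentage points by quintile
```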
As far as I’ve seen, this paper provides the first evidence that the timing of verification matters in addition to selection itself. It also lends support to the field’s rule of thumb that “the earlier the better” for FAFSA completion.
It has been a big few months for news about FAFSA verification. First, The Washington Post published research showing that students from predominantly Black and Hispanic ZIP codes were, respectively, 1.8 and 1.4 times more likely than students from predominantly White ZIP codes to be selected for verification. Additionally, NCAN recently published new data from Federal Student Aid that showed a verification melt estimate of 7.2%, which caused a significant downward revision of our previous verification melt figure.
The new Tennessee data don’t necessarily conflict with that FSA figure. The 7.2% FSA identified refers to a decreased likelihood of receiving subsidized aid (a Pell Grant or Subsidized Direct Loan), not of enrolling at all.
It’s unclear how the external validity of this new research holds up given that its data come from only one state. The authors use a rigorous, high-quality method and data from a number of years, but even they suggest that “future research extend our analyses
to additional states and student populations.”
In any case, the new data coming out on verification from multiple sources are a welcome sight after years of the process serving as a black box for advocates, students, and their families. These new reports fill in gaps in our understanding of how going
through verification adversely impacts students, especially those whose postsecondary pathways are most fragile to begin with. With these data in hand, we can continue to advocate for changes that will mitigate verification’s effects.