By Cody Christensen, Doctoral Student, Vanderbilt University, and Jason D. Delisle, Resident Fellow, American Enterprise Institute
Lawmakers have long searched for a better way to measure college quality. Past efforts, such as the federal government’s student loan Cohort Default Rate and Gainful Employment regulations, did this by sanctioning colleges where many students defaulted on their loans or did not earn high salaries relative to the amounts they borrowed.
But these measures have limitations. Judging colleges this way may inadvertently discourage them from enrolling low-income students who are at greater risk of dropout and default. The easiest way for colleges to avoid sanctions is to not enroll at-risk
students in the first place. And a college that sticks to enrolling the most economically and academically advantaged students will always perform highly on these measures. That helps explain why the higher education community has become fixated on a different, theoretically better, method to judge college quality: economic mobility rates.
Unlike past accountability efforts, a mobility rate can be designed to reward colleges for enrolling large numbers of low-income students and moving them to higher income levels. Economic mobility is appealing because it captures two goals – expanding
access for low-income students, and promoting students’ earning outcomes – in a single metric. Moreover, colleges that mostly enroll students from high-income families who then go on to earn high incomes do not have an advantage. Mobility, by design,
measures the share of low-income students who move to the highest income quintiles after college.
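To make the metric concrete, here is a minimal sketch of how such a mobility rate might be computed, modeled on the access-times-success decomposition used in the Chetty et al. mobility data. The function name, inputs, and figures are hypothetical, for illustration only.

```python
# Illustrative sketch of an economic mobility rate (hypothetical numbers).
# The rate is the share of ALL students at a college who both came from
# the bottom income quintile and reached the top quintile after college.

def mobility_rate(n_students, n_bottom_quintile, n_bottom_to_top):
    """Mobility rate = access * success."""
    access = n_bottom_quintile / n_students        # low-income enrollment share
    success = n_bottom_to_top / n_bottom_quintile  # of those, share reaching the top
    return access * success

# A college enrolling 1,000 students, 200 from the bottom quintile,
# 60 of whom later reach the top quintile:
rate = mobility_rate(1000, 200, 60)
print(round(rate, 3))  # 0.06, i.e., a 6 percent mobility rate
```

Note how the two goals described above interact: a college can raise its rate either by admitting more low-income students (access) or by improving their earnings outcomes (success), and a selective college that enrolls few low-income students scores poorly no matter how well its graduates earn.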
Like prior accountability metrics, however, economic mobility still has a major drawback that has not received much attention. Economic mobility rates, like the measures they are supposed to replace, are also heavily correlated with factors far outside
of a specific college’s control. Thus, rewarding colleges based on their students’ economic mobility may actually be rewarding colleges for circumstances they had nothing to do with. Ignoring this issue creates a skewed view of “college quality” and
risks creating even more unintended consequences.
Geographic location is one factor that is strongly associated with a college generating high levels of economic mobility. Our recent study found that 75 of the 100 top-mobility colleges in the nation are located in just three states – New York, California, and Texas – even though these states are home to only 23 percent of the nation’s colleges. Another study found that over half of all high-mobility colleges are located in New England and along the Atlantic coast.
Did the higher education institutions in these areas discover the secret to generating upward mobility? Maybe. But policymakers should also consider the possibility that students who attend these colleges greatly benefit from surrounding demographic and
labor market conditions. Indeed, researchers have shown that variations across geographic areas
– namely, the amount of income inequality in a given region – greatly influence the degree to which colleges appear to be “successful” at generating economic mobility relative to other colleges.
This dynamic may explain why New York City has dozens of colleges with high economic mobility rates, while states like Utah or Iowa have none. Colleges with the highest mobility rates are typically located in urban areas with robust labor markets. Therefore,
their students have greater access to high-paying jobs in the surrounding area, and thus, may be more likely to climb the income ladder than students who attended colleges in other settings.
Climbing the income ladder is a good outcome in itself, but it may not be evidence that the college has developed a winning mobility formula. Put another way, students often climb the income ladder as a result of attending college, and higher education institutions deserve credit for helping promote these valuable outcomes. But different levels of economic mobility among different colleges do not imply that one college’s efforts are necessarily superior to another’s – in reality, students’ economic outcomes are influenced by a variety of external factors.
Advocates and journalists rarely discuss these limitations, which has only fueled the craze to measure economic mobility. The excitement has grown in recent years, in large part because of new economic mobility data published in 2017 by Harvard economist Raj Chetty and a team of researchers. Lawmakers have rushed to propose legislation that would incorporate economic mobility
into the federal higher education accountability scheme. Popular college rankings, including U.S. News & World Report’s, now include “social mobility” categories in their scoring methodologies.
All of these actions are well intentioned. But they disregard the fact that not all colleges begin from the same starting point. In many cases, colleges are limited by the types of students that apply or by the economic conditions of their surrounding
area. Efforts that link economic mobility to high-stakes federal policies seem likely to repeat the mistakes of past accountability metrics unless policymakers ensure they are truly measuring what they intend to measure.