By Bill DeBaun, Senior Director of Data and Strategic Initiatives
Reading time: Seven minutes
National College Attainment Network (NCAN) staff members field inquiries every day from member organizations (and organizations that aren’t members yet but should strongly consider it). The answers to these questions are often relevant to a broader
audience and worth sharing. Below are some questions that have popped into my inbox over the past month. They’ve been anonymized and edited for length and clarity, and my responses appear below each question. Have a burning question you’d like to
see answered? Email me at debaunb@ncan.org and I will do my best to get you the information you need!
Q: Has there been a change to the timeline for calculation of graduation rates for degree granting institutions from six to eight years? This is on the US Department of Education's site:
Graduation Rate
Graduation rate is presented differently for degree granting and non-degree granting schools.
The graduation rate for degree granting schools is the proportion of entering students that graduated at this school within eight years of entry, regardless of their full-time/part-time status or prior postsecondary experience. Graduation is measured eight years after entry, irrespective of the award sought or award obtained.
The graduation rate for non-degree granting schools is the proportion of full-time, first-time students that have graduated at the same school where they started college within 150% of normal program completion time (e.g., within six years for a bachelor's degree, or within 15 months for a 10-month certificate).
A: The above is from the Glossary for the College Scorecard, and getting an answer to the question above requires a nuanced understanding of the Integrated Postsecondary
Education Data System (IPEDS), whose data populates much of the Scorecard.
There are three different IPEDS surveys that consider students’ postsecondary completion outcomes. The oldest one is the Graduation Rates (GR) survey, which has been around since the late 1990s. The middle child is the Graduation Rates 200% (GR200) survey,
which debuted in the 2009-10 administration. The newest one is the Outcome Measures (OM) survey, which debuted in the mid-2010s. There are some key differences among the three surveys, but the biggest is that the GR and GR200 surveys only consider the outcomes of first-time, full-time (FTFT) students, while the OM survey considers “Pell Grant Recipient Status x First-Time/Non-First-Time Status x Full-Time/Part-Time Status.”
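To make that cross concrete, here’s a minimal sketch (the labels are mine, but the three binary statuses come straight from the OM survey definition above). Crossing Pell status, first-time status, and attendance intensity yields eight reporting cohorts:

```python
from itertools import product

# The OM survey crosses three binary student statuses, yielding eight cohorts.
# Labels are illustrative; the statuses themselves come from the OM definition.
pell = ["Pell recipient", "non-Pell"]
entry = ["first-time", "non-first-time"]
intensity = ["full-time", "part-time"]

for cohort in product(pell, entry, intensity):
    print(" x ".join(cohort))  # e.g., "Pell recipient x first-time x full-time"
```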
The GR survey looks at graduation rates at 100% and 150% of “normal time,” that is, completion in two or three years, respectively, for associate degrees, and completion in four or six years, respectively, for bachelor’s degrees. The GR survey at 150% of
normal time is the only one of the three that breaks students’ outcomes out by race, ethnicity, and type of degree sought. The GR200 survey only looks at completion within 200% of normal time, with no subgroups. The challenge with only considering FTFT students,
as the GR and GR200 surveys do, is that many, many students are neither first-time nor full-time. So the OM survey gives a more nuanced view of completion outcomes for students who are not FTFT, but at the cost of not reporting outcomes by any
demographic characteristics.
A more detailed table appears in Exhibit Three on this page from IPEDS.
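For readers who like to see the arithmetic spelled out, here’s a minimal sketch of those completion windows. Nothing below goes beyond the 100%/150%/200% multipliers described above:

```python
# "Normal time" in years for common credentials, per the survey definitions above.
NORMAL_TIME = {"associate": 2, "bachelor's": 4}

def completion_window(credential: str, pct_of_normal_time: int) -> float:
    """Years allowed to complete at a given percentage of normal time."""
    return NORMAL_TIME[credential] * pct_of_normal_time / 100

for credential in NORMAL_TIME:
    for pct in (100, 150, 200):  # GR reports 100% and 150%; GR200 reports 200%
        years = completion_window(credential, pct)
        print(f"{credential}: {pct}% of normal time = {years:g} years")
# associate: 2, 3, and 4 years; bachelor's: 4, 6, and 8 years
```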
Back to the original question: there hasn’t been a change in the timeline for graduation, per se; we just have more options via IPEDS and need to be careful about understanding the data source. The College Scorecard is using the OM survey for all degree-granting
schools and the GR survey for schools with only sub-associate programs. Anywhere you see “part-time” or “returning” as options, you can be sure those data are coming from the Outcome Measures survey, whereas anywhere you see data disaggregated by race,
ethnicity, or gender, they are coming from the Graduation Rates survey. If you’re still with me on this explanation, let me just add that it would be really helpful if the data dictionary for the Scorecard provided more information on data sources
beyond just “IPEDS,” given the differences in the surveys.
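And if you want to inspect what the Scorecard actually reports for a given school, the public College Scorecard API is a reasonable place to poke around. A hedged sketch follows: the endpoint and the api_key/fields parameters are the API’s real interface, but the specific completion field name below is illustrative, so confirm field names against the Scorecard data dictionary before relying on them.

```python
import requests  # third-party; pip install requests

# Sketch: pull a school's completion-related fields from the College Scorecard API.
# You need a free API key from https://api.data.gov/signup/.
API_URL = "https://api.data.gov/ed/collegescorecard/v1/schools"

params = {
    "api_key": "YOUR_API_KEY",
    "school.name": "Example University",  # filter by school name
    "fields": ",".join([
        "school.name",
        # Illustrative field name; check the Scorecard data dictionary for the
        # exact dev-friendly names and the IPEDS survey each one comes from.
        "latest.completion.consumer_rate",
    ]),
}

resp = requests.get(API_URL, params=params, timeout=30)
resp.raise_for_status()
for school in resp.json().get("results", []):
    print(school)
```

Pairing each field with the IPEDS survey it comes from is exactly the kind of breadcrumb a richer data dictionary would provide.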
Q: In the wake of the Harvard and UNC Supreme Court decisions, do you know of any pre- and post-ban application/enrollment data for states that banned race-based affirmative action?
A: Miriam Greenberg, the Senior Director of the Strategic Data Project at Harvard University’s Center for Education Policy Research, sent out a very thorough email about this very topic, based on a blog post from Dr. Katharine Meyer at the Brown Center on Education Policy at the Brookings
Institution. The summary reads in part:
One study examines statewide affirmative action bans in Arizona, Michigan, Nebraska, and Oklahoma. These states experienced a slight
decline in the enrollment of underrepresented racial-ethnic minority (URM) students following the bans at public four-year institutions and state public flagship universities.
Another study looks at bans in California, Texas, Washington, and Florida in the 1990s. Researchers found “persistent declines in the share
of underrepresented minorities among students admitted to and enrolling in public flagship universities in these states.”
In a third study, a researcher estimates the effects of affirmative action bans on college enrollment, educational
attainment, and demographic composition by exploiting time and state variation. He finds that the bans decrease URM enrollment and increase White enrollment at selective colleges.
Finally, a researcher found that after California’s ban on affirmative action (Prop 209), there was a shift of URM students toward lower-quality undergraduate
programs, resulting in decreased graduation rates in STEM fields for URM students and an approximate 5% decline in their early career incomes, with Latino/a students experiencing the most significant declines.
Q: I'm working with a group of community colleges that are focusing on supports for rural male students. I'm on the lookout for some surveys/assessments that we can incorporate into the design, especially those that touch on sense of connection to school and career aspirations.
A: I wish I had a better, or at least more complete, response for you here, but I’m going to give it the old college try (pun, decidedly, intended). For your specific inquiry, you might consider the National Survey of Student Engagement (NSSE). NCAN has fielded enough inquiries about surveys over the years, most recently about model senior exit surveys for a K-12 environment, that we might consider hosting a repository
of open-source/fair-use surveys for members to access. NCAN has hundreds of members across the country, and it often feels like they’re constantly reinventing the wheel with survey development. It would be great if we had a repository of items or
instruments that members could draw from. Such a repository would fall short of being validated or polished in the way, say, the RAND Corporation’s surveys are, but it would still probably be a level-up for most members.
All of the above said, if you’re reading this and have an idea about a good survey or assessment that would fit the above use case, please contact me at debaunb@ncan.org and I’ll connect you with the
question-asker above.
Q: I am proposing a study to show that a fit and match curriculum and intentional fit and match counseling will increase on-time graduation rates. I found this article, which is a wealth of great information, and I wanted to see if you knew of a curriculum in place, maybe one that College Advising Corps uses to facilitate its fit and match strategy. If you aren't aware of anything like that, do you have a suggestion of someone or an organization I can reach out to? I can certainly use the information gleaned from this article, but I didn't know if some specific lessons existed.
A: For as much as the field talks about the principles of fit and match as being important to advising, a spin through Google Scholar and some other key college access and attainment literature reviews shows precious little
connecting them to students’ postsecondary outcomes, so this study would be a welcome addition to the literature! The scarcity may be because “fit” as a concept is difficult to measure, and in general there aren’t many organizations or systems focusing
on “fit and match” at scale. This is another item I would throw out to the member audience and readers more broadly. My first thought was the KIPP Network and this College & Career Match playbook; of everything that turned up, it was the most responsive to the original ask. Like the question above, though, if you have a resource that you think the question-asker should see, please reach out to me at
debaunb@ncan.org, and I will happily make a match.
That’s it for this edition. Join us next time as we open up the NCAN Mailbag and connect members with the answers they need!