In a recent report, we described changes in student-level net costs at Virginia’s public colleges and universities and their effects on student outcomes, particularly for the poorest individuals. In our most robust analysis, for example, we found that a $400 decrease in net cost for Pell-eligible students caused a 5.9-percentage-point increase in the rate at which those students stayed for a second year of college.

I use the word “caused” carefully – in the social sciences, to say that one thing causes another requires a methodology that rules out other possible causes. Many researchers, because of data limitations, cannot make such claims (or at least, should not). Fortunately, we were able to do so in our Virginia study.

In the absence of an experimental research design, it is hard to say with certainty that a particular change in the net cost of attendance causes a change in student outcomes. How do we know that the students experiencing changes in net costs are not systematically different from those who are unaffected by such changes? Standard regression analysis can control for observed differences in things like academic background and socioeconomic status, but unobserved differences may still bias the results. We can never be sure we have accounted for every other factor that might explain the differences in outcomes, and so we are left with a correlation, not a cause.

Our report addresses these concerns without the need for a randomized, controlled experiment. How? By taking advantage of a shock in external conditions associated only with changing net costs. This shock effectively creates a natural experiment whereby net costs for a particular group of students are altered while every other condition remains unchanged.

In our report, this external “shock” was an increase between 2008 and 2009 in the federal income threshold required to automatically qualify for a zero expected family contribution (EFC), from $20,000/year to $30,000/year – the largest single-year increase since 2001. This resulted in a large number of students qualifying for the maximum Pell Grant in 2009 who would otherwise have received lower awards and, as a result, experienced higher net costs. We then compared the difference in outcomes before and after this policy change for a group of students affected by the intervention and for another group unaffected by the intervention but nearly identical in all other aspects to the first group (hence the name for this methodology: difference-in-differences).

Our “treatment” group consisted of students with family incomes between $20,000 and $30,000 and the “control” group consisted of students with incomes between $30,001 and $35,000 (who thus barely missed the cutoff). As a result, any relative change in the first-year retention rate of students in the treatment group can be attributed to the policy change.
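The difference-in-differences logic can be sketched in a few lines of code. The retention rates below are hypothetical, chosen only to illustrate the arithmetic – they are not the report’s actual group-level figures, though they are set up so the resulting estimate matches the 5.9-percentage-point magnitude described above:

```python
# Hypothetical first-year retention rates, keyed by (group, period).
# These numbers are illustrative, not the report's actual data.
retention = {
    ("treatment", "pre"): 0.700,   # incomes $20,000-$30,000, before the 2009 EFC change
    ("treatment", "post"): 0.765,  # same group, after the change
    ("control", "pre"): 0.710,     # incomes $30,001-$35,000, before the change
    ("control", "post"): 0.716,    # same group, after the change
}

# First difference: the change over time within each group.
treatment_change = retention[("treatment", "post")] - retention[("treatment", "pre")]
control_change = retention[("control", "post")] - retention[("control", "pre")]

# Second difference: the treatment group's change net of the trend
# shared with the (nearly identical but unaffected) control group.
did_estimate = treatment_change - control_change
print(f"Difference-in-differences estimate: {did_estimate:.3f}")
```

The subtraction of the control group’s change is what does the causal work here: it nets out any trend common to both groups (economic conditions, institutional changes), so that only the effect attributable to the policy remains, under the assumption that the two groups would otherwise have moved in parallel.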

Indeed, this policy (which decreased net costs by an average of $400 for those students in the treatment group) caused a statistically significant increase of 5.9 percentage points in the probability of first-year retention. This effect was substantially larger than what we observed in our conventional regression models, where we found that an increase in net costs of $1,000 was associated with a decrease in the first-year retention rate by somewhat less than 1 percentage point.

What can we conclude from this exercise? First, it is clear that poorer students do respond to even small changes in the amount of financial aid they receive. Second, students are responsive not only to direct increases in need-based grants, but also to policies, like the EFC change, that indirectly affect financial aid awards; in fact, it may be easier for policymakers to find agreement on such indirect changes. The difference-in-differences results also suggest that traditional regression methods may underestimate the “real” impact that a change in net costs has on student outcomes. Understanding the true effect becomes even more important in light of recent increases in these costs – averaging nearly $2,000 between 2007 and 2012 for students in the lowest income quintile – that make it more challenging for students to succeed in Virginia’s public four-year institutions.