Last October, the federal Department of Education announced the launch of Educational Quality through Innovative Partnerships (EQUIP), a pilot program inviting partnerships between non-traditional education providers and accredited institutions of higher education. A key component of the program is its target population: low- and moderate-income students. Under a provision of the Higher Education Act, accredited institutions are ineligible to receive federal financial aid for programs in which 50 percent or more of the content and instruction is provided by a non-accredited party. But through the Experimental Sites Initiative, which "tests the effectiveness of statutory and regulatory flexibility for participating institutions disbursing Title IV student aid," the provision will be waived, permitting low- and moderate-income students to use federal financial aid, such as Pell Grants, to enroll in non-traditional programs, including coding bootcamps, online programs, and short-term certificate programs.

EQUIP is also experimental in the way the Department plans to assess the program's impact on student outcomes. Rather than employ a single agency to oversee the entire program, the Department has required that each pairing "be reviewed and monitored by an independent, third-party quality assurance entity (QAE)." (The Department announced the eight selected partnerships, each comprising an accredited institution, a non-traditional provider, and a QAE, in mid-August 2016.)

While the experiment is not exclusively focused on increasing access to coding bootcamps, that appears to be a priority: four of the eight partnerships involve bootcamps. Despite questions about the accuracy of the outcomes some bootcamp providers advertise, industry-wide averages are impressive. According to Course Report's 2015 survey of coding bootcamp alumni, 66 percent of graduates reported "being employed in a full-time job requiring the skills learned at bootcamp, with an average salary increase of 38%." The Flatiron School, a New York City-based coding school that had Moody's independently evaluate its student outcomes, reported a 99 percent graduation rate in 2015; among job-seeking graduates, 95 percent accepted a job within 120 days.

Enrolling in a coding bootcamp is expensive: the average price of a full-time eleven-week course is more than $11,000. By extending federal financial aid to such courses, the Department is lowering the financial barrier so that low-income students, too, can attend. And depending on the program, students who pass can earn a certificate or credit toward an associate or bachelor's degree from the accredited partner, something coding bootcamps could not offer without such a partnership.

There are, however, reasons for concern. First, the students who apply to and enroll in coding bootcamps are a self-selecting group, not representative of the average college student. The Flatiron School, for instance, accepts only 6 percent of applicants, and accepted students must complete 150 hours of coursework before they even step into a classroom. More generally, "the typical attendee is 31 years old, has 7.6 years of work experience, [and] has at least a Bachelor's degree." While bootcamp outcomes have been mostly positive, it should not be assumed that students from very different backgrounds will achieve similar results.

While the eight partnerships selected for EQUIP have been carefully vetted, that may not be the case in a scaled-up version of the program. EQUIP could enable a new crop of providers to take advantage of impressionable students with access to federal financial aid dollars. From this perspective, a lot is riding on the quality assurance aspect of the experiment. The QAEs are supposed to both approve the initial program design and monitor its outcomes; whether they can do so effectively will determine in large measure whether student dollars are well spent.

An important feature of any experimental program is that its evaluation reveals whether a program of its kind is effective at what it aims to achieve. There are questions, however, as to whether the current evaluation design can accurately measure that. As my colleague Elizabeth Davidson has pointed out in a previous blog post, each QAE will likely establish unique goals for the program it monitors and determine its own methods for assessing whether those goals are met. Without consistent standards, it will be difficult to compare partnerships with one another, or to judge the efficacy of EQUIP as a whole.

The first year of the program is expected to be limited in size and scope: it will include only 1,500 students, and Pell Grant spending is capped at $5 million, a tiny fraction of the billions the federal government allocates annually. Hopefully, even this limited trial will provide the Department with sufficient data to determine whether the program should be scaled up or discontinued. Only time will tell.