At a recent Capitol Hill forum, “Innovation in Higher Education: How Data and Evidence Can Improve Student Outcomes,” panelists argued that improving higher education student outcomes is a matter of the three “Ts”: trying, testing, and tracking behavior to make sure that an approach or intervention is having a positive effect. As for the evidence required to substantiate that effect, James Kvaal, president of The Institute for College Access & Success (TICAS) and co-author of the recent bipartisan report, “Moneyball for Higher Education: How Federal Leaders Can Use Data and Evidence to Improve Student Outcomes,” said that evidence has to go beyond intuition and logic. But, he added, it is not necessarily restricted to the gold standard of a randomized controlled trial. The example offered of an effective, evidence-based program evaluated with a quasi-experimental design was the much-publicized City University of New York (CUNY) Accelerated Study in Associate Programs (ASAP) community college initiative.
A proven community college success story
ASAP started in 2007 as a small program, with a cohort of a little more than 1,000 students, and has since grown to 11 cohorts totaling almost 34,000 students. Using 150 percent of “normal time” to completion, or three years, as the measure of associate degree attainment across five cohorts (fall 2009 through fall 2012), ASAP students achieved a 52.4 percent attainment rate, compared with a 26.8 percent rate for the comparison group. At the Congressional briefing, Michael Weiss, senior associate at MDRC, which conducted the evaluation, elaborated that this one figure alone is not enough to judge the success of a program. While a completion rate of 52 percent may not seem high in itself, the effectiveness of ASAP in improving student outcomes becomes evident when it is compared with the group of non-ASAP students. A near doubling of the completion rate is certainly evidence of its effectiveness, he said.
What works is not always new
Weiss said that applying or adapting existing approaches in different ways or settings also counts as innovation. Kvaal added that we must disabuse ourselves of the view that all innovation is necessarily good, noting that new models or attempts to change outcomes must undergo empirical scrutiny. The effectiveness of using data and evidence to improve student outcomes, Kvaal continued, rests on three factors: agreed-upon definitions and measures of outcomes, including what data to collect and track; appropriate metrics; and adequate resources for implementation and evaluation. The last of these entails both the amount of resources and how they are channeled. The Moneyball report recommended that the Higher Education Act include a one percent set-aside, or “seeding innovation fund,” for evaluating innovative programs. The panelists cited scaling ASAP and Bottom Line as examples of programs to which the funds could be directed. Bottom Line provides personal mentoring and counseling services to low-income students at four-year institutions in several states; its CEO served as the third panelist.
Key takeaways of how data and evidence can improve student outcomes
The panelists agreed that, however good any one of a myriad of services and programs may be (counseling, financial assistance, mentoring, academic support, “nudging,” micro-grants, and the like), rarely will one thing alone bring about the most change. Again, ASAP was offered as an example of a multi-faceted approach with proven results. Another takeaway is the value of a tiered-evidence approach: start small, evaluate each phase, and only then disseminate results, replicate, and scale. The American Association of Community Colleges’ (AACC) Voluntary Framework of Accountability (VFA) is a tool community colleges can use to collect and analyze data for developing student success strategies.