What should an undergraduate chemistry major know by the time she graduates? How can one tell if she knows it? And how can chemistry instruction be improved to ensure that more students meet those expectations?

Such deceptively simple questions—for chemistry and every other discipline—have become an important focus of higher education leaders, accrediting agencies, and government. Yet many universities have struggled to develop robust processes for assessing student learning. Even when a central administration makes a serious effort to develop such a process, faculty participation is often pro forma.

The University of Pittsburgh is an exception. At Pitt, faculty across 350 programs are deeply engaged in a systematic approach to assessing student learning outcomes, one that has led to measurable results and significant program changes.

I spent two days in October meeting with administrators and faculty at Pitt to take a closer look at how the university has engaged its faculty in an ongoing process of student learning assessment and planning when so many similar efforts on other campuses have not taken hold.

“Making Assessment Work: Lessons from the University of Pittsburgh” delves into some of the specific practices Pitt undertook and documents the change in the university’s culture. No system is perfect, but this case study shows how Pitt’s decentralized approach, targeted at the level of coherent programs of study and coupled with strong, supportive leadership, led faculty to make assessment an important driver of program improvement.

On a programming note, this is the first in a new series of case studies on educational transformation from Ithaka S+R. Every few weeks, we will release a new report on innovative approaches that institutions have taken to improve student outcomes and control costs. Covering issues such as online education, learning analytics, and university governance, the case studies document the ways that change happens in higher education.