Academic Leadership: The Online Journal


Claire Procopio


There is an inherent tension in the U.S. system of accreditation. Historically, the system has been one of self-regulation (Brittingham, 2009). As access to higher education has grown, however, and the concomitant flow of federal money to colleges and universities has increased, the federal government and the taxpayers it represents have increasingly called for external reporting of measures of college quality. Critics of the current system would like more external oversight to create what they have variously termed a “culture of quality” or a “culture of evidence” (Bardo, 2009; Crow, 2009; Kelderman, 2009; Understanding, 2001). The most dissatisfied would like to remove regional accrediting approval as the imprimatur that authorizes federal funds; those critics would delegate the power to authorize spending public funds to some branch of the federal government (Graca, 2009). Defenders of the current system point to the power of self-regulation to establish an ongoing culture of improvement in colleges and universities more effectively than external regulation can achieve (Kelderman, 2009; Oden, 2009). For the purposes of this study, it is important to note that both critics and defenders ground their claims to offering the better path to educational quality in the same belief: that any real overhaul of educational outcome attainment will require transformed organizational cultures in higher education. This article considers both sides of the accreditation debate and uses Glaser, Zamanou, and Hacker’s (1987) Organizational Culture Survey (OCS) to create a unique data set to explore the question: to what extent does participating in regional accreditation affect perceptions of organizational culture for members of those cultures?