Most analysts agree that the success or failure of the Common Core State Standards (CCSS) hinges on implementation. But the term has been ambiguous. Advocates of CCSS talk about aligned curriculum, instructional shifts, challenging assessments that test critical thinking, and rigorous accountability systems that produce an accurate appraisal of whether students are on track to be college- or career-ready by the time they graduate from high school. These descriptions are unsatisfying. Heavy with flattering adjectives, they echo the confidence proponents have that CCSS will improve several important aspects of schooling. But such confidence may be misplaced; decades—if not centuries—of effort have already been devoted to perfecting instruction, for example. Moreover, when CCSS’s advocates talk about implementation, the term seems to cover every important activity in education outside of adopting standards. By meaning almost everything, it means nothing.
This Chalkboard post begins a series on implementation of the CCSS, with an examination of curriculum as an aspect of implementation. Future posts will look at instruction, assessment, and accountability. I start with a framework for thinking about implementation. This conceptual framework will guide the current analysis as well as future posts. I will mostly discuss CCSS’s mathematics standards, primarily because I know more about them than the ELA standards, but also because the skills and knowledge expressed in math standards have a clarity that ELA standards lack. That said, I will bring ELA standards—and standards in other subjects that CCSS does not yet encompass—into the discussion when appropriate. I will also draw on the public policy literature on implementation. The goal is to discuss the implementation of CCSS analytically.
A Framework for Thinking about Implementation
In the field of policy analysis, the classic text on implementation is Jeffrey Pressman and Aaron Wildavsky’s Implementation, published in 1973. The book’s 45-word subtitle—surely one of the longest for such an influential text—begins with the clause, “How Great Expectations in Washington Are Dashed in Oakland.” The book describes the saga of a federal redevelopment program in Oakland, California. The program’s designers started out with ample resources, broad political support, and the cooperation of all major federal, state, and local stakeholders, including powerful people in both government and the private sector. The path to successful implementation looked like a slam dunk. And yet the program failed.
What happened? The details of the program’s failure are not important here. But two big ideas that Pressman and Wildavsky highlight are generalizable to a lot of other policies, including the Common Core. Implementation involves step-by-step encounters with what Pressman and Wildavsky call “decision points,” a sequence of hurdles for the policy or program to clear. In the case of a program involving several layers of government, these decision points not only mean that the support of state and local officials must be held over time, but also that officials must make good decisions when exercising discretionary authority on the program’s behalf. Think of a child lining up several dozen dominoes, with the goal of pushing over the first domino in order to topple them all. If a single domino doesn’t do its job, the last domino will not fall. Every decision point in the implementation process exposes nascent programs to possible failure.
Policy makers are wildly optimistic about implementing new programs. Pressman and Wildavsky offer a mathematical insight into why this is so. Consider an implementation path in which the probability of clearing any single decision point is quite high—say, 95 percent. A casual review of that path may lead one to conclude that since clearing points A, B, and C is easy, implementation will be easy. Such reasoning overlooks the reality that the probability of success shrinks as the number of decision points increases. With three decision points, the odds fall to about 86 percent (0.95 x 0.95 x 0.95). It takes only 14 decision points for the odds to drop below 50 percent; at that point, failure is more likely than success.
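The arithmetic behind this insight is easy to verify with a few lines of Python (the function name `odds_of_success` is mine, used only for illustration):

```python
# Pressman and Wildavsky's arithmetic: if each decision point is cleared
# independently with probability 0.95, the odds of clearing all of them
# shrink geometrically with the number of points.
def odds_of_success(n, p=0.95):
    """Probability of clearing n decision points in a row."""
    return p ** n

print(f"3 decision points: {odds_of_success(3):.0%}")   # about 86%

# Find the smallest number of decision points at which failure
# becomes more likely than success.
n = 1
while odds_of_success(n) >= 0.5:
    n += 1
print(f"Odds fall below 50% at {n} decision points")    # 14
```

Each additional decision point multiplies the running probability by 0.95, so even generous odds at every step compound into worse-than-even odds overall.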
Implementing Common Core
A key assumption of Pressman and Wildavsky’s conceptual scheme is that implementation decision points are organized vertically, down through levels of government. There is also a certain amount of sequential dependence, as the domino analogy above implies. That may be true for a redevelopment program, but it’s not always true in education. I doubt that it’s true for Common Core. Education consists of loosely coupled organizational units (states, districts, schools, classes). Failure at one level may not be fatal to another. There can be good classes in bad schools, for example, good schools in bad districts, and so on. States or districts might bungle the CCSS, but savvy districts and schools could still rescue the standards and use them effectively.
Nevertheless, the vertical structure is useful for modeling how CCSS implementation will unfold. It is also useful for anticipating political opposition that the CCSS may encounter. Terry Moe has written extensively on the politics of “blocking.” When advocates of a particular education policy are victorious in the legislative arena, they have only won a battle, not a war. Opponents will show up again and again during implementation—in schools, or before school boards, or in other local forums—to continue the battle.
So let’s map the major points of vulnerability for the Common Core’s implementation. The project functions at the national, state, district, school, and classroom levels. At each of the five levels, decisions have been made or will be made regarding Common Core. The four crucial components of CCSS’s implementation—curriculum, instruction, assessment, and accountability—combine with the levels of decision making to create a minimum of twenty decision points. Imagine a 4 x 5 table with empty cells for the decision points. Future historians, by filling in the blank cells of the table, will tell the story of CCSS’s implementation.
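The table described above is just a cross product of the four components and the five levels, which a short Python sketch makes explicit (the labels come from the paragraph above):

```python
# The decision-point grid: four implementation components crossed
# with five levels of decision making.
from itertools import product

components = ["curriculum", "instruction", "assessment", "accountability"]
levels = ["national", "state", "district", "school", "classroom"]

decision_points = list(product(components, levels))
print(len(decision_points))  # 20 cells, the minimum number of decision points
```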
Cells may comprise multiple decision points. In terms of curriculum, for example, twenty states have state textbook adoption, in which state boards and departments of education select the curricular materials that public schools may purchase. The other thirty states leave that decision up to districts, but typically provide funding for purchasing materials. Currently, states and districts are selecting math programs to reflect the CCSS, offering programs to train educators on how to use the new curricula, and purchasing new materials that are beginning to appear in schools and classrooms.
Note that the whole implementation process is bottom-heavy, leading ultimately to activities in the nation’s 98,817 public schools and in the classrooms within them. Historically, curriculum controversies reach their greatest intensity when curricular materials are introduced in classrooms. That is happening now with the Common Core. Common Core won the support of elites and cleared most upper-level decision points—all but a few states are on board with CCSS. Those high-level decisions are no longer the main events in CCSS’s implementation.
The emergence of social media as a tool for mobilizing political action has undoubtedly enhanced the power of actors at the lower-level decision points to sway implementation. Forty or fifty years ago, difficulties implementing a math program in a small rural district probably would not receive much notice. In the 1960s and 1970s, the failure of “new math” wasn’t apparent for several years, until surveys revealed teachers were not using the new curricula. During the last curriculum controversy in mathematics—the math wars of the 1990s, fought over curricula aligned with the 1989 standards of the National Council of Teachers of Mathematics—the internet was just beginning to be used for organizing people politically. The website “Mathematically Correct” fostered a national network of opposition by tabulating local efforts to drive NCTM-oriented math programs out of the schools.
Today, a number of grassroots organizations have sprung up to fight against CCSS. Poorly designed math problems are widely circulated on Twitter and criticized by bloggers. I will discuss this phenomenon in greater depth in my June Chalkboard post, but suffice it for now to say that these attacks on Common Core, whether justified or not, illustrate the vulnerability of CCSS curriculum as implementation unfolds and the number of decision points multiplies.
Lack of Evidence to Guide Curriculum Decisions
Shouldn’t we expect local educators to make good decisions when choosing curriculum that is compatible with the Common Core? As my colleagues Matt Chingos and Russ Whitehurst have documented, educators have very little evidence to go on when selecting curriculum. Evidence of effectiveness is in short supply. One of the rare randomized controlled trials of elementary math curricula was conducted by Mathematica. The study followed students through grades 1 and 2 and evaluated four math programs. Although limiting the study to first and second grade curricula ensured that the programs covered many common topics, they did not produce common results. Students in three of the programs (Math Expressions, Saxon, and Scott Foresman/Addison Wesley/enVision) scored about the same, but all three outscored the fourth program (Investigations) by a statistically significant amount (an effect size of about 0.22). A student at the 50th percentile who received instruction in Investigations in first and second grade would have scored at the 59th percentile if taught from one of the other programs.
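The conversion from an effect size to a percentile shift can be checked with a short sketch, assuming normally distributed test scores (a standard, if simplifying, assumption; the function name `percentile` is mine, not the study's):

```python
# Convert a z-score to a percentile using the standard normal CDF,
# built from the error function in the standard library.
from math import erf, sqrt

def percentile(z):
    """Standard normal cumulative probability, expressed as a percentile."""
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))

# A student at the 50th percentile (z = 0) who gains 0.22 standard
# deviations moves to roughly the 59th percentile.
print(round(percentile(0.22)))  # 59
```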
What do educators go by if they can’t select on effectiveness? One popular approach is to go by alignment—how well math programs match up with the topics in the CCSS. This is a poor substitute for evidence of effectiveness. A well-aligned program covers the topics and objectives that CCSS lists for a particular grade level—it does not necessarily cover them well. One program may cover Topic A well, and students will learn because of that. Another may cover Topic A poorly, and students will not learn. Both programs are aligned with Topic A.
Summary and Conclusion
Let’s conclude by returning to the question of defining implementation. What does the implementation of CCSS mean? I have drawn on and modified Pressman and Wildavsky’s implementation model to suggest a definition: the decisions that educators make—at the national, state, district, school, and classroom levels—to realize the curriculum, instruction, assessment, and accountability systems of the Common Core. The implementation process will involve many decision points, each one leaving the CCSS vulnerable to bad decisions by officials who have scant evidence on which to act, and to the efforts of political opponents.