Responsiveness and durability: An analysis of the Accountability and State Plans rule

Elizabeth Mann Levesque
Former Brookings Expert; Student Support & Classroom Climate Consultant, University of Michigan

October 31, 2018


This report analyzes the complex dynamics surrounding the federal rulemaking process, focusing on the relationship between agency responsiveness and rule durability. Using the Accountability and State Plans rule as a case study, author Elizabeth Mann Levesque explores the input that the department received from public comments and Congress, as well as how the department revised the draft rule in response to this feedback. The analysis within provides insight into how the public, agencies, and Congress shape the content and lifespan of final rules.


Table of Contents

Introduction
I. The rulemaking process
II. The Accountability and State Plans rule
III. Input from organizations and Congress
IV. How did the department respond to input from commenters and Congress?
V. Responsiveness and rule durability: Lessons from the Accountability rule
VI. Conclusion

Introduction

Agencies in the federal bureaucracy shape public policy by issuing regulations, also known as rules. Final rules carry the weight of law (Garvey 2017). As a result, the rulemaking process affords agencies a great deal of influence over federal policy. The political appointees who head agencies are members of the president’s administration, tasked with steering policy in the direction of the administration’s goals (Lewis 2004). In this context, it should come as no surprise that agencies approach the rulemaking process strategically. For example, agencies attempt to pre-empt judicial review of the rule (Cheit 1990) and seek the optimal political conditions under which to finalize rules (Potter 2017). This research suggests that, through the rulemaking process, agencies work to minimize the chances that a final rule will subsequently be revised or repealed. In other words, it appears that agencies attempt to increase rule durability, where durability refers to the final rule remaining in effect without substantive revisions.

At the same time, agency behavior during the rulemaking process is constrained by procedural requirements. Many of these requirements were initially established in the Administrative Procedure Act of 1946 (APA), which was adopted to rein in the power of the federal bureaucracy (Kerwin and Furlong 2011). Additional requirements with respect to the rulemaking process have been added over time through executive orders, court decisions, and legislation (Kerwin and Furlong 2011; Garvey 2017; Carey 2013). Congressional committees may also hold oversight hearings during the rulemaking process in which they call in witnesses, including agency heads, to discuss draft rules.

As a result of these procedural requirements and the prerogative of Congress to hold oversight hearings, agencies may receive input from a wide variety of stakeholders during the rulemaking process. This report focuses on the role of feedback received via congressional oversight hearings and the notice and comment process. Established by Section 553 of the APA, the notice and comment process requires agencies to solicit input from the public on draft rules (Kerwin and Furlong 2011). Agencies can decide whether or not to incorporate commenters’ feedback into the final rule, although they must respond to the issues that commenters raise and justify their decisions about the content of the final rule.

Agencies could thus choose not to incorporate into the final rule the feedback they receive from commenters and from members of Congress. Indeed, if feedback is at odds with the administration's interpretation of the relevant law, the agency may have substantial motivation to defend the draft rule as written rather than to revise it. However, doing so may threaten the durability of the final rule. Public commenters may file a lawsuit disputing the agency's decision to ignore their input, raising the risk of judicial review of the rule. In subsequent administrations, new political appointees may revise the rule to fix what they believe the agency got wrong the first time. Members of Congress who feel the agency did not adequately address concerns raised during rulemaking could move to repeal the rule shortly after it is finalized using the Congressional Review Act (CRA). Even after the window to use the CRA closes, Congress could enact legislation that revises or overrides a rule (Potter 2017).

In short, while attempting to create durable public policy through final rules, agencies must adhere to procedural requirements and anticipate the potential for oversight. How do agencies attempt to balance these priorities, how does the final rule reflect these efforts, and with what implications for rule durability? These questions speak to a broader inquiry about bureaucratic policymaking and oversight of this process: What is the nature of the relationship between agency responsiveness and rule durability?

The subsequent analysis offers an exploration of these important issues through an in-depth case study of the Accountability and State Plans rule, referred to hereafter as the Accountability rule. This rule offers an excellent opportunity to examine the relationship between agency responsiveness and rule durability. The Department of Education proposed this rule to implement core provisions of the Every Student Succeeds Act (ESSA), signed by President Obama in December 2015. The Accountability rule had the potential to be highly consequential for education policy, and the department’s proposed rule garnered a great deal of attention. The House and Senate committees on education held nine hearings in 2016 related to the department’s implementation of ESSA, some of which focused heavily on this rule, and the department received over 21,000 public comments on the draft Accountability rule.

This rulemaking process was marked by disagreements between Republican members of Congress and President Obama’s appointee, then-Secretary of Education John King. As the analysis below indicates, on several aspects of the draft rule, the department received substantial opposition from commenters and members of Congress alike. This context provides an opportunity to examine how the agency responded to this feedback.

Furthermore, the timing of this rulemaking process and the ultimate fate of the final rule make it particularly useful for examining the relationship between responsiveness and the durability of final rules. The draft rule was published in spring 2016, and the public comment period and congressional hearings occurred before the 2016 presidential election. During this time, it was widely expected that the Democratic nominee, Hillary Clinton, would beat the Republican nominee, Donald Trump. Of course, Trump won the election in a historic upset. Just weeks later, on Nov. 29, the final Accountability rule was published. Within months, the rule was repealed via the CRA, one of only a handful of rules ever to be repealed in this manner (Lipton and Lee 2017).

The rule’s repeal was far from inevitable, raising the question: Why was it repealed? Further, and of particular interest here, what might its repeal tell us about the relationship between agency responsiveness to concerns raised during the rulemaking process and rule durability? While use of the CRA is rare, this analysis is nonetheless relevant more broadly. The analysis begins with a close look at how the Department of Education responded to public comments, a process designed to increase transparency and accountability in the rulemaking process (Kerwin and Furlong 2011), followed by a discussion of how the department responded to opinions expressed by congressional Republicans, the majority party before and after the election. The report then examines how Congress reacted to the department’s final rule, offering insight into the relationship between responsiveness and rule durability. The relationships at the core of this analysis, between the agency and the public and the agency and Congress, remain relevant for many rules and the rulemaking process more generally.

In this context, this report explores the following questions with respect to the Accountability rule: What input did the department receive from organizations and members of Congress, how did the department respond, and what does this rulemaking process suggest about the relationship between agency responsiveness and rule durability?

The rulemaking process

This section offers a streamlined introduction to the rulemaking process, highlighting the main steps. Many other steps often occur along the way; see Carey (2013) for a detailed discussion.

Following passage of new legislation, agencies often write rules to clarify how specific components of the law should be implemented. This process typically begins with the agency drafting a rule. Before being published for public comment, rules deemed “significant” must receive approval from the Office of Information and Regulatory Affairs (OIRA) within the Office of Management and Budget (Carey 2013, p. 2). After receiving approval (if necessary), the agency publishes the draft rule, typically through a Notice of Proposed Rulemaking (NPRM) in the Federal Register (Carey 2013, p. 6). The public then has an opportunity to read the draft rule and submit written comments, typically for a period of 30 to 60 days (Carey 2013, p. 6). This is known as the comment period. As discussed above, members of Congress may also hold oversight hearings during the rulemaking process.

Notably, public comments and congressional oversight hearings are both mechanisms that provide for public accountability in the rulemaking process, wherein agencies (non-elected bodies of government) issue rules that profoundly affect public policy. By giving members of the public an opportunity to provide input on proposed rules and requiring agencies to respond to this feedback in writing, the comment period was designed to introduce more transparency and accountability into the rulemaking process (Kerwin and Furlong 2011). The comment process is one avenue through which agencies can receive information relevant to the proposed rule from experts, practitioners, and those whose work or lives would be affected by the rule (Kerwin and Furlong 2011, p. 169). For their part, congressional oversight hearings provide opportunities to hold agencies accountable for their actions (McCubbins and Schwartz 1984).

After the comment period closes, the agency drafts revisions to the rule. When revising draft regulations, the agency considers input received during the comment period, additional relevant evidence in the rulemaking record, the agency’s interpretation of the law, and the administration’s perspective on what policies would most effectively fulfill the law’s goals. Agencies are not required to include revisions recommended by commenters (Naughton et al. 2009, p. 260), although they must respond to each “significant” issue raised by commenters when they publish the final rule (Garvey 2017, p. 3). “Significant” rules require final approval from OIRA (Carey 2013, p. 2). Once approved by the required institutions, the final rule is published in the Federal Register.

However, failure to adequately justify the rule may leave the final rule vulnerable to revision or repeal. Rules may be revised through legal challenges that result in judicial review of the rule (Carey 2013, p. 16). In the case of a legal challenge, courts typically apply the “arbitrary and capricious” standard of review (Garvey 2017, p. 14). Public comments may be used as part of the administrative record for judicial review; commenters and agencies alike are aware of this potential. Elliot (1992) argues that the primary purpose of the comment period is to establish a record for subsequent judicial review, while Cheit (1990) argues that agencies’ written responses to commenters are designed primarily to “ward off judicial review” (p. 217).

In addition, the Congressional Review Act allows for the repeal of a rule within 60 days after Congress receives the final rule, excluding congressional recess or adjournment (Carey 2013, p. 16). The proposal to repeal a rule must be introduced by a member of Congress, approved by both chambers of Congress, and enacted into law (Carey 2013, p. 16). Once repealed under the CRA, a “substantially similar” rule cannot be issued (Carey 2013, p. 16). The CRA is used rarely. Only one rule was repealed under this authority between the CRA’s passage in 1996 and 2017. In 2017, the Republican-controlled Congress used the CRA to repeal 14 rules finalized in the last days of the Obama administration (Lipton and Lee 2017).

Figure 1, from a report by the Congressional Research Service (Carey 2013), summarizes the typical rulemaking process.

Figure 1: Summary of the rulemaking process


The Accountability and State Plans rule

The Accountability rule was issued under the Every Student Succeeds Act (ESSA). Passed in 2015, ESSA was a long-overdue replacement for the No Child Left Behind Act of 2001 (NCLB), which was initially scheduled for reauthorization in 2007. Eight years after this expiration date, policymakers, advocates, educators, parents, and students were more than ready for a change. By 2015, NCLB was widely recognized by Republicans and Democrats alike as deeply flawed. Specific complaints about the law included its prescriptions for how to measure school performance and how to intervene in schools that failed to make “adequate yearly progress,” measured in large part by student performance on standardized tests. More generally, the law was seen as overly prescriptive and punitive, restricting state flexibility and stifling innovation.

In contrast, ESSA allows states much more flexibility in setting statewide goals, measuring school performance, and developing strategies to intervene in schools that underperform. Democratic and Republican lawmakers celebrated the bipartisan law as a transition to an era in which state policymakers would enjoy much more control over designing and implementing a process for holding schools accountable for student progress. When President Obama signed ESSA into law on Dec. 10, 2015, he famously called the law a “Christmas miracle,” summing up the relief felt on both sides of the aisle at finally passing a law to replace NCLB. The passage of ESSA thus signaled the beginning of a new era in accountability and was widely viewed as an opportunity for state and local education agencies (SEAs and LEAs, respectively) to revamp their accountability systems.

While providing increased flexibility to states, ESSA retains the basic accountability framework that NCLB put in place as well as federal oversight of state implementation of the law. For example, ESSA requires states to measure school performance based in part on student performance on standardized tests. ESSA also retains requirements to report performance for “sub-groups” of students, perhaps the one innovation in NCLB that many agreed was a step in the right direction. These sub-groups include economically disadvantaged students, students from each major racial and ethnic group, children with disabilities, English language learners, homeless students, students in foster care, and students with a parent who is a member of the Armed Forces.

ESSA also requires states to create and implement accountability plans, which are documents that detail how the state’s new accountability system follows the requirements laid out in ESSA; Education Week describes these plans as “accountability roadmap[s].” Under ESSA, each state’s accountability plan must be approved by the secretary of education (Public Law 114-95). These plans include details such as: long-term goals regarding graduation rates, academic achievement, and English language proficiency; how the state will measure school performance; and how the state will identify schools in need of improvement.

Shortly after the law’s passage, the Department of Education began the rulemaking process: writing regulations about how to implement the law. The department drafted the Accountability rule to specify the requirements states had to follow when designing their new accountability plans. The comment period for the Accountability rule began on May 31, 2016, when the Department of Education published the draft rule via NPRM in the Federal Register. The comment window was open for two months, through Aug. 1, 2016. According to the NPRM, the purpose of the proposed regulation was “to provide clarity and support” to state and local education agencies as they implemented the following three aspects of ESSA: accountability requirements, state and local education agency report cards, and requirements for consolidated state plans.

The NPRM invited members of the public to submit comments on “any issues related to these proposed regulations.” Furthermore, the department specified five issues on which it specifically sought feedback and “additional information” from commenters. The department encouraged commenters to identify the specific sections of the regulation that their comment addressed, arrange their comments on sections of the regulation in the same order as they appear in the proposed regulation, and provide “a detailed rationale for each response.”

Over the course of the next two months, thousands of commenters wrote to the department, providing comments that ranged from one sentence to many pages. The comments varied widely in format, specificity, and sophistication. Many of the comments appeared to be form letters or language distributed by one organization and submitted by thousands of individuals. Hundreds of organizations submitted comments, from local superintendents to national advocacy organizations and a wide variety of others.

In addition to input received via public comments, the Department of Education also heard from Congress. Specifically, the House and Senate held nine oversight hearings during 2016 with respect to the various rules proposed under ESSA. Several of these hearings focused in large part on the Accountability rule. Of note, during the rulemaking process for ESSA, both chambers were controlled by the Republican Party; this meant that both the Senate Health, Education, Labor, and Pensions Committee and the House Committee on Education and the Workforce were chaired by Republicans. These oversight hearings, discussed in detail below, provided the department with information on how members of Congress viewed the agency’s approach to rulemaking as well as specific input on the Accountability rule. As I discuss below, whether and how agencies address concerns raised by members of Congress may have important implications for rule durability.

Following the comment period, the Department of Education reviewed all comments and additional relevant evidence and made revisions to the rule. The final rule was approved and subsequently published in the Federal Register on Nov. 29, 2016. Upon publication of the final rule, as required, the department also published its responses to each significant issue raised by commenters.

By this time, Donald Trump had won the 2016 election, ensuring that control of the presidency would soon pass from the Democrats to the Republicans. Reflecting this political climate, the same day the rule was published, news outlets raised questions about whether the rule would be repealed. As I discuss in more detail below, both chambers of Congress passed a resolution to repeal the rule using the authority of the Congressional Review Act. President Trump approved the resolution on March 27, 2017. With that, the Accountability and State Plans rule was discarded. As discussed above, the rule’s repeal offers an opportunity to examine not only department responsiveness to feedback it received during the comment period, but also the relationship between responsiveness and the durability of final rules.

Input from organizations and Congress

To explore agency responsiveness in the context of this rule, this section discusses input the Department of Education received from organizations via public comments and from members of Congress via oversight hearings. This analysis draws in large part on an original dataset of public comments that were coded to identify organizations and their feedback.

Public comments submitted by organizations

The department received approximately 21,000 comments on this rule between May 31 and Aug. 1, 2016. Many comments were part of mass comment campaigns. For example, just three types of form letters together account for roughly 8,000 comments. Visual inspection of the comments suggests that other form letters each account for dozens or even hundreds of comments. In addition, thousands of individuals submitted original comments. These individuals self-identified into a wide variety of categories, including those who identified simply as individuals (about 8,600), teachers (about 750), parent/relative (about 650), and others.

This analysis focuses on comments submitted by organizations, which constitute a relatively small but valuable subset of public comments. The analysis is limited to these comments given the well-documented role that interest groups play in various aspects of the policymaking process (Baumgartner and Jones 1993; Kollman 1998; Hall and Deardorff 2006), including oversight of the bureaucracy (McCubbins and Schwartz 1984; Epstein and O’Halloran 1995; Balla and Wright 2001) and the rulemaking process in particular (Hall and Miler 2008).

Given the extent of input received from organizations during the comment process, this rule provides an excellent opportunity to examine how different types of organizations weighed in, where consensus developed, and how the department responded. In all, the research team identified 512 comments for analysis that met two criteria:

  1. Submitted by a leader of an organization, a representative acting on behalf of an organization, or a public official (including elected representatives).
  2. Addressed at least one specific section of the regulation.

The subsequent analysis is based on these 512 comments. For each comment, the research team identified the organization that submitted the comment, the state in which the organization is based, and the geographic scope of the organization (local, state, state affiliate, or national). Each organization is coded as one of the following types, listed below in alphabetical order with an example of each type (a minimal sketch of this coding scheme follows the list):

  • Advocacy organization (National Council of La Raza)
  • Business (K12, Inc.)
  • Faith-based organization (Pennsylvania Catholic Conference)
  • Federal government, including members of Congress (National Endowment for the Arts)
  • Labor union (California Teachers Association)
  • Local education agency, including school districts, superintendents, assistant superintendents, local school boards, and local school board members (Cleveland Metropolitan School District)
  • Philanthropy (William T. Grant Foundation)
  • Professional association (Missouri School Boards Association)
  • Research center (Center for Civil Rights Remedies at UCLA’s Civil Rights Project)
  • State government, including state education agencies, governors, state legislators, and other state government organizations (New Jersey Department of Education)
  • Tribal government or representative (Gila River Indian Community)
  • Other
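To make this coding scheme concrete, the following minimal sketch (in Python; the class and field names are hypothetical, not drawn from the report’s materials) shows how each coded comment might be represented:

```python
from dataclasses import dataclass
from enum import Enum

class Scope(Enum):
    """Geographic scope of the commenting organization."""
    LOCAL = "local"
    STATE = "state"
    STATE_AFFILIATE = "state affiliate"
    NATIONAL = "national"

@dataclass
class CodedComment:
    """One organization comment, coded as described above."""
    organization: str  # e.g., "Cleveland Metropolitan School District"
    state: str         # state in which the organization is based
    scope: Scope       # local, state, state affiliate, or national
    org_type: str      # one of the twelve categories listed above
```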

These categories are similar to those used in other analyses. For example, in their analysis of how interest groups participate in the rulemaking process, Furlong and Kerwin (2005, p. 358) include the following categories: business/company, government (includes federal, state, and municipal), labor unions, interest groups, trade associations, professional associations, research, government affairs, and other. For the purpose of this analysis, the government category is separated into federal government, state government, and local education agencies, which are the local units of government responsible for administering K-12 education. The professional association category includes professional associations and the few trade associations in these data (such as the Software & Information Industry Association), given their similar purposes of representing members of a profession or industry. Philanthropies, faith-based organizations, and tribal governments are identified as separate categories. Because “interest group” can be used as an umbrella term to refer to several types of organizations, I use the term “advocacy organization” to describe nonprofit organizations that engage in advocacy on particular issues or topics on behalf of their constituency (an example is the Leadership Conference on Civil and Human Rights). The research center category includes think tanks (such as the Center for American Progress and the Thomas B. Fordham Institute) as well as university centers and other research-oriented centers.

Figure 2 illustrates the share of comments submitted by each organization type. Notably, organizations in the same category do not all share the same preferences. Previous work (Golden 1998) has found that within one type of organization, commenters express different and opposing preferences, and the same is true of the data here.

Figure 2: Participation by organization type

Several patterns in participation are noteworthy. First, the largest share of comments comes from advocacy organizations. Within this category, there is a wide variety of nonprofit organizations that work to advance a specific agenda or goal with respect to education policy. For example, this category includes civil rights organizations like the Leadership Conference on Civil and Human Rights, parents’ groups like national and state Parent Teacher Associations, disability advocacy groups such as the Consortium for Citizens with Disabilities, citizen groups like Easter Seals, organizations that advocate on behalf of policymakers like the National Governors Association, and many more. Unsurprisingly, both agreements and disagreements were evident across organizations within this category.

Prior research generally finds that input from nonprofit or “citizen groups” is dwarfed by input from regulated industries or business (Golden 1998; Yackee 2006a). Clearly, that is not the case here. This finding makes sense given that the regulated industry is public sector K-12 education. In this context, businesses are not primary stakeholders, and as a result, we would not expect theirs to be the loudest voices in the comment process.

We also see many comments from other types of organizations whom the rule would directly affect, including local education agencies, professional associations representing education professionals such as teachers and principals, and state government offices and officials. Notably, although only 13 percent of organization comments came from state governments, this figure may understate participation from state governments. Indeed, 82 percent of states (including D.C.) had at least one state official submit a comment. (This number increases to 88 percent when we include three state signatories on a comment submitted jointly by multiple organizations.)

Moreover, a relatively large share of comments (20 percent) are from local education agencies, the local units of government tasked with implementing many components of ESSA plans. Interestingly, 27 percent of comments from LEAs were created using a template supplied by the School Superintendents Association (AASA), a national professional association. Letters using the AASA template included a detailed set of feedback and recommendations with respect to the rule. Providing this type of template may be a savvy strategy, given evidence that agencies are more likely to be responsive to “sophisticated” comments (Cuéllar 2005) and recommendations that commenters “provide information directly relevant to analyzing the rule and its effects” (Looney 2018, p. 2). Furthermore, research suggests that agencies are more responsive to commenter input when a consensus emerges among commenters (Golden 1998; Yackee 2006a). The use of templates could help to generate a consensus among commenters, which in turn may help convince regulators to revise a rule consistent with that consensus.

Further, several organization types (research centers, labor unions, faith-based organizations, philanthropies, federal government, tribal organizations, businesses, and other) each contributed 5 percent or fewer of all organization comments. While the small share of comments from labor unions may be somewhat surprising to observers of education policy, it is worth noting that thousands of individual union members submitted a form letter disseminated by the National Education Association, and thousands of union members signed onto a comment from the American Federation of Teachers.

In addition, Figure 3 indicates the share of organization comments from each state, excluding comments from national organizations and comments with signatories representing more than one state. This map suggests that the Department of Education heard from local, state, and state affiliate organizations from across the country, receiving at least one comment from organizations based in 47 states and D.C.

Figure 3: Share of comments from local, state, and state affiliate organizations


Finally, the balance of input from national, state, and local organizations was relatively even. State and national organizations each accounted for about 38 percent of organization comments, while local organizations accounted for about 24 percent. Notably, 81 percent of comments from local organizations came from local education agencies. Thus, participation by local organizations primarily reflects input from LEA leaders, such as superintendents and school boards.

Input from organizations

Next, this section examines what type of feedback organizations provided. This analysis is based on an original dataset of these 512 comments. The research team coded each comment to identify, first, which sections of the regulation the commenter provided feedback on. Some commenters mentioned only one section of the regulation, while others mentioned dozens. Across the 512 comments submitted by organizations, we identified 4,527 mentions of specific sections within the regulation. These are not 4,527 unique regulation sections; for example, there are 250 mentions of section 200.18.b.4. As much as possible, for each mention, we identified the fourth paragraph level of the regulation section that the mention referred to (e.g., Section 200.18.b.4). It was not possible to identify all mentions with this level of specificity, as some of the comments provided feedback in more general terms, either at the second (e.g., Section 200.18) or third paragraph level (e.g., Section 200.18.b). Of the 4,527 mentions, we identified about 78 percent at the fourth paragraph level, 15 percent at the third paragraph level, and 7 percent at the second paragraph level. Commenters mentioned 378 sections of the regulation overall.
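As an illustration of this specificity tally, the sketch below (a hypothetical helper with toy data standing in for the actual 4,527 coded mentions) counts mentions by the paragraph level at which they were identified:

```python
from collections import Counter

def paragraph_level(section_id: str) -> int:
    """Depth of a regulation section identifier:
    '200.18' -> 2, '200.18.b' -> 3, '200.18.b.4' -> 4."""
    return len(section_id.split("."))

# Toy data standing in for the 4,527 coded mentions.
mentions = ["200.18.b.4", "200.18.b.4", "200.18.b", "200.18"]
level_counts = Counter(paragraph_level(m) for m in mentions)
for level, count in sorted(level_counts.items()):
    print(f"level {level}: {count / len(mentions):.0%} of mentions")
```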

The research team coded each mention as recommending a major change, minor change, or expressing support. Major changes include requests to substantially revise the substantive requirements of a section, statements of opposition to the regulation section, and recommendations to delete a section entirely. Minor changes include requests for clarification, definitions, examples, guidance, or other small adjustments (including removal or addition of text) consistent with the substance of the proposed section. In sections coded as support, the commenter explicitly expressed support for a specific section of the regulation.

Analysis of commenter feedback

Using these data, we can analyze the type of feedback that organizations provided to the department via public comments. To begin, Figure 4 illustrates that, across organization types, commenters were more likely to recommend changes to the regulation than to express support: 44 percent of mentions were coded as major changes, 34 percent as minor changes, and 22 percent as support. However, it is not possible to say whether few organizations in fact supported the regulation or whether organizations primarily see the comment period as an opportunity to express critical feedback.

Figure 4: Preference, by organization type

Further, Figure 5 illustrates the ten sections of the regulation that received the most “major change” mentions in these data. It appears that this feedback came from a variety of organizations, rather than from one constituency or set of interests. Although no single organization type accounts for the majority of major change mentions with respect to any of these ten sections, local education agencies accounted for a large share of critical feedback on several of these sections. State government officials, professional associations, labor unions, and advocacy organizations also consistently accounted for a notable share of major change mentions on these sections. In other words, the department received requests to make major changes from a relatively wide variety of stakeholders across these sections.

Figure 5: 10 sections with the most “major change” mentions

Next, to provide insight into the specific changes that organizations requested, Table 1 focuses on the five sections that received the most major change mentions. Column 1 lists the section number, and Column 2 indicates the share of “major change” mentions out of the total mentions for each section. For example, section 200.18.b.4 received the most major change mentions out of all sections discussed by commenters. Of the 250 total mentions on this section, 201 requested major changes, or 80 percent of the total mentions on that section. The majority of mentions on each of these sections requested major changes, with major change mentions ranging from 60 to 99 percent of the total mentions per section.
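The percentages in Table 1 follow directly from these counts; a quick sketch, using the mention totals reported in the table:

```python
# (major change mentions, total mentions) per section, from Table 1.
table1 = {
    "200.18.b.4": (201, 250),
    "200.19.d.1": (176, 178),
    "200.17.a.2": (124, 146),
    "200.15.b.2": (116, 194),
    "299.13.c.1": (73, 76),
}
for section, (major, total) in table1.items():
    print(f"{section}: {major}/{total} = {major / total:.0%} major change")
```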

For these five sections, we coded each major change mention to identify the type of change each commenter proposed. We then identified the areas of consensus that developed on each section. Column 3 lists these consensuses and the share of commenters who agreed with each. For example, as discussed, 80 percent of mentions regarding section 200.18.b.4 requested a major change. Of these major change mentions, 62 percent requested more state flexibility in deciding whether to use the summative rating, while 38 percent opposed the summative rating altogether and/or wanted that section deleted. A common theme across each consensus is a request for more flexibility; none of these consensuses reflect a desire for stronger regulation from the department.

Table 1: Consensus on 5 sections with most major change mentions

Section | Percent of mentions that indicated major change | Major change consensus types
200.18.b.4 | 80% (201/250) | Primary consensus: request for state flexibility to decide whether to use summative rating/state-designed options/additional options (62%). Secondary consensus: total opposition to this section/request to delete (38%).
200.19.d.1 | 99% (176/178) | Primary consensus: timeline moved back one year (78%). Secondary consensus: timeline moved back, no specific date proposed (19%). Other proposal to move timeline back (3%).
200.17.a.2 | 85% (124/146) | Primary consensus: states should decide minimum n-size/regulation should be silent on this issue (56%). Secondary consensus: minimum n-size should be 10 (35%). Minor consensus: minimum n-size should be lower than 30, not necessarily equal to 10 (8%). Other opposition (1%).
200.15.b.2 | 60% (116/194) | Primary consensus: opposition to options listed by the department for intervening in schools with participation rates below 95% (87%). Secondary consensus: specific opposition to option IV due to “equally rigorous” state-designed option and/or opposition to secretary approval of state plan (12%). Other opposition (1%).
299.13.c.1 | 96% (73/76) | Primary consensus: opposition to burden of payment falling on LEA for foster child transportation; wants to emphasize mutual obligation for LEA and child welfare agencies to collaborate with respect to the transportation of children in foster care (96%). Other opposition (4%).
Table 1 indicates that for each of these sections, a clear consensus emerged. Some sections, like 299.13.c.1, enjoyed an almost unanimous consensus, in which almost all major change mentions advocated for the same type of revision (in that case, revising the regulation’s assignment of responsibility for the transportation of children in foster care). With respect to Section 200.19.d.1, commenters almost unanimously supported a later timeline; 78 percent of major change mentions requested moving the timeline back by one year, and another 19 percent requested a later timeline without specifying a date. Regarding section 200.15.b.2, 87 percent of major change mentions expressed opposition to the Department’s list of interventions for schools to choose from when standardized test participation rates dropped below 95 percent.

Sections 200.17.a.2 and 200.18.b.4 also received pushback, but there was slightly more variation among those opposed to the proposed rule. For 200.17.a.2, the majority of major change mentions, 56 percent, were in favor of allowing states to decide the minimum size for student sub-groups, while 35 percent advocated for an “n-size” of 10 students. For Section 200.18.b.4, 62 percent of major change mentions requested additional flexibility in determining whether or not to issue a summative rating for schools, and another 38 percent expressed complete opposition to the proposed section.

Each of these sections has a clearly identifiable primary consensus that received support from a majority of commenters who recommended major changes on that section (primary consensus for each section is noted in Table 1). A remaining question is whether support for each primary consensus came from one organization type. If so, that may suggest that the consensus was in fact driven by input from one type of constituency. For example, if support for a primary consensus came overwhelmingly from labor unions, this consensus would essentially reflect the labor union perspective rather than a perspective shared by different organization types with varied constituencies.

To investigate, Figure 6 illustrates the share of each primary consensus accounted for by labor unions, LEAs, state government, professional associations, advocacy organizations, and others. The “other” category includes organization types that contributed less than 10 percent of major change mentions on each of these sections. The denominator for each row is the number of major change mentions that supported the primary consensus for each section.

It does not appear that support for any of these primary consensuses came solely from one organization type. For example, with respect to Section 200.18.b.4, 125 mentions requested additional state flexibility regarding whether to use a summative rating for their schools. These 125 mentions came from LEAs (41 percent), state government (21 percent), professional associations (19 percent), advocacy organizations (10 percent), labor unions (2 percent), and others (4 percent). While LEAs, and to a lesser extent professional associations, account for a good deal of each consensus, each primary consensus received support from several types of organizations representing a variety of stakeholders.
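A sketch of how these within-consensus shares are computed (a hypothetical function, with toy data rather than the actual 125 mentions):

```python
from collections import Counter

def consensus_shares(org_types: list[str]) -> dict[str, float]:
    """Share of a section's primary-consensus mentions contributed by
    each organization type; the denominator is the total number of
    mentions supporting that section's primary consensus."""
    total = len(org_types)
    return {org: n / total for org, n in Counter(org_types).items()}

# Toy stand-in for the mentions supporting one primary consensus.
for org, share in consensus_shares(
    ["LEA", "LEA", "state government", "professional association"]
).items():
    print(f"{org}: {share:.0%}")
```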

Figure 6: Primary consensus

Input from Congress

Before assessing how the department responded to this feedback, I also consider the input the agency received from Congress. As mentioned above, Congress held nine hearings in 2016 that focused on the department’s approach to rulemaking under ESSA. Some of these hearings focused specifically on the Accountability rule, while others focused more generally on the rulemaking process or on other proposed rules issued under ESSA. Table 2 summarizes these hearings.

Table 2: Congressional hearings on ESSA implementation during 2016

Date | Committee | Hearing title
2/10/2016 | House Subcommittee on Early Childhood, Elementary, and Secondary Education | Next Steps for K-12 Education: Implementing the Promise to Restore State and Local Control
2/23/2016 | Senate Committee on Health, Education, Labor, and Pensions (HELP) | ESSA Implementation in States and School Districts: Perspectives from Education Leaders
2/25/2016 | House Committee on Education and the Workforce (CEW) | Next Steps for K-12 Education: Upholding the Letter and Intent of the Every Student Succeeds Act
4/12/2016 | Senate HELP | ESSA Implementation in States and School Districts: Perspectives from the U.S. Secretary of Education
5/18/2016 | Senate HELP | ESSA Implementation: Perspectives from Education Stakeholders
6/23/2016 | House CEW | Next Steps in K-12 Education: Examining Recent Efforts to Implement the Every Student Succeeds Act
6/29/2016 | Senate HELP | ESSA Implementation: Update from the U.S. Secretary of Education on Proposed Regulations
7/14/2016 | Senate HELP | ESSA Implementation: Perspectives from Education Stakeholders on Proposed Regulations
9/21/2016 | House Subcommittee on Early Childhood, Elementary, and Secondary Education | Supplanting the Law and Local Education Authority Through Regulatory Fiat

Here, I examine the feedback that members of Congress shared during these hearings. I focus in particular on viewpoints expressed by Republicans, given that Republican majorities in the House and Senate subsequently voted to repeal the finalized rule.

In the three hearings held in February, before the department published the draft Accountability rule, the Republican chairs of the House Subcommittee on Early Childhood, Elementary, and Secondary Education, Senate HELP Committee, and House Committee on Education and the Workforce made clear that they would be watching the Department of Education’s rulemaking process carefully. In his opening statement during the Feb. 10 hearing, Rep. Todd Rokita (R-Ind.) essentially warned the department that Congress would closely monitor the implementation of ESSA:

Over the last several years, the administration has routinely taken a top-down approach to education, imposing on States and school districts a backdoor agenda that has sparked bipartisan opposition and harmed education reform efforts. … Now, moving forward, it’s going to be our collective responsibility to hold the Department of Education accountable for how it implements the law. This is what Congress does and is supposed to do. Congress promised to restore State and local control over K-12 education, and now it’s our job to ensure that promise is kept.

Similarly, Sen. Lamar Alexander (R-Tenn.), one of ESSA’s primary architects and a former secretary of education himself, opened his Feb. 23 hearing on ESSA regulations by citing how the law constricted the department’s role and signaling his intent to pay close attention to the rulemaking process: “The Every Student Succeeds Act very clearly changed the way the Department of Education does business. It very clearly put States, school districts, principals, teachers, and parents back in charge. … But a law that is not properly implemented is not worth the paper it’s printed on. This year, a major priority of this committee will be to make sure that this bill is implemented the way Congress wrote it.”

Alexander’s counterpart in the House, Rep. John Kline (R-Minn.), the chair of the Committee on Education and the Workforce, echoed this sentiment during the Feb. 25 hearing:

Congress did not want to repeat the mistakes of the past, and we certainly did not want a Department of Education that would continue to substitute its will for the will of Congress and the American people. … That is why we are here today. We want to learn what actions the department intends to take to implement the law and help ensure the department acts in a manner that strictly adheres to the letter and intent of the law.

From the beginning of the rulemaking process, then, the perspective of Republican leadership in Congress was clear: They expected the department to hew closely to the letter of the law, and they intended to monitor the process closely.

For their part, Democratic leaders of the House and Senate committees, Rep. Bobby Scott (Va.) and Sen. Patty Murray (Wash.), respectively, discussed the importance of implementing the law faithfully as well as the important role of the federal government. Exemplifying this approach, in her opening statement during the Feb. 23 hearing, Sen. Murray stated: “I also expect the Department to use its full authority under the Every Student Succeeds Act to hold schools and States accountable for offering a quality education.” Her counterpart in the House, Rep. Scott, similarly emphasized the role of the federal government in his opening statement during a Feb. 25 hearing: “The Federal Government has an important role to play in setting high expectations both for systems and for the students those systems serve. We have to maintain vigorous oversight and enforcement to ensure that these expectations are met.”

Comparing the opening statements between the Republican and Democratic leadership in the House and Senate suggests a difference of opinion with respect to the department’s role in the rulemaking process, and more broadly, in oversight of ESSA implementation. While Republican leadership emphasized the roll-back of federal authority via ESSA, Democratic leadership emphasized the importance of federal oversight to hold states accountable. This ideological difference would recur throughout the hearings during the spring and summer of 2016 as Republican senators criticized then-Education Secretary John King’s approach to rulemaking.

With respect to the Accountability rule, members of Congress raised a number of concerns over the course of several hearings. For example, Democratic and Republican members of Congress, including Sens. Murray and Alexander, respectively, expressed concern about whether the implementation timeline proposed in the rule gave states sufficient time to develop new accountability plans. There was also some bipartisan opposition to the regulation’s requirement that LEAs would have to pay for the transportation of students in foster care in the event that local child welfare agencies and LEAs could not agree on a cost-sharing system. Both of these issues are among the top five sections of the regulation that received the most critical feedback from commenters.

In addition to these and other issues that arose over the course of the hearings, Republican members of Congress raised objections to the summative rating requirement (section 200.18.b.4), under which each state would be required to assign every school a single rating, such as a letter grade, to describe school performance. While Secretary King defended the summative rating requirement as an important mechanism to increase transparency, Sen. Alexander repeatedly expressed frustration with the department’s interpretation of the statute. In his opening statement during the June 29 hearing, Alexander told Secretary King, “You’ve invented out of whole cloth a summative rating system that is nowhere in the law.” This concern was reflected in public comments; indeed, as discussed above, this section received the most “major change” mentions among commenters in the data analyzed here.

More generally, Republican members of Congress objected to the department’s approach to the rulemaking process, criticizing the department for proposing regulations that went beyond the letter and intent of the law. For example, Sen. Rand Paul (R-Ky.) asked a witness, “Do you think we are having some problems between the regulations actually coming forward and representing the true intent of what the bill said? And do you think that’s a problem?”

While critiques of the regulation were not limited to Republicans, Democratic leadership in Congress expressed support for the department’s approach. For example, Sen. Murray, the ranking member of the Senate HELP committee, praised Secretary King’s approach to the Accountability regulation: “Secretary King, I appreciate the work you’ve done here to prioritize the regulations focused on implementing the Federal guardrails in the law, and I’m very glad to see strong regulations coming out that make sure the law operates as it was intended and truly accomplishes the clear accountability goals we laid out. This is good news for students; I hope it continues.”

In sum, several of the sections on which organizations were the most critical via public comments were also sections on which members of Congress raised concerns during oversight hearings. Bipartisan concern arose with respect to several issues, such as the implementation timeline and responsibility for the transportation of students in foster care. Furthermore, Republicans, led by Sen. Alexander, repeatedly expressed their frustration with what they perceived as the department’s overreach via the rulemaking process. In their view, ESSA left little room for interpretation and gave the department a very small role to play through the regulatory process.

How did the Department of Education respond to input from commenters and Congress?

The notice and comment period closed on Aug. 1, 2016, and the last hearing in which members of Congress specifically addressed the Accountability rule was held on July 14. The department subsequently revised the draft rule and published the final rule on Nov. 29, 2016. This section begins with a brief discussion of the factors agencies frequently weigh when deciding how to revise a draft rule. Then, I discuss the extent to which the department’s revisions to the draft Accountability rule reflected input from organizations and members of Congress. Ultimately, this analysis suggests that the department was relatively responsive to critical feedback from commenters and members of Congress while also holding firm on several issues.

Agency considerations in revising draft regulations

As discussed above, agencies enjoy discretion over how to revise draft rules in light of public input. Several factors may influence whether an agency is more likely to make a revision in response to input from commenters. Research suggests that agencies are more likely to make revisions when there is a high degree of consensus among commenters (Golden 1998; Yackee 2006a). Additional research suggests that the volume of comments matters; in a sample of “everyday” rules, Yackee and McKay (2007) find that “the direction of change desired by the majority of commenters is generally realized in the final rule” (p. 345). However, agencies do not simply revise rules based on the will of the majority (Office of the Federal Register, p. 6), nor does a large volume of comments on one issue guarantee that the agency will adopt the requested change.

In addition, agencies may try to minimize the chances of subsequent revision to or repeal of final rules through various strategies, including addressing concerns raised by members of Congress as well as circumventing oversight. For example, research suggests that when congressional attention to a rule increases, the influence of interest groups declines (Yackee 2006), suggesting that agencies take congressional input into account. A different set of tactics includes strategic behavior by agencies to avoid publishing rules during unfavorable political climates. For example, Potter (2017) finds that agencies “fast-track” or “slow-roll” the publication of final rules, anticipating political oversight from Congress, the White House, or the courts and releasing rules at favorable moments.

The political appointees who lead agencies seek to create durable public policy, and rulemaking is a valuable avenue through which agencies can shape policy for years to come. Thus, they must take calculated risks with respect to the rulemaking process: Where are they willing to compromise, and how can they satisfy members of Congress who may be in a position to revise or repeal a rule in the near future without undermining their policy goals? In short, we should expect agencies to pursue their policy goals while working to minimize the chance of possible threats to the rule in the future. As the case of the Accountability rule suggests, this balance can be difficult to achieve.

Responsiveness to commenter and congressional input

In the context of the Accountability rule, the congressional oversight hearings suggested a fundamental difference between congressional Republicans and Secretary King in their perspective on the department’s role in issuing regulations for ESSA. On several sections, pushback from commenters echoed this criticism. This section analyzes how the department responded to this input, beginning with an analysis of how revisions to the rule reflected commenter feedback.

To identify the changes the department made to the draft rule, I relied on the agency’s description of the “major substantive changes” in the final regulations. The department published this list of changes along with the final rule in the Federal Register on Nov. 29, 2016. This list is not an exhaustive accounting of changes made to the rule, and changes to the regulation that the department omitted from this list are not included in the analysis below. Nonetheless, this list of major substantive changes as identified by the department allows for meaningful assessment of the agency’s revisions to the rule.

Each revision in the department’s list of major substantive changes was coded using the same protocol applied to the comments, using the categories major change and minor change to identify the types of revisions made by the department. If the department deleted a section or substantially revised the section, it was coded as a major change. Revisions to the rule were coded as minor if they provided clarifications, definitions, examples, or minor additions or subtractions consistent with the originally proposed requirement.

It would be impractical, from a coding standpoint, to identify whether the department’s revisions align with the specific recommendations proposed by commenters in each of the 4,527 mentions in these data. However, I offer several analyses below to provide a window into the nature of the department’s responsiveness to commenter input.

First, how frequently did the department make major changes to a section when at least half of the mentions for that section recommended a major change? To examine this question, I consider the sections where at least half of the mentions request a major change, including only those sections that received at least 10 mentions. Forty-six sections meet this description. Of these, the department made a major change on 15 (about 33 percent) and a minor change on 10 (about 22 percent). While this analysis does not indicate the extent to which the department’s revisions aligned with commenter input, it shows that the department revised the rule in a number of cases when there was vocal opposition to the draft proposal.
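A sketch of this filtering step, assuming hypothetical column names (one row per mention, with a "section" identifier and a "code" of "major change", "minor change", or "support"):

```python
import pandas as pd

def contested_sections(mentions: pd.DataFrame) -> list[str]:
    """Sections with at least 10 mentions where at least half of the
    mentions recommended a major change. In the report's data, 46
    sections meet this description."""
    grouped = mentions.groupby("section")["code"]
    n_mentions = grouped.size()
    major_share = grouped.apply(lambda codes: (codes == "major change").mean())
    keep = (n_mentions >= 10) & (major_share >= 0.5)
    return n_mentions[keep].index.tolist()
```

The same logic, with "support" in place of "major change", identifies the heavily supported sections discussed two paragraphs below.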

For the remaining 21 sections in this category (about 46 percent), the department either made no revision or did not make a revision large enough to warrant inclusion in its summary of substantive changes. Notably, the department received at least some comments in support of 16 of these sections; on average, about 24 percent of mentions on these 16 sections supported the original provision. It may be that commenters who expressed support helped the department make a case to retain the original proposal in the face of critical feedback from other commenters.

Not surprisingly, the department was very likely to retain sections of the proposed rule that received substantial support from organization commenters. Specifically, there were 13 sections that received at least 10 mentions where at least half of the mentions expressed support. Of these, the department made a minor change to only one section. The other 12 sections were not revised, or at least any revisions were not substantial enough for the department to include in its summary of substantive changes.

Next, I revisit the five sections in Table 1 that received the most “major change” mentions. While examining how the department responded to specific input from commenters across all sections is impractical, focusing on these five sections provides insight into how the department responded to commenter consensus with respect to these high-profile issues.

Table 3: Revisions to sections that received the most “major change” mentions

Section 200.18.b.4
Primary consensus: Request for state flexibility to decide whether to use a summative rating/state-designed options/additional options.
Revision in final rule: Minor change. Clarification that states could provide additional information, like dashboards, while maintaining the single summative rating requirement.
Alignment with consensus: Final rule does not align with consensus. Although it clarifies that states may provide “data dashboards” and other measures, the final rule maintains the requirement to assign a single summative rating.

Section 200.19.d.1
Primary consensus: Timeline moved back one year.
Revision in final rule: Major change. Timeline moved back one year.
Alignment with consensus: Final rule aligns with consensus.

Section 200.17.a.2
Primary consensus: States should decide minimum n-size/rule should be silent on this issue.
Revision in final rule: No change.
Alignment with consensus: Final rule does not align with consensus; the final rule maintains a minimum n-size of 30.

Section 200.15.b.2
Primary consensus: Opposition to options for intervening in schools with participation rates below 95 percent.
Revision in final rule: Major change. Removes the requirements that a state-designed option be “equally rigorous” to the three options the department suggests and that a state-designed option result in “a similar outcome as other possible outcomes.”
Alignment with consensus: Final rule reflects consensus but does not align completely. Provides increased flexibility with respect to the state-designed option while retaining the requirement to intervene as well as the list of three intervention options.

Section 299.13.c.1
Primary consensus: Opposition to the burden of payment falling on the LEA for foster child transportation; commenters wanted to emphasize the mutual obligation of LEAs and child welfare agencies to collaborate with respect to foster child transportation.
Revision in final rule: Major change. Removes the LEA’s burden of payment in the event of failure to reach an agreement. Emphasizes the mutual obligation to collaborate.
Alignment with consensus: Final rule aligns with consensus.

The department’s response across these five sections varies. With respect to sections 299.13.c.1 and 200.19.d.1, the department’s revision aligns with the primary consensus expressed by commenters. Notably, Democrats as well as Republicans expressed concern over these provisions during congressional hearings, suggesting that perhaps the department was particularly responsive to commenter consensus that was also echoed in bipartisan critiques. With respect to section 200.15.b.2, the department revised the rule to grant states more flexibility in deciding how to incorporate standardized test participation rates into their accountability systems. This revision addressed one dimension of the core complaint expressed in the primary consensus, but the department did not remove the list of options for how states could incorporate the participation rate into accountability systems, a change the consensus favored. While this revision reflected commenter consensus and responsiveness to critiques of overreach, it did not align as closely with the requested change as the revisions to sections 299.13.c.1 and 200.19.d.1. Overall, however, the department’s revisions to these three sections indicate responsiveness to widely shared concerns.

In contrast, the department was less responsive with respect to sections 200.17.a.2 and 200.18.b.4. With respect to section 200.17.a.2, the primary consensus favored allowing states to decide the minimum number of students required for sub-group reporting. However, the department maintained the minimum “n-size” of 30. The department also retained the summative rating requirement in section 200.18.b.4. In addition to commenter consensus requesting additional flexibility on this issue, recall that Sen. Alexander strongly objected to this requirement. While the department did revise this section to clarify that the summative rating requirement did not prohibit states from using data dashboards, the department retained the requirement to assign schools a single rating.

In retaining this controversial section, the department contravened the consensus among commenters and congressional Republicans. This commitment to the original proposal suggests that the department’s decisions on whether and how to revise the rule were informed by its interpretation of the law and, specifically, the department’s role in implementing ESSA and maintaining federal oversight. When this interpretation did not align with commenter consensus, as measured here, or the perspective of congressional Republicans, the department did not necessarily revise the rule in compliance.

In addition, although the majority of commenters who discussed section 200.18.b.4 expressed opposition to the summative rating, the department did hear from organizations who supported the requirement. Of the 250 mentions on this section, 43 (about 17 percent) expressed support for the summative rating requirement. For example, the Leadership Conference on Civil and Human Rights, a national coalition of civil rights organizations, submitted a comment co-signed by an additional 31 organizations that expressed unequivocal support for the summative rating requirement. Support from commenters endorsing the summative rating requirement may have been an important element of the department’s decision to retain this section.

With respect to the broader political environment, between the time the comment period closed on Aug. 1 and Trump’s election on Nov. 8, the department had good reason to believe that maintaining its position on controversial provisions of the regulation would not threaten the durability of the final rule. Hillary Clinton was widely expected to win the presidency, and with a Democrat in the White House, the rule would almost certainly be safe from repeal via the CRA and personnel changes at the department.

With Trump’s unexpected victory, of course, the political environment changed overnight: Republicans now controlled the House, the Senate, and the presidency. Even in this environment, however, the department may have concluded that its willingness to compromise on several high-profile issues would sufficiently satisfy detractors, even as it retained several provisions that were consistent with its interpretation of ESSA and its federal oversight role but inconsistent with the consensus among commenting organizations and congressional Republicans.

Would the rule have escaped repeal via the CRA if the department had adopted more changes aligned with organizations’ input and, perhaps more importantly, congressional Republicans’ preferences? We will never know the answer, of course. Making additional concessions to congressional Republicans (and commenters who agreed with them) may have insulated the rule from repeal, but doing so may have led to a Pyrrhic victory from the department’s perspective: creating a rule that, while durable, omitted central elements of federal oversight that the department viewed as indispensable to implementation of the law.

Responsiveness and rule durability: Lessons from the Accountability rule

The final Accountability and State Plans rule was published on Nov. 29, 2016. In the normal course of events, the rule would have governed how states implemented central components of the Every Student Succeeds Act. However, as discussed above, this rule became one of a handful of rules to be repealed using the Congressional Review Act. Indeed, the day the final rule was published, Education Week reported that “with President-elect Donald Trump set to take office in January, the regulations face an uncertain future.”

Ultimately, the House voted to repeal the regulation on Feb. 7, 2017, the Senate followed suit on March 9, and President Trump signed the repeal on March 27. Notably, some advocates on the right and left alike urged Congress to retain the rule, expressing concern that by repealing it under the CRA, which would preclude the department from issuing a similar regulation, Congress would eliminate an opportunity to clarify the relevant ESSA requirements for state and local education agencies. This argument did not sway Republican members of Congress; only one Republican representative and one Republican senator voted against repeal. Sen. Alexander introduced the legislation for repeal in the Senate, reiterating the concerns he had expressed during oversight hearings: “Here is the problem with this rule that was put out by the U.S. Department of Education: the rule specifically does things or requires states to do things that Congress said in our law fixing No Child Left Behind that the department can’t do.”

This analysis of the Accountability rule offers several insights into the relationship between agency responsiveness and durability. To begin, the department was responsive to feedback it received from organizations via public comments. In the case of several high-profile sections where the department received substantial criticism from a variety of different organizations, the department revised the rule consistent with commenter consensus. This analysis suggests that public comments can play an important role in providing agencies with relevant information and persuading them to modify proposed rules. Thus, it appears that agency responsiveness is at least to some degree a function of procedural requirements initially established in the APA.

However, the department also at times held firm even in light of substantial critical feedback from organization commenters. This case suggests that commenter consensus does not always succeed in convincing an agency to revise a draft rule, even if this consensus is supported by a variety of organizations representing different constituencies. When agencies disagree with commenters (and members of Congress) about how to interpret the relevant federal law, agencies may be less willing to revise the regulation to accommodate commenter consensus. This case thus underscores agencies’ willingness and ability to exercise discretion during the rulemaking process.

At the same time, the repeal of the Accountability rule reflects the power of congressional oversight of the bureaucracy. Republican members of Congress—particularly Sen. Alexander, a leader on education policy—repeatedly expressed their objections to the proposed rule. The department, in turn, did indeed make several major revisions. However, the department remained committed to its interpretation of ESSA. Sen. Alexander saw this approach as federal overreach and a repetition of NCLB-era mistakes, and ultimately, the majority party in Congress prevailed over the department.

Conclusion

This report began with the observation that agencies, and particularly the political appointees that run them, are motivated to influence public policy. Rule durability is thus an important consideration when agencies decide how to respond to input received from public comments and members of Congress during the rulemaking process. While the CRA is not always a relevant political tool, given that it can only be used to repeal a rule within 60 days of Congress receiving the final rule (Carey 2013), its use in this case underscores the significance of the political environment for rule durability. To create a durable Accountability rule, the Department of Education may have had to eliminate what it considered key provisions of the regulation central to the department’s oversight role under ESSA. The broader implication is that credible threats to final rules, either via judicial review or congressional action, may force agencies to choose between substantive revisions to the final rule or the risk of subsequent repeal.

Ultimately, this analysis of the Accountability rule suggests that we might reasonably expect agencies to revise rules based on the input that agencies receive via comments from organizations and congressional testimony (among other types of input), an agency’s interpretation of the federal law, and the agency’s beliefs about whether making revisions (or refusing to do so) will extend (or shorten) the rule’s lifespan. The precise manner in which agencies balance these considerations, and the conditions under which they prioritize one over another, are important questions for future study. Under what conditions is an agency willing to make compromises, such as making revisions to the draft rule, to ensure that the final rule faces minimal opposition? Are agencies more likely to revise the draft rule when bipartisan critiques arise? Faced with credible threats to rule durability from Congress, how do agencies decide whether to revise the rule or pursue an alternative strategy such as manipulating the timing of the rulemaking process (Potter 2017)? The analysis here points to the importance of understanding these and related questions about when and how the relationship between agency responsiveness and rule durability shapes the public policies articulated in final rules.


Acknowledgements

This project would not have been possible without Diana Quintero, who patiently devoted many hours to data coding and analysis and who provided wise counsel every step of the way. I am grateful to Elizabeth Martin for her tireless work coding these comments, to Nora Gordon, Rachel Potter, Molly Reynolds, Jennifer Selin, and Phil Wallach for their insightful feedback, and to Anne Hyslop for sharing her expertise. Thank you to Hana Dai, Caitlin Dermody, Yuhe Gu, Bethany Kirkpatrick, James McIntrye, Sarah Novicoff, Max Rombado, Kim Truong, and Helen Zhang for their excellent research assistance.

Thank you to the Spencer Foundation for their generous support of this research.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.

Appendix

Coding protocol: Identifying organizations among the comments

To identify organizations among the roughly 21,000 comments, we excluded all comments that did not include a self-reported organization name in the “Organization Name” field of the comment form, as well as all comments where the submitter self-identified as one of the following in the “Category” field: other, individual, parent/relative, principal, public or private school, or teacher. We also excluded comments that appeared to be part of mass comment campaigns.
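For concreteness, a minimal sketch of this exclusion step might look like the following. The file name, column names, and category strings are hypothetical stand-ins for the comment export; the duplicate-text heuristic for mass campaigns is an assumption, as the report does not specify the team’s exact method.

```python
# Hedged sketch of the exclusion step described above; names are hypothetical.
import pandas as pd

INDIVIDUAL_CATEGORIES = {
    "Other", "Individual", "Parent/Relative", "Principal",
    "Public or Private School", "Teacher",
}

comments = pd.read_csv("comments.csv")  # columns: organization_name, category, text

# Keep comments that name an organization and do not self-identify as an
# individual-type commenter.
candidates = comments[
    comments["organization_name"].notna()
    & ~comments["category"].isin(INDIVIDUAL_CATEGORIES)
]

# Mass comment campaigns were also excluded; dropping exact duplicate comment
# texts is one rough heuristic for flagging them.
candidates = candidates.drop_duplicates(subset="text")
```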

This sorting process resulted in about 1,000 comments that were potentially submitted by organizations. Because the self-reported information in the “Organization Name” and “Category” fields was often unreliable, the research team reviewed each comment and assessed whether it was submitted by an organization. To qualify, a comment must have been submitted on letterhead that clearly indicated the comment represented the viewpoint of an organization rather than an individual, submitted by a representative clearly acting on behalf of an organization, or submitted by someone in a leadership role at the organization, such as a director or CEO. Comments did not qualify as organizational if they were submitted by staff in a non-leadership position (such as someone in charge of a particular office) unless the comment clearly represented the organization rather than the viewpoint of an individual employee. To verify whether a comment represented an organization, we examined the comment and the self-reported information of the submitter and, when necessary, conducted internet searches of the submitter and/or the organization.

To identify the state of a commenter, we relied on the state as provided by the commenter in the “state or province” field of the comment form. If the commenter did not provide this information, we determined the state from the comment text and, if necessary, an internet search of the organization that submitted the comment.

We then categorized each comment into one of the following organization types:

  • Business
  • Faith-based organization
  • Federal government (includes members of Congress)
  • Advocacy organization
  • Local education agency (school districts, superintendents, assistant superintendents, local school boards, and local school board members)
  • Philanthropy
  • Professional association
  • Research center
  • State government (includes state education agencies, governors, legislators, and other state government organizations)
  • Tribal government or representative
  • Labor union
  • Other

As discussed in the text, if a comment was submitted by multiple organizations, we coded the organization type based on the organization identified in the “Organization Name” field or the organization type listed in the “Category” field. If this information was not provided, we classified the organization type based on the signatories.

With respect to local education agencies (LEAs), we included comments submitted by assistant superintendents as well as superintendents, as they represent local education agencies. However, we did not include comments submitted by staff members within LEAs or representatives of individual schools, including principals and school-level administrators. In the context of state education agencies, we used a similar logic.

This coding strategy was developed to include comments submitted by organizations systematically and objectively, casting a wide net to minimize the chances of omitting such comments. However, given the inconsistency in how commenters self-identified, it is almost certainly the case that some comments that meet our criteria were unintentionally excluded. Despite these omissions, we are confident that this systematic approach produced a reliable dataset of the comments submitted by organizations with respect to this rule.

Coding sections: Major change, minor change, support

Because of personnel changes over the course of this project, the same team of coders did not code all of the comments. First, two independent coders coded the comments in which the commenter filled in the “Organization Name” field on the comment form. These coders helped develop the coding protocol, were trained in its application, and met regularly with the principal investigator to ensure consistency in coding. The coders overlapped on 25 percent of the original 570 comments where submitters wrote in an organization name, and the agreement rates indicate that they applied the categories consistently. The agreement rate between the two coders was 76 percent (kappa score = 0.64). This agreement rate includes instances of disagreement when both coders identified a “mention” but entered different codes. It also includes instances of disagreement when only one of the two coders identified a “mention” for coding (thus one coder had no code and the other had support/major changes/minor changes).

Thus, this measure of interrater agreement is conservative, as it includes disagreements that resulted when one coder identified the comment as mentioning a particular section of the regulation and another did not. These disagreements do not reflect differences in application of the major, minor, or support labels, but rather, the difficulty in identifying whether a commenter was mentioning a specific section of the regulation. To better understand the interrater agreement rate with respect to applying the major, minor, and support codes, I calculate a kappa score that omits instances where the disagreement resulted because one coder identified a “mention” and another did not. Including only sections where both coders identified a “mention” and assigned a code, the agreement rate was 86 percent (kappa score = 0.77).
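The two calculations can be illustrated with a short sketch. It assumes a hypothetical table of double-coded mentions, one row per candidate mention, with a missing value where a coder identified no mention there; the file, column names, and use of scikit-learn are illustrative assumptions rather than the project’s actual tooling.

```python
# Sketch of the conservative and restricted agreement calculations described
# above. File and column names are hypothetical.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

pairs = pd.read_csv("double_coded.csv")  # columns: coder1, coder2

# Conservative version: treat "no mention" as its own category, so that
# disagreement over whether a mention exists counts against agreement.
filled = pairs.fillna("no mention")
print((filled["coder1"] == filled["coder2"]).mean())          # raw agreement
print(cohen_kappa_score(filled["coder1"], filled["coder2"]))  # kappa

# Restricted version: keep only mentions both coders identified, isolating
# agreement on the support/major/minor labels themselves.
both = pairs.dropna()
print((both["coder1"] == both["coder2"]).mean())
print(cohen_kappa_score(both["coder1"], both["coder2"]))
```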

The agreement rate between coders is very similar in the sample that only includes the 327 comments we identified as organizations from the original 570 comments where submitters self-identified as organizations in the “Organization Name” field (75 percent with a kappa score = 0.62 for all mentions; 85 percent with a kappa score = 0.75 for sections where both coders identified a mention).

Although these rates of agreement were sufficiently high, the principal investigator and one of the original coders reviewed each mention to improve the consistency of the coding. After the initial round of coding, one of the original coders and the principal investigator were randomly assigned sections (e.g., Section 200.18.b.4) and reviewed each of the relevant mentions to ensure that the section numbers were identified accurately and that the support/major/minor categories were applied consistently.

After the initial round of coding, one of the original coders left the organization. Subsequently, it became clear that a number of comments from organizations had a blank “Organization Name” but indicated they represented an organization in the “Category” field. We thus added to the dataset any comments where the commenter identified an organization type within the “Category” field using the criteria for identifying an organization described above. The principal investigator coded these comments using the same coding criteria as discussed above, consulting one of the original coders when a comment was unclear.

References

“Approved ESSA Plans: Explainer and Key Takeaways From Each State.” Education Week, April 21, 2017. https://www.edweek.org/ew/section/multimedia/key-takeaways-state-essa-plans.html.

Balla, Steven J., and John R. Wright. “Interest Groups, Advisory Committees, and Congressional Control of the Bureaucracy.” American Journal of Political Science (2001): 799-812.

Balla, Steven, Alexander R. Beck, William C. Cubbison, and Aryamala Prasad. “Where’s the Spam? Mass Comment Campaigns in Agency Rulemaking.” Working Paper, Regulatory Studies Center, George Washington University, April 2018.

Baumgartner, Frank R., and Bryan D. Jones. Agendas and Instability in American Politics. University of Chicago Press, 1993.

Bolton, Alexander, Rachel Augustine Potter, and Sharece Thrower. “Organizational Capacity, Regulatory Review, and the Limits of Political Control.” The Journal of Law, Economics, and Organization 32, no. 2 (2015): 242-271.

Carey, Maeve P. “The Federal Rulemaking Process: An Overview.” Congressional Research Service, 2013.

Carey, Maeve P., Alissa M. Dolan, and Christopher M. Davis. “The Congressional Review Act: Frequently Asked Questions.” Congressional Research Service, 2016.

Cheit, Ross E. Setting Safety Standards: Regulation in the Public and Private Sectors. University of California Press, 1990.

Cuéllar, Mariano-Florentino. “Rethinking Regulatory Democracy.” Administrative Law Review 57, no. 2 (2005): 411-500.

“Democrats and Republicans Agree: It’s Time To Rewrite No Child Left Behind.” HuffPost, January 13, 2015. https://www.huffingtonpost.com/2015/01/13/rewrite-no-child-left-behind_n_6464338.html.

Dudley, Susan E. “Election Could Wake the Sleeping Congressional Review Act Giant.” Forbes Magazine. Last modified November 14, 2016. https://www.forbes.com/sites/susandudley/2016/11/14/election-could-wake-the-sleeping-cra-giant/#21a221fb6303.

Dudley, Susan E. “Opportunities for Stakeholder Participation in US Regulation.” Regulatory Studies Center. Last modified September 23, 2014. https://regulatorystudies.columbian.gwu.edu/opportunities-stakeholder-participation-us-regulation.

Editorial Projects in Education Research Center. “Adequate Yearly Progress.” Editorial. Education Week, July 18, 2004. https://www.edweek.org/ew/issues/adequate-yearly-progress/index.html?r=580034332.

Elliot, E. Donald. “Reinventing Rulemaking.” Duke Law Journal 41 (1992): 1490-96.

Epstein, David, and Sharyn O’Halloran. “A Theory of Strategic Oversight: Congress, Lobbyists, and the Bureaucracy.” Journal of Law, Economics, and Organization 11 (1995): 227.

Elementary and Secondary Education Act of 1965, as Amended by the Every Student Succeeds Act-Accountability and State Plans, § 200.34(e)(2) (2016).

Every Student Succeeds Act, Public Law 114-95, U.S. Statutes at Large 129 (2015): 1802-2192.

Executive Office of the President. “Regulations and the Rulemaking Process Frequently Asked Questions.” Office of Information and Regulatory Affairs. https://www.federalregister.gov/documents/2016/05/31/2016-12451/elementary-and-secondary-education-act-of-1965-as-amended-by-the-every-student-succeeds.

Furlong, Scott R. “Exploring Interest Group Participation in Executive Policymaking.” In Herrnson, P. S., Shaiko, R. G., & Wilcox, C. (Eds.) The Interest Group Connection: Electioneering, Lobbying, and Policymaking in Washington (pp. 282-297). Washington D.C.: Chatham House Pub, 2005.

Furlong, Scott R., and Cornelius M. Kerwin. “Interest Group Participation in Rule Making: A Decade of Change.” Journal of Public Administration Research and Theory 15, no. 3 (July 2005): 353-70. http://www.jstor.org/stable/3525667.

Garvey, Todd. “A Brief Overview of Rulemaking and Judicial Review.” Congressional Research Service, 2017.

Golden, Marissa Martino. “Interest Groups in the Rule-Making Process: Who Participates? Whose Voices Get Heard?” Journal of Public Administration Research and Theory 8, no. 2 (April 1998): 245-70. http://www.jstor.org/stable/1181558.

Hall, Richard L., and Alan V. Deardorff. “Lobbying as Legislative Subsidy.” American Political Science Review 100, no. 1 (2006): 69-84.

Hall, Richard L., and Kristina C. Miler. “What Happens after the Alarm? Interest Group Subsidies to Legislative Overseers.” The Journal of Politics 70, no. 4 (2008): 990-1005.

“H.J. Res. 57.” Govtrack.us. https://www.govtrack.us/congress/votes/115-2017/h84.

Iasevoli, Brenda. “Trump Signs Bill Scrapping Teacher-Prep Rules.” Education Week, March 28, 2017. http://blogs.edweek.org/edweek/teacherbeat/2017/03/trump_signs_bill_scrapping_tea.html.

Kerwin, Cornelius M., and Scott R. Furlong. Rulemaking: How Government Agencies Write Law and Make Policy. 4th ed. CQ Press, 2011.

Klein, Alyson. “ESSA Timeline, School Ratings, Standards Issues Dominate Senate Hearing.” Education Week. Last modified July 14, 2016. http://blogs.edweek.org/edweek/campaign-k-12/2016/07/essa_timeline_school_ratings_s.html?qs=essa+hearing.

Klein, Alyson. “Trump Education Dept. Releases New ESSA Guidelines.” Education Week. Last modified March 13, 2017. http://blogs.edweek.org/edweek/campaign-k-12/2017/03/trump_education_dept_releases_new_essa_guidelines.html.

Kollman, Ken. Outside Lobbying: Public Opinion and Interest Group Strategies. Princeton University Press, 1998.

Lewis, David. Presidents and the Politics of Agency Design: Political Insulation in the United States Government Bureaucracy, 1946-1997. Stanford University Press, 2004.

Lipton, Eric, and Jasmine C. Lee. “Which Obama-Era Rules Are Being Reversed in the Trump Era.” New York Times. Last modified May 18, 2017. https://www.nytimes.com/interactive/2017/05/01/us/politics/trump-obama-regulations-reversed.html.

Looney, Adam. “How to Effectively Comment on Regulations.” Center on Regulation and Markets at the Brookings Institution, August 2018.

McCubbins, Mathew D., and Thomas Schwartz. “Congressional Oversight Overlooked: Police Patrols Versus Fire Alarms.” American Journal of Political Science (1984): 165-179.

McKay, Amy, and Susan Webb Yackee. “Interest Group Competition on Federal Agency Rules.” American Politics Research 35, no. 3 (May 2007): 336-57.

Mendelson, Nina A. “Rulemaking, Democracy, and Torrents of E-mail.” George Washington Law Review 79 (2011): 1343.

Naughton, Keith, Celeste Schmid, Susan Webb Yackee, and Xueyong Zhan. “Understanding Commenter Influence during Agency Rule Development.” Journal of Policy Analysis and Management 28, no. 2 (Spring 2009): 258-77. http://www.jstor.org/stable/29739013.

“Number of Public Elementary and Secondary Education Agencies, by Type of Agency and State or Jurisdiction: 2014-15 and 2015-16.” Chart. In National Center for Education Statistics.

Obama, Barack. “Remarks by the President at Every Student Succeeds Act Signing Ceremony.” The Obama White House Archives. Last modified December 10, 2015. https://obamawhitehouse.archives.gov/the-press-office/2015/12/10/remarks-president-every-student-succeeds-act-signing-ceremony.

Office of the Federal Register. A Guide to the Rulemaking Process. https://www.federalregister.gov/uploads/2011/01/the_rulemaking_process.pdf.

Potter, Rachel Augustine. “Slow-Rolling, Fast-Tracking, and the Pace of Bureaucratic Decisions in Rulemaking.” The Journal of Politics 79, no. 3 (2017): 841-855.

“Proposed Rule: Elementary and Secondary Education Act of 1965, as Amended by the Every Student Succeeds Act: Accountability and State Plans.” Last modified May 31, 2016. https://www.regulations.gov/document?D=ED-2016-OESE-0032-0001.

Ujifusa, Andrew. “Education Secretary Spars with Senators over School Ratings, ESSA Timeline.” Education Week. Last modified June 29, 2016. http://blogs.edweek.org/edweek/campaign-k-12/2016/06/senate_essa_oversight_hearing_school_ratings_timeline.html?qs=essa+hearing.

U.S. Congress. House of Representatives. Committee on Education and the Workforce. Next Steps for K-12 Education: Implementing the Promise to Restore State and Local Control. 114th Cong., 2nd sess., February 10, 2016.

U.S. Congress. House of Representatives. Committee on Education and the Workforce. Next Steps for K-12 Education: Upholding the Letter and Intent of the Every Student Succeeds Act. 114th Cong., 2nd sess., February 25, 2016.

U.S. Congress. Senate. Committee on Health, Education, Labor, and Pensions. ESSA Implementation in States and School Districts: Perspectives from Education Leaders. 114th Cong., 2nd sess., February 23, 2016.

U.S. Congress. Senate. Committee on Health, Education, Labor, and Pensions. ESSA Implementation: Update from the U.S. Secretary of Education on Proposed Regulations. 114th Cong., 2nd sess., June 29, 2016.

U.S. Congress. Senate. Committee on Health, Education, Labor, and Pensions. ESSA Implementation: Perspectives from Education Stakeholders on Proposed Regulations. 114th Cong., 2nd sess., July 14, 2016.

U.S. Department of Education. “Final Regulations. Elementary and Secondary Education Act of 1965, As Amended By the Every Student Succeeds Act–Accountability and State Plans.” RIN 1810-AB27, 2016.

U.S. Department of Education. “ESSA Consolidated State Plans.” https://www2.ed.gov/admins/lead/account/stateplan17/index.html. Accessed October 17, 2018.

Wagner, Wendy, Katherine Barnes, and Lisa Peters. “Rulemaking in the Shade: An Empirical Study of EPA’s Air Toxic Emission Standards.” Administrative Law Review (2011): 99-158.

West, William F. “Formal Procedures, Informal Processes, Accountability, and Responsiveness in Bureaucratic Policy Making: An Institutional Policy Analysis.” Public Administration Review 64, no. 1 (January/February 2004): 66-80. http://www.jstor.org/stable/3542627.

West, William F. “Inside the Black Box: The Development of Proposed Rules and the Limits of Procedural Controls.” Administration & Society 41, no. 5 (September 2009): 576-99.

Yackee, Jason Webb, and Susan Webb Yackee. “Administrative Procedures and Bureaucratic Performance: Is Federal Rulemaking ‘Ossified’?” Journal of Public Administration Research and Theory 20, no. 2 (April 2010): 261-82. http://www.jstor.org/stable/40732511.

Yackee, Susan Webb. “Assessing Inter-Institutional Attention to and Influence on Government Regulations.” British Journal of Political Science 36, no. 4 (2006): 723-44. https://doi.org/10.1017/S000712340600038X.

Yackee, Susan Webb. “The Politics of Ex Parte Lobbying: Pre-Proposal Agenda Building and Blocking during Agency Rulemaking.” Journal of Public Administration Research and Theory 22, no. 2 (April 2012): 373-93. http://www.jstor.org/stable/23250886.

Yackee, Susan Webb. “Sweet-talking the Fourth Branch: The Influence of Interest Group Comments on Federal Agency Rulemaking.” Journal of Public Administration Research and Theory 16, no. 1 (January 2006): 103-24.


Footnotes
    1. I use the term “agencies” to refer to agencies and departments in the executive branch, and I use the terms rules and regulations interchangeably.
    2. Agencies also receive feedback from other avenues during the rulemaking process. For example, under Executive Order 12866, interested parties may request a meeting with the Office of Information and Regulatory Affairs to discuss regulatory actions (https://www.reginfo.gov/public/jsp/Utilities/faq.jsp). This report focuses on input received via public comments and during congressional oversight hearings in the context of the Accountability rule.
    3. Agencies may also write new rules in response to another “initiating event,” such as a recommendation from an outside body or a catastrophic event (Carey 2013, p. 2).
    4. While analysis of the pre-NPRM stage of the rulemaking process is outside the scope of this report, it is worth noting research that suggests that input received prior to publication of the draft rule is consequential for the content of draft and final rules (West 2004; West 2009; Yackee 2012; Naughton et al. 2009; Wagner et al. 2011). Before the draft rule is published, “off the record” lobbying that occurs during the pre-proposal stage can shape the content of draft rules (Yackee 2012). West (2004) documents that before the notice and comment process begins, agencies “[devote] considerable time and effort to the critical tasks of defining an issue, considering alternatives, and selecting and justifying a course of action” (p. 74). 
    5. According to the Office of the Federal Register, when deciding how to revise a rule, “The agency must base its reasoning and conclusions on the rulemaking record, consisting of the comments, scientific data, expert opinions, and facts accumulated during the pre‐rule and proposed rule stages” (p. 6).
    6. Interview with the author.
    7. See Potter (2017), Bolton et al. (2015), and Yackee and Yackee (2010) for analyses of how strategic delay on the part of agencies, organizational capacity, and procedural constraints, respectively, influence the length of time it takes for rules to be finalized. 
    8. Repeals proposed under the authority of the CRA are not subject to filibuster in the Senate (Carey et al. 2016), meaning that a simple majority is required for approval in that chamber rather than the 60 votes needed to override a filibuster.
    9. The Congressional Research Service discusses exceptions to this process: “Figure 1 illustrates the basic process that most federal agencies are generally required to follow in writing or revising a significant rule. However, some aspects of Figure 1 do not apply to all rules. For example, as discussed later in this report, an agency may, in certain circumstances, issue a final rule without issuing a notice of proposed rulemaking, thereby skipping several steps depicted in the figure. On the other hand, some rules may be published for public comment more than once. Also, independent regulatory agencies are not required to submit their rules to the Office of Management and Budget’s (OMB) Office of Information and Regulatory Affairs (OIRA) for review, and no agency is required to do so for rules that are not “significant.” (Carey 2013, p. 1).
    10. https://www.huffingtonpost.com/2015/01/13/rewrite-no-child-left-behind_n_6464338.html
    11. https://www.edweek.org/ew/issues/adequate-yearly-progress/index.html
    12. https://obamawhitehouse.archives.gov/the-press-office/2015/12/10/remarks-president-every-student-succeeds-act-signing-ceremony
    13. https://www.federalregister.gov/documents/2016/11/29/2016-27985/elementary-and-secondary-education-act-of-1965-as-amended-by-the-every-student-succeeds
    14. https://www.edweek.org/ew/section/multimedia/key-takeaways-state-essa-plans.html
    15. See the Department of Education’s template for state accountability plans here: https://www2.ed.gov/admins/lead/account/stateplan17/index.html.
    16. See the draft rule and NPRM here: https://www.regulations.gov/document?D=ED-2016-OESE-0032-0001. See the final rule here: https://www.regulations.gov/document?D=ED-2016-OESE-0032-21017. Please note that the department also published separate regulations related to other aspects of ESSA, including the use of Title I funds and assessments, that are not discussed in this report.
    17. Commenters could submit their feedback via the online portal (www.regulations.gov), postal mail, commercial delivery, or hand delivery.
    18. The department summarized the major provisions of the rule as follows (see https://www.regulations.gov/document?D=ED-2016-OESE-0032-0001):
      • Establish requirements for accountability systems under section 1111(c) and (d) of the ESEA, as amended by the ESSA, including requirements regarding the indicators used to annually meaningfully differentiate all public schools, the identification of schools for comprehensive or targeted support and improvement, and the development and implementation of improvement plans, including evidence-based interventions, in schools that are so identified;
      • Establish requirements for State and LEA report cards under section 1111(h) of the ESEA, as amended by the ESSA, including requirements regarding the timeliness and format of such report cards, as well as requirements that clarify report card elements that were not required under the ESEA, as amended by the NCLB; and
      • Establish requirements for consolidated State plans under section 8302 of the ESEA, as amended by the ESSA, including requirements for the format of such plans, the timing of submission of such plans, and the content to be included in such plans.
    22. http://blogs.edweek.org/edweek/campaign-k-12/2016/11/ed_dept_releases_final_account.html
    23. In the House, the resolution passed 234-190. Not a single Democrat voted in favor, and only one Republican voted against the resolution (Patrick Meehan, Pa.). Four Democrats and four Republicans did not vote. In the Senate, the resolution passed 50-49. No Democrats voted in favor of passage, and one Republican (Rob Portman, Ohio) and two independents (Bernie Sanders, Vt., and Angus King, Maine) voted against passage. One Republican did not vote.
    24. http://blogs.edweek.org/edweek/teacherbeat/2017/03/trump_signs_bill_scrapping_tea.html
    25. Repeal under the CRA prevents an agency from publishing another “substantially similar” rule (Carey 2013, p. 16), meaning that the Department of Education would not be able to issue another rule interpreting the accountability and state plans provisions of ESSA.
    26. For analysis of mass comment campaigns, see Balla et al. (2018).
    27. Moreover, evidence suggests that agencies more heavily weigh sophisticated comments, which in turn suggests that organizations with the resources necessary to provide such input are typically more influential than individuals. For example, Mendelson (2011) notes that “a preliminary look suggests that agencies appear to treat technically and scientifically oriented comments far more seriously than what we might call value-laden or policy-focused comments” (p. 1359). Cuéllar (2005) similarly finds that agencies are more responsive to “sophisticated” comments. In turn, persuasive comments generally require a non-trivial investment of resources (Mendelson 2011, see discussion p. 1357-8). On this point, for example, Furlong (2005) argues that businesses and trade associations often have an advantage due to their “greater budgetary and personnel resources” as well as better “access to information” (p. 288). To the extent that organizations are better equipped than individuals to provide sophisticated or technical feedback, comments from organizations may be more influential in agency decision-making, and thus more relevant for the analysis here. Indeed, Mendelson (2011) observes that “In general, rulemaking documents only occasionally acknowledge the number of lay comments and the sentiments they express; they very rarely appear to give them any significant weight” (p. 1363-4).
    28. This analysis excludes comments identified as part of a mass comment campaign, comments submitted by individuals acting in a private capacity, comments submitted by staff members of an organization who were not clearly acting on behalf of an organization, and comments that contained general feedback that the research team could not identify as specific to a particular section of the proposed regulation. Please see the Appendix for further description of how the comments were identified for inclusion in this analysis.
    29. If a coalition of organizations submitted one comment, the organization type is coded based on the organization identified in the “Organization Name” field or the organization type listed in the “Category” field. If this information was not provided, the organization type was determined based on the signatories. Some organizations submitted more than one comment as a member of several coalitions.
    30. Although LEAs constituted a vocal presence among comments submitted by organizations, only a small fraction of LEAs nationwide submitted comments. There were about 13,500 school districts in the 2015-16 school year (see Table 214.30 from NCES), and 100 LEAs submitted comments.
    31. Based on the research team’s observations while coding the comments, other organizations appeared to use this strategy as well. In addition, it is noteworthy that 29 percent of comments from professional associations were submitted by state affiliates of national organizations. National organizations may have provided content or support to their state affiliates, which may have increased participation among state affiliates and/or generated consensus among state affiliates of the same organization.
    32. On the other hand, it may be that agencies view organizations’ comments that rely on templates or language distributed by a parent organization differently from other comments, perhaps discounting the volume of this type of comment. However, it is not clear whether this is the case. This is an area ripe for further research. While Mendelson (2011) notes that agencies may discount large volumes of form letters (see the discussion on p. 1363), it is unclear whether comments from organizations with expertise on the matter who use a template provided by another organization are discounted.
    33. Some commenters referenced the section number of the regulation that corresponded with their feedback, but many did not. To identify the regulation section that each mention referred to, the research team compared the language of the comment to the language of the regulation.
    34. If a section addressed the regulation at the fifth or sixth paragraph level or below, we do not treat it as a separate section if the commenter also addressed a section in the relevant fourth level. For example, if a commenter made a suggestion about Section 200.18.b.4 and 200.18.b.4.i, we count these as one mention at level 200.18.b.4. In addition, if a commenter made multiple points about one section, we coded that section using the preference for the strongest type of change (i.e. a section in which a commenter suggested a minor and a major change was coded as a major change; a section in which a commenter expressed support but also recommended a minor change was coded as a minor change).
    35. Please see the Appendix for details on this coding process.
    36. The “Other” category includes businesses, philanthropies, the federal government, tribal governments, and faith-based organizations, each of which contributed fewer than 10 percent of major change mentions on each of these sections.
    37. The majority of commenters in consensus 2 also expressed opposition to the inclusion of specific options listed by the department, consistent with the recommendations made by those in consensus 1.
    38. See the comments of Sens. Murray and Alexander during the June 29 hearing.
    39. See Sen. Franken’s comment during the June 29 hearing and Rep. Curbelo’s comments during the June 23 hearing.
    40. See, for example, Secretary King’s prepared statement for the June 29 hearing.
    41. See, for example, a critique of the department’s regulatory approach from Sen. Sheldon Whitehouse (D-R.I.) during the July 14 hearing: “[W]hat would be the best way for this committee, without seeking to add burdens or encouraging the U.S. Department of Education to add burdens, a tendency that seems rather strong in that department, to get a sense of the resurgence of curriculum in these schools that have been deprived of so much curriculum?”
    42. With a sample of 11 “typical” rules across three agencies, Golden (1998) finds that where there is more consensus among commenters, the agency is more likely to be responsive. With a sample of 40 “everyday” rules across four agencies, Yackee (2006) finds that agencies increase or decrease the amount of regulation in the final rule in response to comments. Similar to Golden (1998), she finds that more consensus among commenters is more likely to produce the desired change from the agency.
    43. As defined by Yackee (2006), “everyday” rules receive at least two but fewer than 200 public comments. These are rules that pertain to the “everyday business of the bureaucracy” (Yackee 2006, p. 118) in contrast to high profile rules that generate more public input.
    44. Interview with the author.
    45. In addition to the specific sections discussed in detail here, the department was responsive to other concerns raised by Republican members of Congress. For example, the department eliminated a requirement relating to state content standards (section 299.16.a.1) that Sen. Alexander expressed opposition to during the July 14 hearing.
    46. https://www.regulations.gov/document?D=ED-2016-OESE-0032-21017