Who is evaluating US schools?

Students arrive for class at Mahnomen Elementary School in Mahnomen, Minnesota

Last week, GreatSchools unveiled its new approach to rating the country’s public schools—and, with it, released an updated set of color-coded, 1-10 school ratings. Millions of American parents are sure to see these new ratings; GreatSchools reports more than 55 million unique visitors to its website last year, and its school ratings are now embedded in popular real estate websites like Zillow, Trulia, and Redfin.

While the change to GreatSchools’ approach to rating schools is notable in itself—an improvement, in my view, on what preceded it—it also provides an opportunity to step back and consider the influence of third-party (nongovernmental) school ratings. The higher education community has fretted for years about the influence of the annual US News and World Report “Best Colleges” rankings. Ratings from national, third-party organizations in K-12, however, have proliferated much more quietly, with much less discussion of their implications. K-12 ratings seem less likely than college rankings to distort schools’ behaviors, since schools attend more to state accountability measures than to third-party ratings. But many of the same concerns apply: the reductive nature of summative ratings and the limited data available for evaluation, among others. And the potential impact is remarkable, with millions of families judging school quality each year as they choose schools and neighborhoods.

An easy way to get a sense of the reach of nationwide, third-party school ratings is to Google one of your local public schools. You’ll almost certainly find a host of companies and nonprofits eager to tell you what they think about that school. Among the first 10 search results I see for my local public middle school:

  • a GreatSchools page with a color-coded, 1-10 GreatSchools Rating (“a multi-measured reflection of school quality”) that sits atop similar ratings of the school’s academics, equity, and other characteristics;
  • a SchoolDigger page that gives the school’s SchoolDigger Rank among my state’s 314 middle schools, with an accompanying star rating;
  • a Niche page with a color-coded, A-F Overall Niche Grade, along with related A-F grades for academics, teachers, and diversity;
  • a StartClass page with a 1-10 StartClass rating inside of a color-coded apple; and
  • two real estate sites showing the 1-10 GreatSchools ranking, along with a map of the school’s attendance zone.

All of these search results appear before the first state or district website showing school information or performance data. It makes one wonder what these third-party organizations are, what kind of impact they have, how they could possibly make reasonable evaluations of virtually every public school in the country, and whether—based on their interests—reasonable evaluation is their goal.

GreatSchools is the best known of the group, a nonprofit organization that is partially funded by philanthropic foundations active in education and, according to a stunning claim on its website, runs a website that was visited by “over half of American families with school-age children” last year. (Full disclosure: I have partnered with GreatSchools staff for research in the past and found them—without exception—thoughtful, smart, and well-intentioned.)

SchoolDigger describes itself as a service of Claarware LLC, a “one-person software development shop.” Niche is a Pittsburgh-based company that grew out of College Prowler as it broadened its focus to include information about neighborhoods and K-12 schools. Finally, StartClass comes from a technology company, Graphiq, which provides a “network of sites to research and compare thousands of products and services.”

These sites differ in their data sources, methodologies, and the characteristics of schools they purport to evaluate. They also differ in their transparency, leaving unanswered questions about how several of the sites calculate their ratings. This lack of transparency is neither surprising nor, perhaps, even inappropriate, given that some of these ratings are the core products of for-profit companies. What is clear, however, is that there is considerable demand for ratings as families make judgments about schools.

This adds yet more complexity to one of the most difficult, contested questions that has confronted state policymakers working on ESSA plans: whether to create ratings for individual schools. (See EdWeek and The 74 for nice overviews.) I have argued that summative school ratings are unavoidably reductive and flawed, incapable of doing justice to the rich, multifaceted work of schools—and yet we still have to weigh their problems against the challenges that arise in their absence. In this case, if states decline to develop school ratings through a democratic process, third-party providers will happily fill that void with the only ratings that parents might find.

We should worry whether the purveyors of these nationwide ratings have the expertise, perspective, and incentive to produce sensible ratings. We should also recognize the vulnerability in leaving space for groups with particular interests—e.g., advancing the cause of one type of school (district, charter, or private)—to engineer ratings that quietly give preferential treatment to certain schools. To be fair, these issues have arisen in state policy processes as well (twice in Indiana alone), but the policy process ensures at least some amount of public input and transparency.

There is no great way to do school ratings, but there are better and worse ways to do them. Parents look for help as they form opinions of their children’s schools and the schools their children might attend. We should be aware of where they’re looking, who is providing answers, and what it means for US schools.