We have SDGs now, but how do we measure them?

Does the world agree on the definition of a mountain? Is internet access a “basic service” for all people? Should same-sized cities in China and Jamaica be held to the same standards?

Last week in Bangkok, 28 statistics experts from around the world discussed these questions in order to figure out how the new Sustainable Development Goals can be measured. The 17 goals include things like working toward inclusive and equitable education for all (goal 4). Of course, declaring such an admirable goal is the easy part; it’s much harder to figure out how to measure such things, and thus know if progress is being made.

In Bangkok, members of the so-called Inter-Agency and Expert Group (IAEG-SDGs), drawn from national governments around the world, worked alongside others in statistics and global development on a collective task: to identify quantifiable, numerical indicators for each of the 17 SDGs ratified last September by the United Nations member states, and for each of the 169 sub-targets that fall under those goals. The deadline is March 2016.

Even before Bangkok, several drafts of proposed indicators had been shared, so that going into the meeting there were already over 200 proposed indicators with varying levels of agreement. Indicators were categorized as green, meaning there was general agreement and only slight changes were needed; yellow, meaning there were some unresolved issues; or gray, meaning more in-depth discussion and methodology development were needed. Only the yellow indicators were to be discussed in Bangkok, to be turned either green or gray by the expert group on a consensus basis. Sounds easy, right?

First on the agenda: Education

SDG 4 was first up on the agenda, having only one yellow indicator up for discussion: target 4.2, which reads, “By 2030, ensure that all girls and boys have access to quality early childhood development, care and pre-primary education so that they are ready for primary education.” The group of experts rejected the initially proposed indicator—“percentage of children under 5 years of age who are developmentally on-track in health, learning, and psychosocial well-being”—as difficult to define and measure. Instead, an indicator on participation in early learning was suggested and endorsed by many countries. But then UNESCO-UIS, UNICEF, and the OECD jumped in, stating that the target is indeed measurable (via initiatives such as MICS and other upcoming efforts). It was thus decided to keep the “developmentally on-track” indicator but also add the one on participation. Those of us concerned that a participation indicator alone could lead to the scaling-up of low-quality education programs (as we sometimes saw with the earlier Millennium Development Goal 2) can now rest a little easier. Only for now, though: the green indicators still have one more round on the chopping block to survive before March.

The only other SDG 4 indicator not greenlighted was indicator 4.7 on “Education for Sustainable Development,” which was labeled gray and not discussed at this meeting. Country representatives felt that further work was needed to make the indicator meaningful and relevant. Also of interest to the global education community: indicators 12.8 and 13.3 (addressing educational curricula covering sustainable development and lifestyles, and climate change, respectively) were largely overlooked by education stakeholders. Though they address education curricula, they fall outside SDG 4; they were moved to the gray category, and no clear proposal for global measurement was offered.

The challenge of maintaining an apolitical, efficient process

We were told that this was a technical meeting, not a political one, which is purportedly why the statistical experts were on the main stage and the meeting was held on the opposite side of the planet from their U.N.-mission counterparts. However, this did not prevent politics from sneaking in. The representative from the Holy See was silent for the entire meeting except to warn member states of dire consequences if indicator 5.6 (a green one, and thus not up for discussion) on the “Proportion of women (aged 15-49) who make their own sexual and reproductive decisions” were adopted. The representative from the United States was obliged to point out that indicators referencing international conventions, such as the Convention on Biological Diversity, would not be applicable in countries like the U.S. that have not ratified them.

Even with a limited number of indicators up for debate, the agenda proved too ambitious for three days (including two 12-plus-hour days). On the first day, a single indicator was discussed for an hour and a half; by day three, the final goals (including critical indicators on human rights and peaceful societies) were reduced to six-minute discussions with no interventions allowed from U.N. agencies or observers. There was a sense that the process was inconsistent: some indicators were greenlighted despite ambiguity, while others with the same degree of ambiguity were turned gray. Participants continuously questioned the process during the indicator discussions, and rules seemed to be made on the spot, leading one statistician to invoke pop culture on the many “shades of gray” these indicators were beginning to take on.

After several pleas from observer countries and civil society for a more open process to decide on the final framework, expert group co-chairs announced that an online portal for giving feedback on the green indicators, previously planned to be closed to non-members, would be open for three days sometime between now and November 20.

Finalizing and implementing the indicators

Opening up the platform represents a positive move toward inclusiveness and a realization that measuring (and more importantly, implementing) these SDGs is going to require new types of collaboration. Many of the issues debated simply cannot be determined in a U.N. meeting room, whether it is in Bangkok or New York. The world needs a chance to “try on” these indicators, provide feedback as to how they are working, and use that information to keep refining them over time.

Let’s face it—national statistical offices are not exactly hotbeds of innovation. The data revolution for SDGs is unlikely to be incubated within the halls of traditional statistical agencies, but rather by practitioners, civil society, the private sector, and even the beneficiaries themselves. If this agenda is to truly be transformative, we can’t be bound only by what is measurable in 2015.

There needs to be a clear mechanism for providing space to those who are innovating on measurement, and a review process on the indicators that includes experiences and perceptions from around the globe. One possibility is a platform, organized by indicator, where governments, researchers, civil society, and other experts can share experiences and tools for measuring the SDGs, including in the many areas where there are few or no existing tools. Over the next 15 years, as new ways of measuring the goals are tried out, these learnings could inform the formal review process led by the United Nations.

Finally, it is important to keep focus on those who benefit from the SDGs the most: the children, the poor, and the marginalized. Unless these indicators are used to significantly improve their lives and opportunities, it won’t matter how we define a mountain, a basic service, or a city—we will have failed in this ambitious agenda any way you calculate it.