Why did people stop responding to federal economic surveys? What can be done?

“For many household surveys in the United States, response rates have been steadily declining for at least the past two decades.” That sentence comes from a 2013 National Academies of Sciences report. It remains true today, and not just in the United States: the same trend holds across wealthy countries. Suffering from low response rates and rising costs, surveys are often described as a 20th-century technology that needs to be replaced.

But surveys capture things we cannot get from administrative data, as Census Bureau Deputy Director Ron Jarmin noted at a recent event. Administrative data can tell us whether a person is employed and what they earn, but only surveys can determine (for example) whether someone without a job was looking for work, which is key for measuring the unemployment rate.

Declining survey participation, both in the U.S. and abroad, is often cited as a major challenge for the statistical system and as a reason to eliminate surveys in favor of other measurement strategies. But rather than discard this important data source, researchers should seek to understand how response rates affect the statistics we care about and why response rates are falling in the first place.

The key issues for how survey participation affects economic statistics are whether lower response rates reduce statistical precision and whether they introduce statistical bias. Lower response rates mean smaller samples and thus less precision, but the statistics may well remain unbiased so long as differences in survey participation are not correlated with the economic outcome being measured. Statistical bias is the larger concern because it would leave policymakers reading economic signposts that point in the wrong direction.

There are many plausible reasons why survey response rates are declining. Among them are the difficulty of contacting the individuals who are (randomly) chosen for the survey sample, respondent concerns about the time burden of completing a survey, and respondent fears about the privacy of their personal data. These difficulties are not unique to government economic surveys, and although they may be getting worse, the distinctive role economic surveys play means we need to move forward using tried-and-true methods for improving survey participation.

Trends in survey response rates: How bad is it?

In recent years, government economic surveys have hit historically low response rates. In November 2025, the U.S. Current Population Survey (CPS)—the labor force survey used to produce the unemployment rate, among numerous other data points—had a response rate of 64%, the lowest in recent history. Other household surveys have experienced similar declines; as Figure 1 shows, rates have been falling steadily since 2015. Other research has pointed out that the alarming fall in survey response rates dates back even further, as several major surveys had already slipped to 70%-80% response rates in the 2000s (see, for example, Williams and Brick 2018; Massey and Tourangeau 2013; and Groves 2006).

Response rates to business establishment surveys are also declining, although a couple of surveys, the Current Employment Statistics (CES) survey and the Consumer Price Index's Commodities and Services survey, have seen their response rates level off. Figure 2 shows the CES, used to calculate total employment, with a response rate hovering around 40%. One recent presentation argued that business surveys are on life support, given businesses' reluctance to participate and an explosion of alternative economic data available from the commercial sector. However, the same presentation also highlighted that by the third month of contact, almost all businesses had responded to the survey.

Do falling survey response rates matter?

Declining survey response rates increase the risk of inaccuracies in statistical information. The first type of potential inaccuracy concerns “precision,” which tells us how much confidence we should have that the survey statistic is a true reflection of the population we are studying. The second is “bias,” which means the computed sample statistic may systematically mislead us about the population we are studying.

Federal surveys are conducted using random, representative samples of individuals, and the larger the sample, the better the precision. In practical terms, precision means that when we see (for example) that the unemployment rate is 4.3%, what we really know is that (based on current sample sizes and response rates) the unemployment rate is between 4.0% and 4.6% with 90% probability. With a smaller pool of survey respondents, that 90% range is wider, which means we would be less confident about the unemployment rate in any given month. Currently, the Federal Reserve and other policymakers use estimates of the unemployment rate with a margin of error of +/- 0.3 percentage points, but what if that were +/- 0.5 percentage points, or even higher? Indeed, not long ago there was a risk that the CPS sample would be reduced, prompting a strong response from researchers concerned about the resulting loss of precision.
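To see mechanically how a smaller respondent pool widens that range, here is a minimal sketch using the textbook normal approximation for the confidence interval of a proportion. The sample sizes are hypothetical stand-ins for an effective sample after nonresponse; the actual CPS uses a complex design with weights and design effects, so the numbers are illustrative only.

```python
import math

def moe_90(p: float, n: int) -> float:
    """Half-width of a 90% confidence interval for a proportion,
    using the normal approximation (z = 1.645)."""
    return 1.645 * math.sqrt(p * (1 - p) / n)

p = 0.043  # a 4.3% unemployment rate
# Hypothetical effective sample sizes, shrinking as response rates fall.
for n in (10_000, 6_000, 3_000):
    print(f"n={n:>6,}: {p:.1%} +/- {moe_90(p, n):.2%}")
```

With the illustrative 10,000 effective respondents, the margin is roughly +/- 0.3 percentage points, in line with the figure above; shrink the pool and the interval widens toward +/- 0.5 points and beyond.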

Precision matters for economic statistics, but the more serious concern is statistical bias. If a select group of people is both less likely to respond to the survey and has different economic outcomes, then the sample statistic is not representative of the population without some correction. For example, we know that unemployment is higher for young people. Young people are less likely to respond to surveys, so if we don’t correct for that differential, our estimate of unemployment for the overall population is biased down. In fact, these sorts of corrections for differential response rates are standard practice in the production of survey-based economic statistics.
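As a stylized illustration of that kind of correction, the sketch below reweights respondents by the inverse of their group's response rate so the respondent pool matches the population's age mix. All response rates and unemployment rates here are invented for illustration; they are not CPS figures, and actual CPS weighting is far more elaborate.

```python
# Invented numbers: 20% of the population is "young"; young people
# respond less often and have higher unemployment.
pop_share = {"young": 0.20, "older": 0.80}
resp_rate = {"young": 0.40, "older": 0.70}   # hypothetical response rates
unemp     = {"young": 0.10, "older": 0.035}  # hypothetical true group rates

# True population unemployment rate.
truth = sum(pop_share[g] * unemp[g] for g in pop_share)

# Respondents skew toward the group that answers more often.
resp_share = {g: pop_share[g] * resp_rate[g] for g in pop_share}
total = sum(resp_share.values())
resp_share = {g: s / total for g, s in resp_share.items()}

# Naive (unweighted) estimate vs. inverse-response-rate weighting,
# which restores each group's population share.
naive = sum(resp_share[g] * unemp[g] for g in resp_share)
w = {g: 1 / resp_rate[g] for g in resp_rate}
wsum = sum(resp_share[g] * w[g] for g in resp_share)
weighted = sum(resp_share[g] * w[g] * unemp[g] for g in resp_share) / wsum

print(f"true: {truth:.2%}  naive: {naive:.2%}  reweighted: {weighted:.2%}")
```

With these invented inputs, the true rate is 4.80%, the unweighted respondent average is 4.31% (biased down because young people are underrepresented), and the reweighted estimate recovers 4.80%.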

Are declining survey response rates introducing new biases into widely cited economic statistics? Probably not. To guard against this possibility, the Office of Management and Budget (OMB) in 2006 published its Standards and Guidelines for Statistical Surveys, which recommend that investigators carry out studies to estimate nonresponse bias whenever the response rate for a survey falls below 80%. Since then, several economic survey programs have conducted these mandated studies and found little evidence of bias. In fact, one estimate suggests that nonresponse bias may not even set in until the response rate falls below 30%.

This is not to suggest that declining response rates are unimportant for key economic surveys, just that statistical agencies must monitor for bias and remain vigilant when response rates decline. For example, one recent study linked the CPS to tax data to examine income for respondents and nonrespondents. It found no bias prior to the COVID-19 pandemic, but since 2020 there has been a relationship between nonresponse and income that biases income estimates upward. In turn, the Census Bureau released adjusted sample weights to remedy this newly discovered bias.
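In spirit, a check like that compares respondents and nonrespondents on a variable observed for everyone in the sample, such as income from linked administrative records. The sketch below shows the basic computation on made-up records; the actual CPS study and the Census Bureau's weighting adjustments are, of course, far more involved.

```python
import statistics

# Made-up linked records: administrative income is observed for the full
# sample, whether or not the person responded to the survey.
records = [
    {"income": 28_000, "responded": True},
    {"income": 52_000, "responded": True},
    {"income": 61_000, "responded": True},
    {"income": 35_000, "responded": False},
    {"income": 24_000, "responded": False},
    {"income": 47_000, "responded": True},
]

full_mean = statistics.mean(r["income"] for r in records)
resp_mean = statistics.mean(r["income"] for r in records if r["responded"])

# A positive gap means respondents are higher-income than the full
# sample, so survey-based income estimates would be biased upward.
print(f"full-sample mean:   {full_mean:>9,.0f}")
print(f"respondent mean:    {resp_mean:>9,.0f}")
print(f"implied income gap: {resp_mean - full_mean:>+9,.0f}")
```

A real bias study would, among other things, make such comparisons within demographic groups and track them over time before building corrective weights.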

What can be done?

Although declining survey response rates have probably not yet affected key economic statistics, falling participation does pose a broader threat to the future of the statistical system. The biggest fear is that loss of precision and increased bias (or even a widespread belief that they have occurred), combined with high costs, will lead the statistical agencies to simply abandon surveys, leaving key questions about the economy unanswered. The alternative is reversing the trend in survey response rates by targeting the three key challenges listed above: difficulty contacting the individuals who were (randomly) chosen for the sample, respondent concerns about the time burden of completing a survey, and respondent concerns about the privacy of their data.

In recent years, it has become more difficult for Census interviewers to reach survey respondents, both in person and by telephone. People are busy, perhaps now more reluctant to engage with interviewers they do not know, and it is often a lot to ask that they carve out time to complete the survey for which they were selected. As such, advance letters and other communication about the importance of responding are useful. Advance contact can also assuage respondents' fears about how their data will be used and whether it will be kept private. Indeed, the decennial Census demonstrates the importance of advance contact every ten years with a large media campaign (a half-billion-dollar campaign in 2020) encouraging everyone to respond.

For many potential respondents, even effective advance contact may not be enough to overcome their reluctance to participate in government economic surveys because participation is burdensome. Even with interviewers explaining the importance of the surveys, the surveys themselves can still be long, or even difficult, to answer. To help address this issue, OMB has issued rules under the Paperwork Reduction Act that instruct agencies to decrease survey burden. There are tried-and-true strategies for doing so. First, survey administrators should focus on collecting essential information—the answers to questions we cannot get elsewhere. Researchers often need background information (like age and other characteristics) to contextualize survey responses, but where that information can be filled in from administrative data, surveys can be shorter and respondent burden lower. Second, survey administrators need to make participation as easy as possible. Indeed, as part of the joint effort between the Bureau of Labor Statistics (BLS) and Census to modernize the Current Population Survey, administrators are considering an online option to encourage self-response, simplified survey questions, and administrative data for the CPS supplements. Finally, although respondents to large-scale economic surveys like the CPS and CES participate voluntarily and without compensation, other private and government surveys often use incentives—small gift cards or entries into a drawing for a larger prize—to compensate participants for their time.

The final hurdle for many potential survey participants is concern about the privacy of their data. Although datasets containing identifying characteristics are used only for summary tabulations and never released to the public, this concern is real and should not be dismissed lightly. With decreased trust in the federal government, some respondents may even worry that sensitive data provided to their own government could be used against them. This is why the statistical agencies have data stewardship policies enforcing privacy and confidentiality.

Addressing data privacy concerns starts with being careful about what is asked on surveys and avoiding unnecessary questions that respondents find intrusive. But researchers often need answers to questions about income, wealth, and other economic outcomes that many people consider sensitive, so for all survey questions the agencies must remain vigilant about data handling protocols. Agency interviewers and staff take a lifetime oath of confidentiality, backed by strong enforcement mechanisms. A successful survey program requires that respondents know their data will be held in confidence and cannot be used against them.
