Commentary

Can We Trust the Polls? It all depends

Can we trust the polls? Under the best of circumstances, the answer is “Not necessarily without a fair amount of detailed information about how they were conducted.” This general note of caution applies at any time to any poll consumer. But today, with polls proliferating in the media and with methodological concerns increasing within the polling industry, caution is even more warranted. This is not to suggest that the general quality of polling data is declining or that the problems facing pollsters have no answers. Still, consumers of polling data need to be more careful than ever.

Proliferating Polls

In a period of rapidly advancing technology and falling costs for computers, long-distance telephone service, and statistical software, it is easier than ever for start-up companies to get into the polling business. Because most polling now takes place on the telephone, a newcomer can cheaply and easily buy a sample, write a short questionnaire for a Computer Assisted Telephone Interviewing (CATI) application, buy interviewing services from a field house, and receive a report based on the marginals for each question and a limited set of cross-tabulations.
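
To make those last terms concrete, here is a minimal sketch in Python, using invented data and column names, of what a report built on "marginals" and a single cross-tabulation amounts to.

```python
# A minimal, hypothetical example: the "marginals" for each question and one
# cross-tabulation, of the kind a field house might return to a start-up firm.
import pandas as pd

# Toy CATI output: one row per completed interview (invented data).
responses = pd.DataFrame({
    "approve_policy": ["Yes", "No", "Yes", "Unsure", "No", "Yes"],
    "party_id":       ["Rep", "Dem", "Ind", "Dem", "Rep", "Ind"],
})

# Marginals: the percentage distribution of answers to each question.
for question in responses.columns:
    print(responses[question].value_counts(normalize=True).mul(100).round(1))

# One cross-tabulation: policy approval broken out by party identification,
# shown as row percentages.
print(pd.crosstab(responses["party_id"], responses["approve_policy"],
                  normalize="index").mul(100).round(1))
```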

As a consequence, results from poorly conducted polls now appear more often, even if we cannot say exactly whether the probability that any given poll is poorly conducted has changed. The problem is exacerbated because journalists and others who report on public opinion are not generally well trained in assessing poll results and thus cannot always weed out “bad” poll results before they enter the news stream and become “fact.” So the risk is growing that local polls on national or local issues may be less well conducted or less well reported than those conducted by major national organizations.

Neither poll consumers nor journalists who write about polls have access to quality-control criteria or certification processes by which to assess specific firms or individuals. As a result, all must rely on news organizations to evaluate polls against the standards of disclosure adopted by organizations like the American Association for Public Opinion Research (AAPOR) and the National Council on Public Polls (NCPP), and to report any concerns that such an evaluation raises. Information thus made available on details such as sampling, question wording, field dates, and response rates is useful for the few informed poll consumers who can interpret it.

Declining Response Rates

Falling response rates are a concern for the entire survey research industry, whether academic researchers, political consultants who work for candidates, or news organizations. Recent compilations of response rates in telephone surveys by the Council for Marketing and Opinion Research suggest that studies with short field periods are now averaging about 10 percent, although most media polls have response rates in the 30-45 percent range. Although analysts have identified many factors behind this long-term trend—such as the negative impacts of telemarketers posing as pollsters and the increased use of various call-screening devices—we don’t yet understand well how much each contributes to the overall decline. Researchers are also beginning to understand that declining participation rates probably affect different kinds of political polls in different ways.
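
As a rough illustration of the arithmetic behind such figures (a simplification; the formal AAPOR outcome-rate definitions distinguish several additional categories of sample disposition), a response rate is simply completed interviews as a share of the eligible sample.

```python
# A simplified response-rate calculation with hypothetical dispositions; the
# formal AAPOR rates also handle partial interviews and unknown eligibility.
def simple_response_rate(completes: int, refusals: int,
                         non_contacts: int, other_eligible: int) -> float:
    """Completed interviews as a share of all known-eligible sample units."""
    eligible = completes + refusals + non_contacts + other_eligible
    return completes / eligible

# A short-field-period study: many sampled numbers are never reached at all.
rate = simple_response_rate(completes=100, refusals=250,
                            non_contacts=600, other_eligible=50)
print(round(rate, 2))  # 0.1, i.e. roughly the 10 percent cited above
```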

For preelection polls that project the outcome of a race, preliminary research suggests that the same factors that may lower participation in surveys may also lower participation in elections. Declining response rates thus do not seem to pose dangers to the accuracy of estimates of the outcome of recent presidential elections. More research will clarify whether declining participation affects preelection estimates in lower-turnout elections held in nonpresidential years, or whether its effects on preelection estimates will change over time.

Preelection polls are unusual in that their accuracy can be checked against the outcome of the election itself. (That characteristic may create a misplaced confidence in polling generally, since similar external validations do not apply in many other polling situations.) When it comes to polling on issues of general government policy, we do not know the potential impact of declining survey participation rates because we have no way to check the accuracy of the polls. For example, when polls assess the public’s response to or appraisal of policies such as military action in Iraq or a proposed tax cut, there is no equivalent independent way to validate the measurements. There is, however, some suggestion that policy polling results may reflect more conservative or Republican views than are present in the population as a whole—a bias that would not be surprising, because Republicans have long been known to be more likely to vote than Democrats (a fact accounted for in the likely voter estimates used by most polling firms).

Emerging Technology

Many polling organizations embrace new technology as a way to cut costs and speed data collection. Some new technologies also make it possible to collect more types of data. Web-based surveys, for example, can employ visual or audio stimuli that are not possible with other questionnaire designs, making them an excellent way to evaluate political commercials, especially when applied in a full experimental design. Many organizations have also turned to Web-based surveys to reduce the turnaround time between the design of a questionnaire and the start of data analysis and production of a first report of results.

Applied inappropriately, however, this technology poses several potential pitfalls for data quality. First and foremost are sampling issues related to respondent selection. Web pollsters obtain respondents in three ways. They take “volunteers” who self-select to answer generally available questionnaires on a Web site. They recruit volunteers, sometimes for a single survey and sometimes for a panel from which subsequent samples will be drawn. And they use a probability sample to select respondents on the telephone and supply Internet access to those who need it.

Because the availability of Web connections is not uniformly or randomly distributed in society, the existence of a “digital divide” can introduce one source of bias in volunteer samples. This technique, for example, tends to produce samples that are more Republican and more conservative in their leanings, as we have seen in such varied circumstances as post-debate polls in 2000 and more general public policy assessments since. The resulting bias tends to favor the current Bush administration and could work against a Democratic administration. Other possible problems include respondent fatigue from the frequent surveys required to maintain panel status—a requirement that could even produce, in some circumstances, “professional” respondents. More research needs to be done on these issues, but at a minimum a poll consumer ought to know how respondents were recruited and selected for a Web-based survey.

Pollsters must also contend with the rise of cell phones. Although the penetration of these devices in the United States is approaching 75 percent, fewer than 5 percent of Americans rely solely on a cell phone. But that share is growing—and presenting pollsters with a new set of problems. First, cell phone exchanges have no general directory, and they are excluded from the samples that most public polling firms can buy. Second, people who rely on cell phones are more mobile than the rest of the population, and many use phones provided by their employers. The geographical correspondence between a phone’s assigned area code and its owner’s place of residence is therefore likely to be poor. That matters little for firms conducting surveys with national samples, but it can be a real problem for those conducting state or local surveys, which may effectively be dialing outside their target area.

One further problem linked to new technology is telephone caller ID. This screening device, which shows households who is calling, lets them avoid calls labeled “out of area” or coming from unfamiliar numbers. In response to citizens’ pleas for protection from telemarketers, the federal government is moving to develop a “do not call” list. Pollsters need not honor such list membership now, but future abuses by pollsters or telemarketers could change that. This technology too is exerting downward pressure on response rates.

New Voting Methods and Preelection Pollsters

Preelection pollsters face two relatively new problems, both of which they can manage by devoting more financial resources to their work. Whether firms will be willing to pay more to collect data with less error or bias remains to be seen.

For almost 10 years, new administrative procedures have allowed Americans to change the way they cast their ballots. Increasingly, citizens are voting before election day, which is coming to mean, in effect, “vote counting day.”

Through procedures such as “early voting” (where machines are set up in convenient locations such as malls or shopping centers as early as three weeks before election day), voting by mail (where every registered voter is sent a ballot up to 20 days before election day), and permanent absentee registration (where voters can ask to be mailed a ballot in advance of election day without indicating that they will be out of town), more and more voters are casting ballots early. In the 2000 election, about one-sixth of the national electorate voted early, and the share is growing. In selected states, the proportion can be much greater. This trend is also facilitated by other administrative changes, such as election day registration, whereby citizens can decide at the last minute that they want to vote, even if they have not previously registered.

These developments do not mean that preelection telephone polls are outmoded or will fade away. They do suggest that telephone pollsters will have to use hybrid designs that include different screening questions (Have you voted yet? Are you registered to vote in the upcoming election?). Voter News Service used such techniques in past elections, as did firms in large states—such as California, Texas, and Florida—with many early voters. Eventually telephone polls may be supplemented by exit polls of voters leaving early voting sites. Such problems are not insurmountable, but they imply added expense as well as the need for more sophisticated designs, which will likely complicate modeling the outcome of elections based on more and more disparate data sources.
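
A minimal sketch of that hybrid logic, with hypothetical percentages (the screening questions are those quoted above; the shares and blending weight are invented), might look like this:

```python
# Hypothetical hybrid preelection estimate: screen respondents into those who
# have already voted and those who still plan to vote, estimate each group's
# candidate preference separately, and blend by the expected early-vote share.
already_voted_share_for_A = 0.54      # preference among "Have you voted yet?" = yes
election_day_share_for_A = 0.49       # preference among registered likely voters
expected_early_vote_fraction = 1 / 6  # roughly the 2000 national figure cited above

blended_estimate = (expected_early_vote_fraction * already_voted_share_for_A
                    + (1 - expected_early_vote_fraction) * election_day_share_for_A)
print(round(blended_estimate, 3))  # about 0.498
```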

A second issue for preelection pollsters—one that cropped up in the 2002 election—is the development by the Republican Party of its 72-Hour Task Force to counter union-based get-out-the-vote campaigns. Volunteer recruits were solicited on the Internet to make at least three calls in the final 72 hours of the campaign to encourage likely Republican voters to get to the polls. The effectiveness of these efforts has not been analyzed systematically, but they may have been of use in at least some states, especially in the South. The difficulty is that preelection pollsters, especially those linked to newspapers, traditionally poll up through Friday or Saturday to produce a story for Sunday’s paper. Because their polling typically ends just as these mobilization efforts get under way, their polls could underestimate the Republican share of the vote. Pollsters could counter the problem by extending the period for preelection polling, even through Monday evening, but that would defy news-making norms about the best time to publish campaign stories to reach the largest audience. And it also would increase data collection costs.

Exit Pollsters

Exit polls serve two distinct functions: they provide information on who won the election, and they explain that success. Exit polling was developed as a technique to intercept individual voters as they left the polls in samples of precincts. Since the late 1960s, exit polling has given electronic media a big edge in election coverage, including the ability to produce news about outcomes long before all votes have been counted. But the information collected and analyzed by exit pollsters is useful not only for projecting races but also for analyzing voter intent and behavior by attitudinal characteristics as well as a full set of demographic factors.

Much work remains before the news media are prepared to use exit polling to analyze voting in the 2004 presidential election. The work is crucial, for the exit polls—in their explanatory function, not their predictive one—have become an independent voice of the public in explaining what happened and why. Without that voice, journalists and political commentators will revert to the old style of political reporting, relying on party officials and strategists to provide the causal links and explanations for why one party or candidate succeeded over another.

During the past two elections, the Voter News Service exit poll operation was dogged by two problems. Each was identified as part of the post-2000 reviews both internally and in congressional testimony, and neither is yet solved. The first was whether the administrative software and hardware that had been supporting VNS (still a DOS-based system for a mainframe computer) were up to the task. The second, one that still has not received much discussion, was whether the statistical modeling of the VNS system was adequate. For the VNS projections were based not only on exit poll interviews, but also on statistical modeling that incorporated a detailed history for each precinct, including information on turnout and partisan division of the vote over time; prior estimates of the outcome of each race, based on intelligence that included preelection polls and a variety of expert assessments; and, in very close races, raw vote totals. Although these models were conceptually appropriate, they did not use the very latest statistical theories and models, as does, for example, Britain’s BBC.
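
To illustrate the kind of modeling at issue, here is a deliberately oversimplified sketch with invented precinct numbers; the actual VNS models (and any successor) were far more elaborate, incorporating turnout histories, prior estimates, and raw-vote updates as described above.

```python
# A toy precinct-swing projection: compare this year's exit poll shares in the
# sample precincts with the same precincts' past results, and apply the average
# swing to a prior statewide estimate. Invented numbers throughout.
from statistics import mean

sample_precincts = [
    {"exit_share": 0.56, "past_share": 0.52},  # candidate A, this year vs. a past race
    {"exit_share": 0.48, "past_share": 0.47},
    {"exit_share": 0.61, "past_share": 0.55},
]

# Average swing toward candidate A observed in the sample precincts.
swing = mean(p["exit_share"] - p["past_share"] for p in sample_precincts)

# Prior statewide estimate, itself built from past results, preelection polls,
# and expert judgment, per the passage above.
prior_statewide_estimate = 0.50

projected_share = prior_statewide_estimate + swing
print(round(projected_share, 3))  # about 0.537
```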

Soon after election day 2002, the networks began another review, eventually deciding to disband VNS. In its place, they have agreed to support a new exit poll operation, the National Election Pool (NEP), to be headed by Warren Mitofsky and Joe Lenski, two veterans with both national and international experience. But more public discussion is needed about whether to revise the statistical models for the new operation.

The task for Mitofsky and Lenski is not enviable. The old VNS software system, whatever its current ownership and availability, was not up to the task of handling large amounts of data with current technology. And in 2002 VNS on its own was unable, even with almost two years’ notice, to develop a new system to produce relatively few estimates for an off-year election. The coming presidential caucuses and primaries give the new team less than a year to develop another new system to produce a larger number of improved estimates, though Mitofsky and Lenski were able to provide such a service for CNN in 2002 for 10 states through their “RealVote” system. And because the new team is likely to focus primarily on data collection and processing, it will devote less time and effort to the estimation models.

Whatever happens, the new organization is still likely to produce the only estimates of outcomes and explanations, and that too is a problem. At least one other source should collect data independently to produce another estimate of the outcomes. The other source should also use a different questionnaire, in theory reflecting additional news judgments about appropriate content. The two estimates together would provide a fuller explanation and more reliable estimates of what happened and why.
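
On the statistical side, a hedged illustration of the benefit (with hypothetical numbers, and assuming the two estimates really are independent): pooling two independent estimates with inverse-variance weights yields a combined estimate with a smaller standard error than either alone.

```python
# Pooling two independent estimates of a candidate's vote share.
def pooled_estimate(est_a: float, se_a: float, est_b: float, se_b: float):
    """Inverse-variance-weighted combination of two independent estimates."""
    w_a, w_b = 1 / se_a ** 2, 1 / se_b ** 2
    pooled = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    pooled_se = (1 / (w_a + w_b)) ** 0.5
    return pooled, pooled_se

est, se = pooled_estimate(0.52, 0.020, 0.50, 0.025)
print(round(est, 3), round(se, 3))  # 0.512 0.016, tighter than either source alone
```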

The Internationalization of Polling

The recent military action in Iraq has increased news organizations’ interest in what foreign publics, especially those in the Middle East or in states such as Afghanistan and Pakistan, think about the United States and its policies. But the polling industry in these regions is not yet well developed and typically relies on samples drawn from a few major urban areas rather than countrywide. The National Council on Public Polls has recently suggested that the issue may often be a pragmatic one for the data collection firm. In addition to cutting travel costs, these simpler designs may also reduce translation and language problems. But the resulting data require journalists who report on what others think of Americans to be careful about the level and type of generalizations they draw.

More Polls, More Problems

Public opinion polls, frequently conducted and with results that are widely disseminated, are one distinguishing feature of a healthy democracy. They provide a means for citizens to communicate with their elected representatives, and vice versa. But their value in this regard depends on the collection of high-quality data, well analyzed and appropriately interpreted.

Of late there has been a step-function increase in the availability of polls, accompanied by the potential for reduced quality. Such developments are not unprecedented: new technologies have emerged before, and will emerge again, that produce data faster and more cheaply while the resulting savings are not devoted to reducing various kinds of error. No one yet fully understands the consequences of the problems outlined above. Poll consumers, as ever, have no recourse but to pay as much attention as they can to where the data came from and how they were analyzed.