
Commentary

From Hootie to Harry (and Louise): Polling and Interest Groups

Burdett Loomis
Professor of Political Science and Program Coordinator, Robert J. Dole Institute for Public Service and Public Policy, University of Kansas; editor of Esteemed Colleagues: Civility and Deliberation in the U.S. Senate

June 1, 2003

Hootie Johnson probably wouldn’t kick a ball out of the rough or mark down a par after making a bogey. As chairman of Augusta National Golf Club, home of the Masters, Johnson knows that, above all, golf is a game of integrity. So it is astonishing that he would stoop to commissioning and touting so blatant an effort to con the public as the poll released in support of his stand against opening Augusta to female members.

The “Hootie Poll” is a mishmash of loaded statements and biased, leading questions, unworthy of Johnson or Augusta. It was slanted to get the answers its sponsors wanted, and in that it succeeded.

“It is enormously gratifying to see that a majority of Americans feel as we do,” Johnson said in a statement.

When Martha Burk, head of the relatively obscure National Council of Women’s Organizations, challenged Augusta National to admit women to its membership, Johnson drew on his experience with the corporate lords of American industry and finance, whose names dot the club’s prestigious roster. In a public relations war, he reasoned, it’s best to have the public with you. So he commissioned a survey from The Polling Company, a woman-owned firm whose clients include a variety of Republicans and conservative corporate interests.

Although Kellyanne Conway, The Polling Company’s president, vigorously defended the poll, its wording left much to be desired. Respondents, for example, were asked to agree or disagree with these two items:

—Martha Burk did not really care if the Augusta National Golf Club began allowing women members, she was more concerned with attracting media attention for herself and her organization.

—In a way, Ms. Burk’s actions are insulting to women, because it makes it seem that getting admitted to a golf club is a big priority to all women.

Survey research experts in both sports marketing and public polling concluded that the language was fatally flawed. “This was a survey to support the views of the people who paid for it,” according to one marketing professional.

The Augusta National Golf Club does not represent a typical interest group in American politics. Nor was Johnson’s polling effort—which was ham-handed at best—representative. If the public relations war over the Masters were the norm, the subject of polling and interest groups would be more the stuff of situation comedies than of situation rooms. Most organized political interests have much to teach Johnson about the sophisticated ways in which they use survey research methods to shape the strategies and tactics of lobbying. How these interests ply their trade, both inside the Beltway and, increasingly, in state capitals, is also worth our attention.

Polling results can be exceptionally powerful, largely because of their seeming legitimacy as neutral evidence. Even though many people report being skeptical of public opinion polls, these surveys do, however crudely, appear to reflect some kind of underlying reality. After all, aren’t they “scientifically” conducted and accurate within a specific margin of error? Don’t presidential election surveys ordinarily get the results roughly right? Besides, polling results are reported in numerical formats. If we can quantify something, that ordinarily means that we can measure it with reasonable accuracy. And even the simplest poll permits the building of a narrative. Thus, when a newspaper needs a story about a campaign or an issue, it can order up a poll that will provide a handy narrative for even the least imaginative reporter—pretty much the USA Today model.
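(A note on that margin of error, offered as a generic back-of-the-envelope illustration rather than a figure from any poll discussed here: for a simple random sample of n respondents, the 95 percent margin of error on a reported percentage p is roughly 1.96 × √(p(1 − p)/n), which, for p near 50 percent, works out to about 1/√n. A standard 1,000-person national sample thus carries a margin of roughly plus or minus 3 percentage points. The formula assumes honest random sampling, however; it says nothing about bias introduced by question wording or question order, which is exactly where interest group polls tend to stray.)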

Lobbyists, consultants, and interest group leaders thoroughly grasp the value of controlling the story, and polling represents a major tool in establishing their themes—in driving how the press covers an issue and how both the public and elites understand it.

Organized Interests and Polling

Interest groups have long used survey research to provide support for their policy positions. Although rarely capable of doing their own polling, they regularly hire commercial firms to do the work. Indeed, directories of political consultants feature legions of polling firms experienced in performing such work. Increasingly, consulting firms have either established or purchased polling units. For example, the influential Washington-based Dutko Group recently added a public opinion research firm to its multiple lobbying and public relations units. Noted senior managing partner Gary Andres, “We’re finding that on the lobbying side, we want to try to provide a full array of advocacy tools for our clients.” From the lobbying perspective, then, survey research is an integral “advocacy tool.”

Surprisingly, perhaps, political scientists who study organized interest groups rarely discuss polling. In a recent survey of research on interest groups, for example, Frank Baumgartner and Beth Leech make no mention of polling or survey research. Either political scientists consider polling too routine to be worthy of serious study, or they have simply ignored a potentially important element of group politicking—or both. Their silence about polling probably also reflects the differing vantage points from which students of survey research and interest groups view American politics. At any rate, the frequency with which organized interests underwrite surveys and publicize their results suggests that their use of this tool is more than merely routine.

Interest groups use survey research in two distinct ways. First, they often commission a poll and employ the findings in lobbying campaigns—either publicizing the results to frame an issue or holding them privately to help shape lobbying tactics. Frequently, they do both, releasing some findings, while using others to forge a plan of action. Alternatively, organized interests may mine existing polling data for findings that back their positions. With tremendous amounts of such data available, many interests unable to afford their own surveys can dig through other polls to find supporting material for their lobbying efforts. For example, Sane Guns, a moderate, low-budget gun control group, has presented widely available National Opinion Research Center data that demonstrate extensive public support for limits on certain kinds of weapons ownership, even among gun owners. The availability of such information tends to level the playing field, in that most citizen groups cannot afford to commission a first-rate national poll. Nevertheless, most interest groups’ polling flows from commissions directed at specific issues and, frequently, specific populations.

Polling for Hire: Seeking Advantage

Survey research and lobbying appear to have quite different ends. The best polling practices work to eliminate all sources of bias, whether in question wording, sample selection, or post-survey weighting, while a lobbying firm comes at issues from a definite point of view, often with highly specific goals in mind. The lobbyist’s job is not to lie, of course, for breaking the bonds of trust with policymakers could kill a career in an advocacy business built on mutual understanding. But advocacy is the lobbyist’s job nonetheless, and the information lobbyists provide will necessarily be “interested”—that is, framed to put their cases in the best possible light.

Organized interests use survey data in three distinct ways as part of their lobbying strategy, and, paradoxically, the more public the presentation, the more likely the groups are to manipulate the findings. The first use is purely internal, much as a political campaign would analyze survey results in making tactical decisions. The second is in private exchanges with legislators, regulators, and other decisionmakers. The third is wide dissemination of polling data in hopes of influencing opinion leaders or the public at large.

When they use the survey results internally, groups want the most accurate data available to help them make the best possible strategic and tactical decisions. So it was when Verizon hired Public Opinion Strategies to help it oppose breaking the company up into separate entities. Public Opinion Strategies subsequently claimed that through survey research it “tested awareness of the proposal and gauged public reaction to possible consequences to Verizon’s break-up.” It also determined “which messages about Verizon’s defense and which messages to deflate our opposition were more generally effective with the public. We used follow-up focus groups to fine tune the most compelling messages from the survey, and shape the language to better describe both the company and potential effects of the structural separation in our paid and earned media efforts [advertising and news coverage, respectively].” Any biases in the data would have reduced both the potential effectiveness of the Verizon campaign and the ability of Public Opinion Strategies to assess its own work. Even though survey results did influence the content of public appeals, the data remained privately held.

When lobbyists approach legislators, they use polling numbers to present their best case. Lawmakers understand the nature of their exchanges with lobbyists, whether on a continuing or a one-shot basis. They expect to be told a story that reflects favorably on the group being represented. To be sure, lobbyists may well provide “the other side of the story” or “the downside” of their position, but, in the end, the legislator wants and needs solid evidence to back the group’s point of view. Selected polling results can thus offer lawmakers valuable support for the position they may take and the way they may explain it. As one consultant stated, “Polling renders all opinions not equal.” That is, he continued, “It’s one thing for a lobbyist to tell a senator his/her constituents support energy deregulation or tax cuts or whatever. But it’s more persuasive to say 65 percent of registered voters or 95 percent of your primary constituency support the policy.”

Beyond the selective use of figures, such a statement makes clear the value of a group’s ability to underwrite its own survey. A key interest group may care little about the overall distribution of opinion on a specific issue; its concerns may lie with the breakdown in the ten congressional districts or the five states where lawmakers remain undecided on the issue. And the most useful survey data may not even be those that report constituents’ feelings about the policy at hand. Rather, senators and representatives are often most interested in the political consequences of their actions. Thus, the crucial constituent response is not to the policy question “Do you support the president’s tax cut proposal?” but to the political question “If Senator Smith votes for the president’s tax cut proposal, would you be more or less likely to support her in the next election?”

Organized interests that can afford to conduct such surveys have a leg up in lobbying legislators, for the polls give them critical policy and political data that are both precise (as to numbers) and targeted (state or district). Especially when lawmakers are relatively indifferent on an issue, such information can be powerful. “Why,” a member of Congress might reason, “do I want to go against the wishes of my constituents and my own political self-interest to oppose this issue?” Moreover, rival interest groups would have to decide whether to try to generate their own polling data, which might sway the legislator in their direction. In the end, merely commissioning the survey and releasing key parts of it to targeted legislators can win the day for an interest willing and able to make the investment.

And what of interest group surveys whose results are widely disseminated? If the paradox of polling holds, these results should be the least reliable. The evidence is mixed here. To be sure, many interests do release self-serving results in the Augusta National Golf Club mold. But at the other end of the spectrum are first-rate surveys, such as those conducted when a firm that regularly works for, say, Republican interests is paired with one that normally works for Democratic interests. Thus, Democratic pollster Celinda Lake often joins forces with GOP pollsters such as Linda DiVall, John Deardourff, or Lance Tarrance on issues that cut across the partisan or ideological divide. Under these circumstances, surveys on campaign finance or civic engagement are often released in great detail. But their results rarely become the fodder for lobbyists, in that “good government” issues such as campaign finance reform often split normal partisan or ideological allies.

Perhaps most important here are surveys conducted by highly reputable firms for groups with an axe to grind on a particular policy issue. Should we be surprised when polling conducted for the health insurance industry comes up with findings that support the industry’s favored policies? At the same time, organized interests must consider the value of commissioning such studies given the possibility that their findings will be discounted.

A February 2003 survey conducted for the American Road and Transportation Builders Association by Zogby and Associates, for example, found solid support for modest tax increases to improve American highways, even as gasoline prices were rising. The Zogby firm did not release all question-wording and technical information, but it did provide a 10-page report on its Web site. Overall, the survey’s focus on transportation and highway issues encouraged respondents to think about highways in terms of national security and governmental infrastructure. Thus, after answering 20 questions along these lines, the respondents were asked if they would support a two-cent increase in the gas tax to be used for “roads, bridges, and mass transit.” Heavily primed by earlier questions and statements, the respondents supported the gas tax hike by a margin of 64 percent to 34 percent.

Zogby did not ask the sample to choose between a higher gas tax and a tax for better health care or more military spending or a stronger social security system. Rather, the results flowed directly from the question sequence and from framing transportation in a national security-national economy context. The road builders group and its affiliated Transportation Construction Coalition (28 national associations and construction unions committed to increased federal transportation investment) got what they wanted—firm support for a gas tax increase, articulated in a professional, random sample survey. But had Zogby been hired by a group that opposed such taxes, would the results have been different? And if respondents to the highway and transportation poll had opposed higher gas taxes, would the road builders group have released the results? These are, of course, rhetorical questions.

In the end, polling is a tool, and a costly one. On occasion, intensive polling and market research focus substantial attention on the framing of issues, as with the health insurance industry’s “Harry and Louise” ads opposing the Clinton administration’s health care initiatives during the early 1990s. More generally, the use of survey research disproportionately benefits those who can afford to commission polls. And for every Hootie Johnson, exposed by poor survey practices and extensive scrutiny, a dozen transportation coalitions get just what they pay for—a chunk of seemingly neutral, reasonably legitimate data that becomes part of the policy debate inside the Beltway, nudging policymakers in the group’s desired direction.