Report

It is time to restore the US Office of Technology Assessment

Editor's Note:

This brief is part of the Brookings Blueprints for American Renewal & Prosperity project.




Summary

The past several decades have seen digital technologies revolutionize sector after sector. Advances such as artificial intelligence, machine learning, and mobile technology have reshaped the landscape and provided new ways of analyzing information, handling communications, and undertaking financial transactions. Yet 25 years ago, just as the digital era was unfolding, Congress terminated the Office of Technology Assessment (OTA), which provided legislators with research on new developments and recommendations for addressing digital problems. At a time when Americans are worried about privacy, security, fairness, transparency, and human safety, it is time to bring back the OTA so that members of Congress have the latest advice on how to deal with these issues.



Challenge

It was a heady time in 1995, after Republicans had won a historic election that gave them majority control of the U.S. House for the first time in 40 years. Having campaigned on the Contract with America to downsize government, Speaker Newt Gingrich picked as one of his targets the U.S. Office of Technology Assessment (OTA), a congressional agency that provided members with impartial analysis of technology and science issues. It was time to eliminate the agency, he claimed, because it was ineffective and major tech decisions should be made by the private sector. Members of his party agreed and voted to get rid of the tech research and policy shop.1

Ironically, legislators killed the agency just as the internet was taking off. Since then, the digital economy has flourished: mobile phones are ubiquitous, Wi-Fi networks blanket residential and commercial spaces, and people conduct many parts of their lives online. The speed and breadth of technology innovation has been stunning. Autonomous vehicles are being tested on highways in major American cities. Algorithms make school assignment decisions in many districts. Drawing on consumers' online behavior, artificial intelligence recommends products that people did not even realize they wanted and nudges them to purchase them.2


In the wake of the hands-off approach the government adopted toward the technology sector in the 1990s, many maladies have unfolded, generally with very little oversight from regulators: rampant spam, loss of privacy, email hacking, sextortion, market concentration, social media manipulation, and hate speech, among others. Running through many of these problems is concern about the loss of human control over advanced technologies and fear that robots, AI, and automated software will enable bad behavior.

At the very time when Congress needed advice on how to manage the consequences of powerful new technologies, an agency that could have been helpful in assessing new tools was mothballed. Nonpartisan experts could have offered their views on how to handle privacy, security, bias, transparency, and human safety. Instead, policymakers lacked any systematic federal contribution to public discussions regarding digital technology’s upsides and downsides, or the role government should play in tech’s ongoing development. It was stunningly poor timing on the part of Gingrich and his fellow legislators at a crucial point in the digital revolution.



Limits of historic and existing policies

Technology offers a number of cutting-edge advantages, many of which were unthinkable to lawmakers 25 years ago. Robots can relieve humans of dirty, boring, or dangerous jobs; by taking over mundane or repetitive tasks, these and other advances can augment human performance and free people to focus on higher-level activities. At the same time, these developments raise problems that perplex policymakers, because most technologies can be deployed for both positive and negative purposes. Facial recognition software, for example, can find missing children or lost relatives, but it can also be a tool for mass surveillance and discrimination.


In a situation where technologies have varied and complicated consequences, it is important to have nonpartisan research and analysis of their possible ramifications. While certain entities exist to help Congress navigate these issues, they are not enough to keep up with the rapid pace of innovation. The Congressional Budget Office and the Congressional Research Service offer reasoned analysis to lawmakers, but they are not focused primarily on technology and must handle a range of other responsibilities. The Office of Science and Technology Policy advises the president on related issues, but it is embedded within the White House. None of this is enough to prepare federal lawmakers, who should be able to draw upon a dedicated body for objective analysis of the effects of science and technology on domestic and international affairs.

Without an OTA-style office to offer guidance on a range of tech policy issues, Congress does not have sufficient ability to compile relevant data, analyze costs and benefits, and make informed recommendations for dealing with deleterious effects.



Policy recommendations


The need for an advisory federal agency

For these reasons, it is crucial to restore the U.S. Office of Technology Assessment and use it to offer advice and recommendations on the many issues that have arisen about emerging technologies: workforce impact, AI ethics, AI bias, human safety, inequality, and governance. Each topic poses a number of challenges and illustrates how a restored OTA could help legislators grapple with the ethical and societal ramifications of digital technologies and ways to coordinate national policy.

To adequately prepare Congress for the next wave of innovation, a revived OTA should be funded and staffed commensurate with the magnitude of the sector it studies and the possible problems that need to be addressed.3 It’s worth noting that, at the time of its closure 25 years ago, OTA had 140 full-time employees and an annual budget of over $20 million.4

Having a federal agency devoted to technology assessment would help federal policymakers get the information needed to make sound decisions. Such an office could provide policy guidance on important tech problems and move the country toward a more effective stance on digitization.

What would OTA do?

An OTA could investigate the impact of tech advances on workers and the policy response required to protect income and benefits. How does technology affect employment? What skills are needed for 21st-century jobs and how are people going to get those skills? How is the nature of work going to be redefined by digital technologies? How will any increases in job churn affect worker incomes and benefits?

Most of the public worry over workforce impact concerns the loss of jobs. People fear robots will take many jobs and destroy individual livelihoods. There can be job losses in entry-level positions as firms automate routine tasks and apply computational processes to augment or replace human activities. The same can be true for professional workers as AI algorithms for reading X-rays and CT scans have improved to the point where they are very close to the accuracy levels of radiologists.

Yet there are workforce ramifications that go beyond job losses, such as job dislocation, job redefinition, job mismatch, and job churn. For example, there can be geographic dislocations as positions migrate to urban population centers clustered on the coasts and in a few metropolitan areas scattered around the heartland. Some positions will get redefined as AI performs tasks that currently are conducted by humans. There certainly will be new jobs created by technology, such as in data analytics and machine learning, but most people do not have the skills necessary to fill those positions; this will lead to job mismatches. And there could be job churn as people move from company to company.

In an economy where benefits are tied to full-time employment, any increase in job instability or churn would create insecurities in people’s ability to maintain their income, health benefits, and retirement programs. Moving to a new employer can mean a switch in health networks that necessitates finding new providers. It also can affect retirement benefits if the new firm requires one or two years of vesting at the company in order to qualify for matching contributions. If a person holds several short-term jobs, lengthy vesting periods can endanger future retirement income.5

These issues need to be addressed in order to ensure a smooth transition to a digital economy. Right now, the technology is far ahead of the public policy and that means we don’t have a good handle on the policy aspects of tech disruption. The longer leaders wait to analyze those issues, the more painful the social and economic transition is likely to be.

Improving AI ethics

OTA would conduct a sustained analysis of AI’s ethical and societal challenges. An OTA could employ ethicists, sociologists, lawyers, and other experts skilled at understanding these issues and making recommendations on how to mitigate problems. If we get these issues right, then the AI future looks bright. But if we fail to make good decisions, developments could spiral out of control rather quickly. We all need to think carefully about how to make the best choices, and a new agency would improve the caliber of information available to policymakers.


The world is seeing extraordinary advances in artificial intelligence. Innovations like machine learning and data analytics are the transformative technologies of our time. They are being deployed in many different sectors, such as health care, education, transportation, e-commerce, and national defense.

But AI raises a number of problems. Ethicists worry about the choices embedded in algorithms and whether software reflects basic human values. Among the issues they have identified are basic questions of fairness, bias, transparency, and human safety.6 There is fear that algorithms operate unfairly, promote bias and discrimination, lack transparency in how they work, and endanger human safety.

Addressing AI bias

An OTA would make suggestions regarding ways to reduce bias and address discrimination. Its staffers could undertake research on tech applications in a variety of sectors, compile relevant data, and determine the nature of the problem. It then could develop proposals on ways to address bias and discrimination.

There already is evidence that AI furthers bias in automated decision-making. Because algorithms rely upon historic data that are incomplete or unrepresentative, they reinforce pre-existing problems and increase bias in a number of areas. Examples from finance, health care, and education demonstrate the scope of this issue and the need to address it effectively.7

Maintaining human safety

A revitalized OTA would look at the possible risks, compile data on worker safety and vehicular accidents, and make recommendations on ways to protect human safety. As robots and automation augment or replace human activities, there needs to be attention to safety protocols to make sure they are sufficient given advances in digital technology.

Many fear a loss of human control over sophisticated technologies. They watch Hollywood movies and see “Terminator” robots with super-human powers. Combined with the rise of autonomous weapons systems, they worry that technology is careening out of control and ultimately will endanger humanity itself.

While there are long-term risks of runaway robots, the more immediate threats come from robots deployed in factories and warehouses, or autonomous vehicles deployed on highways. In warehouses, for example, it is important to separate robots that pick up orders from the humans who package the items; fast-moving mechanical devices could endanger workers if their paths inadvertently crossed. The same is true for autonomous cars that misread traffic conditions or driving circumstances and thereby cause accidents.

Reducing inequality

A revived OTA would analyze these issues and make recommendations regarding ways to ameliorate income and geographical disparities. These problems have many roots, and addressing them likely will require changes in tax policy, budget allocations, workforce development, the social safety net, and infrastructure investment.

Technology furthers both income inequality and geographical disparities. It has increased inequality because it generates tremendous wealth but does not create many jobs. Unlike the industrial firms of previous eras, large tech firms use internet platforms or software to serve millions of customers without requiring a large number of employees. In addition, a number of them rely upon temporary workers or independent contractors, which spares them from paying benefits; that keeps corporate costs down but elevates workers’ financial insecurity.

At the same time, there are pronounced geographical differences in where tech jobs are created. Most of the large firms operate on the East or West Coast or a few metropolitan areas in between. There are large parts of the country that have been left behind and do not have much economic activity.8 That creates obvious problems for economic development and financial sustainability in those communities.

Offering advice on future technology policy

The ultimate technology question involves basic governance and who should decide the future of technology policy. For much of the last few decades in the United States, the country has had a libertarian stance that has delegated major tech decisions to private companies. They have decided which products and services to develop, how to deploy them, and to whom to sell. Other than federal support for R&D, the result has been a relatively small role for government decisions in shaping the technology sector.

Now, however, there is a growing “techlash” in which the public wants more oversight and regulation. There has been legislative action at the state and local levels: California has passed a major privacy bill as well as new rules on the classification of gig economy workers, and several cities have banned the use of facial recognition software by law enforcement. People see fundamental inequities in certain applications and want those problems addressed.



Conclusion

To summarize, the federal government would benefit from a sustained and systematic effort to compile data, analyze problems, and make recommendations on technology policy. Legislators have lacked such an entity for the last 25 years—and public policy has suffered as a result. Bringing back a reconstituted U.S. Office of Technology Assessment would offer an organizational mechanism to address tech problems and provide legislators with up-to-date information on this vital sector. With problems ranging from worker impact, ethics, and bias to human safety, inequality, and governance, it is time to restore this organization and provide guidance on the technology sector. Such an entity can play a vital role in charting an informed path for the future of U.S. technology policy.


Footnotes

  1. Bruce Bimber, The Politics of Expertise in Congress: The Rise and Fall of the Office of Technology Assessment, SUNY Press, 1996.
  2. Darrell M. West and John R. Allen, Turning Point: Policymaking in the Era of Artificial Intelligence, Brookings Institution Press, 2020.
  3. Zachary Graves and Kevin Kosar, “Bring in the Nerds: Reviving the Office of Technology Assessment,” R Street Institute, January 2018.
  4. Independent Political Report, “Nader Proposes Reviving Congressional Office of Technology Assessment,” 2010.
  5. Darrell M. West, The Future of Work: Robots, AI, and Automation, Brookings Institution Press, 2018.
  6. Darrell M. West, “The Role of Corporations in Addressing AI’s Ethical Dilemmas,” Brookings Institution, September 13, 2018.
  7. Nicol Turner Lee, “What the Coronavirus Reveals about the Digital Divide Between Schools and Communities,” Brookings TechTank, March 17, 2020.
  8. Clara Hendrickson, Mark Muro, and William Galston, “Countering the Geography of Discontent: Strategies for Left-Behind Places,” Brookings Institution, November 2018.
