More than a glitch: lessons from the NYSE outage

On July 8, separate glitches halted trading at the New York Stock Exchange (NYSE) for more than three and a half hours, delayed hundreds of United Airlines flights, and brought down the Wall Street Journal’s home page. The incidents proved to be nothing more than unrelated technical failures, and despite significant media coverage, sources agree that the glitches were relatively harmless. Nonetheless, the episode is a reminder of the need to improve the nation’s cyberinfrastructure in order to prevent similar events in the future.

Security versus infrastructure

Cybersecurity is frequently discussed in the media, and for good reason. In an annual report to Congress, the Office of Management and Budget notes that it recorded close to 70,000 cybersecurity incidents within the federal government’s networks, along with over 570,000 additional incidents, in 2014. Another issue worthy of our attention is the antiquated state of the country’s cyberinfrastructure, which powers everything from our personal computers to the highly technical software of the NYSE.

The National Institute of Standards and Technology (NIST) published a report on the problems surrounding inadequate software architecture in 2002. The report notes that “the complexity of the underlying software needed to support the U.S.’s computerized economy is increasing at an alarming rate.” NIST estimated the annual cost of inadequate software testing at close to $60 billion, and found that feasible infrastructure improvements could save over $20 billion per year. The report, now more than a decade old, highlights how little progress officials have made in tackling this problem. These costs will only increase as the world becomes even more dependent on software.

Automation and risk prevention

Increased automation can yield significant public benefits. Just last week, TechTank published an article illustrating the potential benefits of automated vehicles, such as lowering public costs and reducing traffic. Moreover, since Google began testing its driverless car prototypes six years ago, the fleet of 20 cars has been involved in fewer than 11 accidents, none of which has been attributed to system failure. However, increased automation also heightens the potential risks and costs of glitches that result from faulty software architecture. Events such as last week’s could become more frequent and potentially more dangerous. Yet much is being done. Researchers at MIT have developed a computer program that can fix old code faster and more reliably than expert engineers. Microsoft, taking a more creative approach, is experimenting with a system that tracks brain waves and eye movements to identify when programmers are most at risk of creating bugs, or software mistakes.

One of Murphy’s Technological Laws states: “If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization.” While the comparison is hyperbolic, it emphasizes an important point: the software we create is not immune to mistakes. In light of recent events, it would serve us well to improve our cyberinfrastructure before the repercussions become too costly.

Joseph Schuman contributed to this post.