Commentary

Key public policy issues for cognitive computing systems

Without a doubt, cognitive computing systems are a hot topic in boardrooms, executive suites, and conference rooms at major technology firms, which are investing both financial and human resources to bring these systems to market. At the same time, government offices around the world are holding similar discussions, and those efforts are beginning to bear fruit as well.

Cognitive computing systems mine both structured and unstructured data to offer hypotheses and solutions for consideration by humans. They thrive on massive amounts of data: the more that is available, the better the analysis. Cognitive computing systems also rely on humans to train them through supervised learning: domain experts interact with the system to provide input on decision-making patterns and outcomes, and over time the system learns to mimic those patterns and improves in accuracy. A minimal sketch of this training loop follows.
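To make that training dynamic concrete, here is a minimal sketch in Python using scikit-learn. The text snippets, the “urgent”/“routine” labels, and the triage framing are hypothetical stand-ins for the judgments a domain expert would actually supply; this illustrates supervised learning in general, not the workings of any particular cognitive computing product.

```python
# A minimal sketch of the supervised-learning loop described above.
# The "expert labels" stand in for judgments a domain expert would
# provide; all data here is hypothetical and purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Unstructured inputs paired with expert-provided outcomes (labels).
documents = [
    "patient reports severe headache and nausea",
    "routine follow-up, no new symptoms",
    "sudden vision loss in left eye",
    "annual physical, all results normal",
]
expert_labels = ["urgent", "routine", "urgent", "routine"]

# The system learns to mimic the experts' decision patterns.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(documents, expert_labels)

# Given new, unlabeled input, the system offers a hypothesis
# that remains subject to human review.
print(model.predict(["patient describes sharp chest pain"]))
```

The key point is the division of labor: experts supply labeled outcomes, the system generalizes from them, and its output remains a hypothesis for a human to accept or reject.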

In South Bend, Indiana, a cognitive computing system identified more efficient routing for untreated wastewater, helping the city avoid substantial fines and a $120 million wastewater treatment overhaul.

Fifteen U.S. hospitals are collaborating with New York City’s Memorial Sloan Kettering Cancer Center to take advantage of cognitive computing and conduct what they refer to as a “clinical trial on personalized medicine,” in hopes of using an individual’s genomic data to detect tumor vulnerabilities. Based on those results, oncologists at the participating institutions can create highly individualized drug regimens to treat patients with glioblastoma, a rare and deadly brain cancer.

However, much like the FAA with drone usage and state and federal transportation departments with self-driving cars, the public sector already finds itself far behind the technology and the private sector’s use of it. At present, cognitive computing systems receive scant attention from either a public policy or a legal perspective.

There are some rays of light.

In the European Union, for example, the Legal Affairs Committee has urged the European Commission to put together a set of regulations and guidance for the use of robotic systems (a category that includes cognitive computing). Specifically (and after referencing Mary Shelley’s tale of Frankenstein), the Committee calls for the establishment of a European agency for robotics and an accompanying code of ethical conduct that focuses on the employment, tax and social impacts of the technology.

Within the United States, the White House’s National Science and Technology Council issued a report in October 2016 that captured the current state of artificial intelligence[1] and its potential uses within society. It ended with broad recommendations for federal agencies on addressing the potential ramifications of artificial intelligence systems. A companion report, issued in December 2016, further discussed the impact of artificial intelligence on the U.S. job market and outlined some policy responses. Among other things, that report recommends:

  • Investing in and further developing artificial intelligence in the private sector, highlighting its value in cybersecurity and fraud detection in particular
  • Educating and training Americans for jobs that leverage artificial intelligence
  • Aiding workers in the transition to an artificial intelligence-centric future

The private sector is already organizing. In October 2016, several major technology companies formed the Partnership on Artificial Intelligence to address the security, privacy and ethical challenges presented by artificial intelligence. The group funds research into AI and pledges to establish industry “best practices” to deal with this complex and evolving domain. As stated by Mustafa Suleyman of Google, “the positive impacts of AI will depend not only on the quality of our algorithms, but on the level of public engagement.”

While the private sector is moving quickly, we still do not know how to treat cognitive computing from a policy perspective. There are many questions to explore as these systems mature and adoption picks up.

In separate blog posts over the next few weeks, we will begin to explore key questions such as:

  • How do we audit cognitive computing systems to ensure safety, particularly when they are used to automate functions rather than simply augment human actors?
  • At what point during these systems’ learning phase are they ready for deployment?
  • What are the policy issues associated with disclosing the usage and algorithms of these systems, particularly when they act contrary to expectations?
  • How do we build “early warning systems” to highlight when cognitive computing systems start to fail?

A great deal of study is necessary to understand the legal and policy ramifications of cognitive computing systems. In studying them, we are reminded of the Longfellow poem about the girl with the little curl: “When she was good, / She was very, very good, / And when she was bad she was horrid.” We have already seen this dynamic in the proliferation of data across social networks, which creates both the good (connections across countries) and the horrid (the spread of “fake news” during the U.S. presidential election).

We invite you, our readers, to follow the conversation and to share your thoughts on these issues. We look forward to hearing from you.

Google is a donor to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and not influenced by any donation.

[1] Cognitive computing systems are generally considered part of the suite of technologies that make up the field of artificial intelligence.
