
Commentary

Teaching the public about machine learning

Fears about the impacts of artificial intelligence often hinge on who or what is responsible for the decisions a computer makes. The recent case of a Tesla Model S driver who was killed in a collision with a tractor trailer while his car was in Autopilot mode shows how difficult it can be to say why automotive software behaved the way it did. Artificial intelligence systems like the ones in driverless cars may not be able to state explicitly why they made a particular decision, because many rely on machine learning techniques that derive their decisions from patterns in data rather than from rules a programmer wrote down.

Machine learning algorithms find patterns in existing data and apply those patterns to new data. They now translate text between languages and identify objects in images: the best facial recognition programs rival human accuracy. In 2016, a machine learning program called AlphaGo defeated a world-champion player at the strategy game Go, which has more possible board configurations than there are atoms in the observable universe. Computers can now analyze enormous quantities of data at remarkable speed and make decisions based on what they find.
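
To make “find patterns in existing data and apply them to new data” concrete, the sketch below trains a tiny classifier and then asks it to label a situation it has never seen. Python and the scikit-learn library are assumptions of this example, as are the made-up speed and distance numbers; the article itself names no particular tools.

```python
# A minimal sketch of the learn-then-apply loop described above.
# The data are invented: each example is [speed_mph, distance_to_obstacle_m],
# and each label says whether the correct action was to brake (1) or not (0).
from sklearn.linear_model import LogisticRegression

X_train = [[30, 5], [60, 10], [70, 15],   # situations that called for braking
           [25, 40], [45, 80], [20, 60]]  # situations that did not
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression()
model.fit(X_train, y_train)        # the algorithm finds a pattern in past examples

# Applying the learned pattern to a situation the model has never encountered:
print(model.predict([[50, 12]]))   # e.g. [1] -> brake
```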

Machine learning’s explanatory limitation, though, stems from its mimicry of human learning. The philosopher Michael Polanyi encapsulated the problem in his observation that “we know more than we can tell.” Over the years of practice it takes to become an expert at a given task, a person adopts habits grounded in their own experience, independent of any formal training. These acquired habits are integral to their success, yet the experience behind them is difficult to impart to someone else. Machine learning automates and accelerates this kind of learning: any decision the system makes is an inference drawn from a large number of previous examples, not the application of a rule it can recite.
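
Continuing the earlier toy example, the sketch below illustrates how such a system “knows more than it can tell”: with a nearest-neighbor learner, the closest thing to an explanation is the set of past examples a new case resembles. The library and the features are again assumptions made for illustration.

```python
# The decision below is nothing more than an inference from stored examples;
# asking "why?" only returns the previous cases the new situation most resembles.
from sklearn.neighbors import KNeighborsClassifier

X_train = [[30, 5], [60, 10], [70, 15], [25, 40], [45, 80], [20, 60]]
y_train = [1, 1, 1, 0, 0, 0]   # 1 = brake, 0 = keep going (invented labels)

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

new_case = [[50, 12]]
print(knn.predict(new_case))      # the decision, e.g. [1]
print(knn.kneighbors(new_case))   # the nearest past examples "behind" that decision
```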

A driverless car must sort through a multitude of inputs to make decisions in real time. After a collision, determining why the car made a particular decision from the inputs available to it becomes critical for assessing liability and for preventing future incidents. Correcting problems with a machine learning system, however, is not as simple as finding a line of errant computer code: the system’s behavior is encoded in parameters learned from data, not in instructions a programmer can point to and fix. Adapting machine learning for an application like driverless cars will therefore require extensive testing to cover as many potential driving scenarios as possible.
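
Using the same invented example, the sketch below shows why a fix is rarely a one-line code change: correcting a bad decision typically means adding examples that cover the missed scenario and retraining. This workflow is an assumption for illustration, not a description of any real driverless-car system.

```python
# Same code, different data, different behavior: the "fix" is new training examples,
# not an edit to the algorithm itself.
from sklearn.linear_model import LogisticRegression

X_train = [[30, 5], [60, 10], [70, 15], [25, 40], [45, 80], [20, 60]]
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)
print(model.predict([[65, 35]]))   # suppose this prediction turns out to be wrong

# Add labeled examples covering the missed scenario, then retrain the same code.
X_train += [[65, 35], [68, 30]]
y_train += [1, 1]
model = LogisticRegression().fit(X_train, y_train)
print(model.predict([[65, 35]]))   # the decision may now differ, yet no code changed
```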

Machine learning marks a significant advance in computing, but a lack of public understanding could delay the technology’s widespread adoption. Any software has limitations, and it falls on developers to explain those limitations to policymakers and the public in terms that are easy to understand. Now that machine learning has vastly improved our ability to analyze data, more effort should go toward explaining the process itself. Applications like driverless cars offer a glimpse of what is possible with machine learning, but that promise will be realized only if the technology can earn the public’s trust.