Understanding Artificial Intelligence

Artificial Intelligence (AI) has arrived, with decades of academic research now coming to fruition. Countless applications of AI already exist, and they will only grow more sophisticated and ubiquitous as the field progresses and commercialization continues. Today Siri answers our voice queries, self-driving cars navigate the streets of Mountain View, and IBM's Watson beats human champions at Jeopardy.

The academic field of research, born in 1950 with Alan Turing's seminal paper "Computing Machinery and Intelligence", has developed rapidly over the past decade. At a simple level, machine learning – the subfield most often meant when people speak broadly of AI – collects data, processes it, and attempts to produce reasonable, actionable outputs.
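To make that loop concrete, the sketch below uses Python's scikit-learn library to fit a simple classifier. The bundled digits dataset and the logistic regression model are illustrative assumptions standing in for any data source and learning algorithm, not a reference to any particular system.

```python
# A minimal machine-learning loop: collect data, fit a model, produce outputs.
# Illustrative sketch only -- the digits dataset and logistic regression are
# arbitrary stand-ins for any data source and learning algorithm.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)               # 1. collect data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)         # 2. process it
model.fit(X_train, y_train)

print("accuracy:", model.score(X_test, y_test))   # 3. actionable output
```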

Over time, the individual scientific disciplines behind AI have become more complex and specialized. Today areas of study including Bayesian methods, computational neuroscience, natural language processing, neural networks and reinforcement learning represent only a handful of the many subfields involved. This multidisciplinary work will continue to evolve both incrementally and in step-function leaps toward human-level AI. From humble beginnings in rote task automation, AI will soon exhibit intuitive, seemingly emotional capabilities.

Deep impact

As consumers, we should expect AI technology to permeate all aspects of life within a few short years. Auto manufacturers have offered cruise control for decades, reducing cognitive effort for the human operator. New car models add automated lane-keeping and parallel parking. Soon, the concept of a human driver may be as foreign as that of a human loom operator.

As citizens and policy-makers, we must also understand and plan for what's ahead. AI's progression from irrelevance to human enhancement to human substitution will play out across the economy, impacting all industries, including those that today appear unassailable.

Technology has repeatedly altered how society and economies function at the most fundamental levels. Given exponential progress in the fields most relevant to AI development – namely computing and software – we can only expect this pattern to hold or strengthen. This raises serious ethical, regulatory and policy questions about the costs and benefits this technology will unlock.

Recent press has even highlighted the risks associated with AI becoming “super-intelligent”: not just equaling or outperforming humans in certain fields, but leaving humans in AI’s intellectual dust across every domain. In this scenario, our ability to contribute becomes virtually insignificant in all but the most artisanal of sectors. While this seems like the worst-case scenario, it’s not. If AI has goals different from ours, it may view humanity as a problem to be solved on the path to an optimal world, creating a doomsday scenario.

While building an all-powerful AI seems avoidable, some AI philosophers predict that once human-level AI exists, its ability to self-improve will rapidly cascade into super-intelligent AI if left unchecked. Given the advantages it would by definition have over us, a malevolent, superhuman AI would be hard to rein in. The consequence is that we may have only one chance to design AI to be "human-friendly". A reliable solution to this "control problem" is arguably one of the biggest unsolved problems in AI today.
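The force of that argument is easiest to see in a toy compounding loop. The sketch below uses wholly invented numbers – a "human-level" baseline of 1.0 and an assumed 10% self-improvement per cycle – purely to show how quickly a positive-feedback process escapes its starting point; it is a cartoon of the reasoning, not a model of any real system.

```python
# Cartoon of recursive self-improvement: if each cycle improves capability
# in proportion to current capability, growth compounds exponentially.
# All numbers are invented assumptions, not measurements of any real system.
capability = 1.0          # hypothetical "human-level" baseline
improvement_rate = 0.10   # assumed 10% gain per self-improvement cycle

for cycle in range(1, 101):
    capability *= 1 + improvement_rate
    if cycle % 25 == 0:
        print(f"cycle {cycle:3d}: ~{capability:,.0f}x the baseline")

# After 100 cycles the loop sits near 13,781x its starting point; the exact
# figure is meaningless, but the compounding shape is the point.
```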

It goes without saying that AI technology and its applications should be guided to enhance our economy and society to the greatest extent possible, while minimizing catastrophic risk. The open question is how best to do this in a world of tradeoffs.

A policy regime

As in most technology fields, the future of AI largely rests in the hands of the individuals, corporations and institutions at the vanguard of research, development and productization. Still, there are policy levers that can bend the arc of progress toward our goals; these include regulation, standard-setting, funding and public policy. Some are likely to work better than others.

At a high level, regulation includes governmental rules or directives that restrict how entities can legally operate. This policy lever works best in sectors where the relevant actors are known – or at least identifiable – and their operations sufficiently understandable such that compliance can be evaluated. AI development appears to be one of the sectors most resistant to this type of regulation: its actors are almost indistinguishable from other tech companies, seemingly requiring only computers, electricity, engineers and caffeine to forge ahead.

A secret AI project is easy to imagine. In fact, dozens probably exist today, whether bootstrapped or funded by public or private financiers. US investors have backed companies operating in regulatory gray zones before, and locating overseas, outside US jurisdiction, would hardly discourage foreign sources of capital. When there is both a moral and a financial argument for pushing the envelope, it's only a matter of time. The rise of the on-demand, distributed economy offers many examples.

Even if regulation were successfully implemented, it might have negative unintended consequences. Ostensibly, its goals would be to prevent or slow the development of unsafe or harmful AI. However, the costs of compliance would likely slow teams trying to build AI according to the law, while the ease of avoidance for those shirking regulation might give the latter parties a net advantage. If the first AI to cross the super-human tipping point ultimately dominates, this is a very bad trade to make.

A more effective approach might be standard-setting, which could provide clear, robust and proven guidelines that AI researchers can apply to beneficial AI efforts. If developed properly, vetted by peers and critics, and popularized by the press, policy leaders and think-tanks, these AI safety frameworks could converge, saving responsible R&D teams considerable time, effort and internal debate in choosing a safe architectural path.

Perhaps the most promising solution is public funding for AI development projects determined to meet both the "beneficial" and "safe" criteria. Resource infusions in the form of cash, human talent or previously classified technologies could provide a massive boost to the chosen projects, whether they focus on fundamental research, solving the control problem, or commercialization. To tilt the playing field in the right direction, the expected reward for qualifying teams would need to far exceed the opportunity cost of applying. This may be one of the few problems we can only spend our way out of.

A new Manhattan Project

Aggressive governmental funding of AI technology may seem optional on the surface, but the underlying dynamics are complex. Because of the advantages a superhuman AI would confer on its "owner", and the existential risks it would simultaneously present to humanity, the dynamics begin to resemble those behind the race toward an atomic bomb. Without an unprecedented level of international cooperation, a new effort paralleling the Manhattan Project might become the logical path forward for many nations and coalitions.

As reckless as this path might seem, if advanced AI is inevitable, then the only real variable is its timing. Just as nuclear weapons would have been less dangerous in a world without ICBMs, there are safety arguments favoring AI breakthroughs sooner rather than later. For example, unfriendly AI would be less potent without advanced nanotechnology or bioengineering tools ready and waiting for misuse.

If the US chooses this path along the razor's edge, a coalition of public and private organizations – including groups like DARPA and the Future of Life Institute – could help determine which organizations and individuals should receive extra resources. This government-sanctioned approach could also increase the odds that any end-state AI benefits society, since government would have a view of technology just over the horizon, giving it more time to plan and craft policy.

The path forward

Even in a world where super-human AI is never developed, public policy will need to address the increased leverage, automation and labor substitution already on our doorstep. Left unchecked, income disparity will likely widen due to AI technology alone. Effective solutions may come in the form of new markets, improved public benefit systems, novel concepts, or perhaps even AI itself.

In today's world, where the capacity of AI systems is steadily increasing, we face an uncertain future. The decades ahead may be the most prosperous in human history, or we could go down a much darker path. We can't simply "wait and see". We need to actively plan for the future we want, before someone or something else decides for us.