The belief that AI dominance is imperative for economic development, military control, and strategic competitiveness has accelerated AI development initiatives across countries. The release of national strategic plans has been accompanied by billions of dollars in investment, as well as concrete policies to attract relevant talent and technology. In our previous post, "How different countries view artificial intelligence," we presented a snapshot of governments' planning for AI, based on our analysis of 34 national strategic AI plans. That post described the plans and categorized countries by their coverage of various related concepts. In this post, we extend that analysis to examine what accounts for the variation in countries' AI plans.
These documents provide information about the planned use of AI within each country and signal each country's priorities for AI. For example, some plans mention the use of AI for weaponization while others condemn AI-enabled wars; some capture details of ethical framework design, while others give few or no clues about how AI governance will be ensured. The ability to correctly interpret these signals is useful for various internal and external stakeholders, such as citizens, international organizations (OECD, UN, etc.), other countries, and policy analysts, in predicting the future trajectories of AI. For example, India's AI plan signals concern for public consent by stating, "Upon proper and informed consent from the citizens, these anonymized data may be shared for the purpose of artificial intelligence and data analytics" (India AI Plan, 2018). Similarly, France's AI plan emphasizes ethics by design: "Looking beyond engineer training, ethical considerations must be fully factored into the development of artificial intelligence algorithms" (France AI Plan, 2018).
However, not all signals are created equal; they vary in both intention and trustworthiness. This is similar to buying a used car: the seller may say that a car is highly reliable but then offer it for a price far below market value. This unintentional (inadvertent) signal suggests that the car may have an undisclosed problem. Likewise, the seller may say that the car is highly reliable but then refuse to give any sort of mechanical warranty on it, evidence that the signal is not trustworthy.
In much the same way, the signals in national AI plans require interpretation. Correctly interpreting them allows the reader to understand a country's motivations, such as why certain countries plan for weaponization while others are more concerned with the ethics of AI. Signals can illuminate the real drivers of AI development. We classified signals into four categories (a minimal coding sketch follows the list):
- Traditional signals—Both deliberate and true. An example of a traditional signal is a plan stating the accurate amount of investment made in AI research in healthcare.
- Inadvertent disclosure signals—Transmit true information but are not sent deliberately. An example would be a country planning to spend a great deal of money on infrastructure, which reveals a belief that the country is deficient in that area.
- Opportunistic signals—Sent deliberately but not true. A country might say it intends to use AI for public services when it actually intends to use it for warfare-based systems.
- Mixed signals—Transmit false information unintentionally. An example of a mixed signal in an AI plan is a declaration that anonymized public data will be used in AI systems when the country then fails to do so.
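Because the taxonomy is simply a 2x2 over intention (deliberate vs. inadvertent) and veracity (true vs. false), it can be coded mechanically. Below is a minimal Python sketch of that mapping; the names `Signal` and `classify_signal` are our own illustration, not part of any published coding tool.

```python
from enum import Enum

class Signal(Enum):
    """The four signal types, indexed by intention and veracity."""
    TRADITIONAL = "deliberate, true"
    INADVERTENT_DISCLOSURE = "inadvertent, true"
    OPPORTUNISTIC = "deliberate, false"
    MIXED = "inadvertent, false"

def classify_signal(is_deliberate: bool, is_true: bool) -> Signal:
    """Map the two binary dimensions of a signal to one of the four types."""
    if is_deliberate:
        return Signal.TRADITIONAL if is_true else Signal.OPPORTUNISTIC
    return Signal.INADVERTENT_DISCLOSURE if is_true else Signal.MIXED

# A stated goal contradicted by contextual evidence would code as:
print(classify_signal(is_deliberate=True, is_true=False))  # Signal.OPPORTUNISTIC
```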
Evaluating the plans
We did the analysis in two steps. First, we looked at the plans to determine their stated outcomes and looked for differences by type of government (e.g., democratic or authoritarian). Outcome data are the goals or outcomes stated for developing a national AI strategy. We identified five outcomes: AI Research, AI Data, Algorithmic Ethics, AI Governance, and Use of AI in Public Services.
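As an illustration of this first step, each plan can be coded for government type and the outcomes it addresses, and outcome coverage can then be tallied by regime type. The sketch below uses hypothetical codings and our own function name `outcome_coverage`; it is not the study's actual dataset or tooling.

```python
from collections import Counter

# Hypothetical codings of three plans (illustrative only, not the study's data).
plans = {
    "Country A": {"government": "democratic",
                  "outcomes": {"AI Research", "AI Data", "Algorithmic Ethics"}},
    "Country B": {"government": "authoritarian",
                  "outcomes": {"AI Research", "AI Data"}},
    "Country C": {"government": "democratic",
                  "outcomes": {"AI Research", "AI Governance",
                               "Use of AI in Public Services"}},
}

def outcome_coverage(plans: dict, government: str) -> Counter:
    """Tally how often each outcome appears among plans of one government type."""
    counts = Counter()
    for record in plans.values():
        if record["government"] == government:
            counts.update(record["outcomes"])
    return counts

print(outcome_coverage(plans, "democratic"))
```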
The results indicated that bolstering AI research and access to the data required for AI system development are the most widely shared outcomes among all countries. The countries releasing AI plans treat AI research capability and data accessibility as necessary preconditions for achieving the other outcomes.
Algorithmic ethics and AI governance showed more variation across countries. Democratic countries led authoritarian countries in addressing problems around algorithmic ethics and governance. However, among democratic countries, those with an immature technical environment were more likely than those with a mature technical environment to address ethics and governance. We speculate that this reflects such countries playing "catch up" on issues with which they have little experience. These countries include New Zealand, India, Lithuania, Spain, Serbia, the Czech Republic, Mexico, Italy, and Uruguay. The findings also indicate that countries lagging technically in this area focus on ethics and governance not just because they are behind, but also because they are putting in place the legal frameworks to defend against external uses of AI and to shape the direction of their own internal development. This strategy, while admirable, may slow the spread of AI from other countries that are less attentive to ethical and governance considerations.
Lastly, the results indicated that some countries, despite asserting an interest in using AI for public services, lacked supporting contextual factors. This casts doubt on the validity of the oft-stated objective that national AI adoption is meant to improve the quality of life for citizens, and it suggests that other motivations are more likely.
Intended outcomes
In the second step, we sought to understand how to interpret the statements in the plans, assessing them on intentionality (did the country mean to say it?) and veracity (was what it said supported by other evidence?). If a piece of information given in an AI plan aligned with contextual conditions, we referred to it as validated information.
Based on the results of our analysis, we placed the five outcomes in an intentionality-and-veracity matrix. The first two outcomes, AI Research and AI Data, fall in the deliberate, high-veracity quadrant: countries expressed the intention to develop these capabilities, and that intention is validated by contextual conditions. Algorithmic Ethics and AI Governance are likewise validated by contextual conditions, but the signal is inadvertent rather than deliberate (it appears in contextual conditions rather than being expressed in the plans). Democratic countries deliberately signal the use of AI in public services, but the information is not validated by contextual conditions, placing it in the deliberate, low-veracity quadrant. Authoritarian countries' intention to use AI in public services is depicted inadvertently, and the veracity of the information is low, so it falls in the final quadrant (inadvertent, low-veracity).
| | Signal Veracity: High | Signal Veracity: Low |
| --- | --- | --- |
| Signal Intention: Deliberate | AI Research; AI Data | Use of AI in Public Services (democratic) |
| Signal Intention: Inadvertent | Algorithmic Ethics; AI Governance | Use of AI in Public Services (authoritarian) |
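The matrix can also be encoded as a small lookup structure, which makes it easy to query which outcomes occupy a given quadrant. This is a sketch of our own devising; the keys simply mirror the table above.

```python
# The intentionality/veracity matrix as a lookup table (mirrors the table above).
matrix = {
    ("deliberate", "high"): ["AI Research", "AI Data"],
    ("deliberate", "low"): ["Use of AI in Public Services (democratic)"],
    ("inadvertent", "high"): ["Algorithmic Ethics", "AI Governance"],
    ("inadvertent", "low"): ["Use of AI in Public Services (authoritarian)"],
}

# Outcomes signaled deliberately but not validated by context:
print(matrix[("deliberate", "low")])
```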
Since the geopolitics of AI is changing as the technology advances, unexpected trends have started to emerge. The results indicate that democratic countries have signaled greater concern for the ethics and governance of AI. However, among those democratic countries, the ones with less mature technology environments are more concerned about the human-centric use of AI. This is a direct indication that merely developing AI capabilities and systems will not be sufficient for sustainable deployment; human-centric AI, enabled by attention to ethical and governance issues, is also needed. This insight points to the beginning of a new trend in which the competition for AI dominance is gradually replaced by a sustainable equilibrium. The long-term deployment of AI depends on public trust and on strengthening the belief that technology exists to serve humans, not the other way around.
In many ways, the rush to AI is similar to the westward expansion of the U.S. in the 1800s, when numerous people and companies raced across the plains to establish outposts in the American West. On the surface, the settlers claimed to be fulfilling America's destiny of bringing education, technology, and modernization to the "uncivilized west," but in reality many settlers had motivations that were far less altruistic. We see this happening with national AI strategies: the plans reflect all manner of honorable goals for racing to AI implementation, but there is major variation in the intentionality and veracity of those plans. Much as settlers from the eastern U.S. were drawn west by economic incentives that often reached beyond the legal or ethical constraints of the state, efforts to stress AI governance, ethics, and fairness will need to rein in the bleeding edge of technological innovation. Westward expansion eventually ended, but not without many significant clashes and conflicts. Like the "wild west," the race to AI is far too large to be managed by one country or even a group of countries, and there is likely to be massive upheaval before the situation is resolved.