Geopolitical implications of AI and digital surveillance adoption


Executive summary

The increasing sophistication and spread of artificial intelligence (AI) and digital surveillance technologies have drawn concerns over privacy and human rights. China is indisputably one of the leaders in developing these technologies for both domestic and international use. However, other countries active in this space include the United States, Israel, Russia, multiple European countries, Japan, and South Korea. U.S. companies are particularly instrumental in providing the underlying hardware for surveillance technologies.

In turn, these technologies are used in a range of settings. Some of their most severe uses include spying on political dissidents and enabling the repression of the Uyghur and Turkic Muslim populations across China. However, concerns arise even in more “mundane” uses, such as one-to-one verification at banks and gyms. The higher-quality data collected in these settings can help companies improve the accuracy of their facial recognition technology, and over time these increasingly effective technologies can be deployed elsewhere for authoritarian purposes.

The United States and partner democracies have implemented sanctions, export controls, and investment bans to rein in the unchecked spread of surveillance technology, but the opaque nature of supply chains leaves it unclear how well these efforts are working. A major remaining vacuum is at the international standards level at institutions such as the United Nations’ International Telecommunication Union (ITU), where Chinese companies have been the lone proposers of facial recognition standards that are fast-tracked for adoption in broad parts of the world.

To continue addressing these policy challenges, this brief provides five recommendations for democratic governments and three for civil society. In short, these recommendations are:

  • The U.S. and its allies should demonstrate that they can produce a viable alternative model by proving that they can use facial recognition, predictive policing, and other AI surveillance tools responsibly at home.
  • The State Department should work with technical experts, such as those who convene at the Global Partnership on AI, to propose alternate facial recognition standards at the ITU.
  • The United States and like-minded countries should jointly develop systems to improve the regulation of data transfers and reduce risks.
  • The United States and partner democracies should subsidize companies to assist with creating standards to propose at bodies such as the ITU.
  • The National Science Foundation and the Defense Advanced Research Projects Agency should fund privacy-preserving research in computer vision — the field of deriving information from images or video.
  • Civil society organizations (CSOs) should engage in outreach efforts with local communities and community leaders to strengthen public discourse on the advantages and disadvantages of using AI in policing and surveillance.
  • CSOs should engage in or support research on issues related to rights abuses using AI and digital surveillance technologies and the export of these technologies.
  • CSOs should actively participate in the setting of international technology standards.


Acknowledgements and disclosures

Lori Merritt and Ted Reinert edited this paper, and Rachel Slattery provided layout. The authors would like to thank Steven Feldstein, Brian Kot, Chris Meserole, and Charles Rollet for helpful feedback on drafts.