Artificial intelligence (AI) technologies have improved rapidly over the past decade,1 largely driven by advances in machine learning, which is closely related to data science and statistical prediction.2 Several aspects of the health care system involve prediction, including diagnosis, treatment, administration, and operations. This connection between machine learning’s capabilities and the needs of the health care system has led to widespread speculation that AI will have a large impact on health care.
For instance, Eric Topol’s “Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again” highlights AI’s potential to improve the lives of doctors and patients. The progress and promise of clinical AI algorithms range from image-based diagnosis in radiology and dermatology to surgery, and from patient monitoring to genome interpretation and drug discovery. There are dozens of academic and industry conferences on the opportunity for AI in health care. For example, AI Med and the Ai4 Healthcare Summit are two of many conferences aimed at facilitating the adoption of AI in health care organizations. ML4H and CHIL, in contrast, provide forums for scholars to present the latest advances in academic research. The major medical journals have all dedicated space to research articles and editorials about AI. These sentiments have been detailed in numerous reports from nonprofits, private consultancies, and governments, including the World Health Organization and the U.S. Government Accountability Office.3
In 2019, 11% of American workers were employed in health care, and health care expenditures accounted for over 17% of gross domestic product. U.S. health care spending per capita is higher than in other OECD countries.4 If AI technologies have an impact on health care similar to their impact on industries such as retail and financial services, then health care can become more effective and more efficient, improving the daily lives of millions of people.
However, despite the hype and potential, there has been little AI adoption in health care. We provide an early glance into AI adoption patterns as observed through U.S. job advertisements that require AI-related skills. Job advertisements provide a window into technology diffusion patterns.5 As a technology evolves and spreads across application sectors, labor demand adjusts to include the type of skills required to adopt the technology, up to a point when the technology is sufficiently pervasive that such skills are no longer explicitly listed in job postings.
Figure 1 shows the percentage of U.S. job advertisements that require AI-related skills by industry (defined by two-digit NAICS codes) for the years 2015-2018.6 This data, collected by Burning Glass Technologies,7 is based on over 40,000 online job boards and company websites. At the top of the figure is the information industry, which includes large technology companies such as Google and Microsoft. More than 1 in 100 jobs in the information industry require some AI-related skills. Professional services and finance also rank relatively high. The next few industries (manufacturing, mining, and agriculture) may be a surprise to those who have paid less attention to how AI has enabled opportunities in robotics and distribution. At the bottom is construction. Just above construction is health care and social assistance, where 1 in 1,850 jobs required AI skills. The relatively low rate of AI skills in job postings is not driven by social assistance.8 Even among the relatively skilled job postings in hospitals, which include doctors, nurses, medical technicians, research lab workers, and managers, only approximately 1 in 1,250 postings required AI skills. This is lower than in other skilled industries such as professional, scientific, and technical services; finance and insurance; and educational services.
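To make these industry comparisons concrete, here is a minimal sketch, in Python with pandas, of how such shares can be computed from posting-level data. The table schema, column names, and values are our own illustration and assumptions, not Burning Glass Technologies’ actual data format.

```python
import pandas as pd

# Toy postings table: one row per job ad, with the employer's two-digit
# NAICS code and a flag for whether the ad lists any AI-related skill.
# (Illustrative schema and values only.)
postings = pd.DataFrame({
    "naics2":      ["51", "51", "51", "62", "62", "62", "62"],
    "requires_ai": [True, True, False, True, False, False, False],
})

# Share of postings requiring AI skills, by industry.
ai_share = (
    postings.groupby("naics2")["requires_ai"]
    .mean()
    .sort_values(ascending=False)
)

# Express each share as "1 in N" postings, the framing used in the text
# (e.g., 1 in 1,850 for health care and social assistance).
print((1 / ai_share).round().astype(int))
```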
The skills listed in job postings are just one measure of technology adoption. Still, they allow for a systematic comparison across industries. While we expect these numbers to rise over time—both in and out of health care—health care appears to lag. This suggests a puzzle. How can we reconcile the hype around AI in health care with the relatively low rate of adoption?
Barriers to adoption of AI in health care
Our starting point is to understand how AI adoption in health care might vary with attributes identified as central to technology adoption. What lesson can we draw from observing prior waves of technological adoption in health care?
A first-order attribute emphasized by much of the literature is the role of complementary innovations in the successful adoption of AI and other information technology by companies.9 For example, the successful adoption of electronic medical records required innovation in integrating software systems and involved new processes for doctors, pharmacists, and others to interact.10 Human capital management software was most effectively deployed when firms also changed their processes for performance pay and human resources analytics.11 Internet adoption involved changing contracts with supply chain partners.12 These complementary innovations take resources and expertise, so they tend to be less costly in larger companies and in larger cities. We therefore expect to see more AI adoption in larger health care organizations and in larger cities.
To analyze this hypothesis in the context of AI adoption in health care, we focused on 1,840,784 job postings by 4,556 different hospitals. These included 1,479 postings, from 126 different hospitals, that required AI skills. Burning Glass Technologies identifies a comprehensive list of job postings that are categorized as requiring “AI skills”; examples include “Analytics Architect,” “Bioinformatics Analyst,” “Cardiac Sonographer,” “Physician – Internal Medicine,” and “Respiratory Therapist.” Overall, 60% of these AI jobs were clinical, 34% were administrative, and the remaining 6% were primarily research.
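As an illustration of this tabulation, the following sketch computes the role-type breakdown and the count of adopting hospitals from a posting-level table. The hospital IDs and role labels are hypothetical and assumed for illustration; they are not the study’s actual coding scheme.

```python
import pandas as pd

# Hypothetical subset of hospital postings flagged as requiring AI skills.
ai_postings = pd.DataFrame({
    "hospital_id": [101, 101, 205, 205, 310],
    "title": [
        "Analytics Architect",
        "Bioinformatics Analyst",
        "Cardiac Sonographer",
        "Physician - Internal Medicine",
        "Respiratory Therapist",
    ],
    "role_type": ["administrative", "research",
                  "clinical", "clinical", "clinical"],
})

# Breakdown by role type, analogous to the reported split of
# 60% clinical, 34% administrative, and 6% research.
print(ai_postings["role_type"].value_counts(normalize=True))

# Number of distinct hospitals posting at least one AI job
# (126 in the study).
print(ai_postings["hospital_id"].nunique())
```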
With just 1,479 AI job postings out of nearly two million, the main conclusion of the analysis has already been stated: surprisingly few jobs in health care required AI-related skills. Consistent with work on other information technologies, the 126 hospitals that posted these AI jobs have more employees and are located in larger cities. While it is still early in the diffusion of AI, this result is no surprise: just like electronic medical records, computers, and the business internet, AI adoption is more likely to start in big firms and big cities.13
In order to understand the kinds of complementary innovations that might lead to more adoption of AI in hospitals, it is useful to understand why hospitals might hesitate to adopt. Four important barriers to adoption are algorithmic limitations, data access limitations, regulatory barriers, and misaligned incentives.
Algorithmic limitations
Advances in neural networks have pushed forward the frontier of what AI can do, at the cost of interpretability. When neural networks are used, it is often difficult to understand how a specific prediction was generated; without substantial effort, some AI algorithms are so-called “black boxes.” As a result, if no one is proactively looking to identify problems with a neural network-generated algorithm, there is a substantial risk that the AI will produce solutions whose flaws are only discoverable after deployment; for examples, see work on “algorithmic bias.”14 This lack of transparency can reduce trust in AI and reduce adoption by health care providers, especially considering that doctors and hospitals will likely be held accountable for decisions that involve AI. The importance of complementary innovation in trustworthy AI, for example through technologies or processes that facilitate the interpretation of AI algorithms, is widely recognized, and several large-scale initiatives focus on developing and promoting trustworthy AI.15 Interpretable AI might increase trust by eliminating the black box problem, allowing health care workers to understand how an AI reaches a given recommendation. Others are developing clinical trial standards for AI systems.16 These innovations are likely to facilitate the adoption of AI in health care because they would allow health care professionals to better assess whether an AI reached its recommendation in a biased or incomplete manner.
Data access limitations
The performance of AI algorithms is contingent on the quality of the data available, so a second barrier to adoption is limited access to data. Medical data is often difficult to collect and difficult to access. Medical professionals often resent the data collection process when it interrupts their workflow, and the collected data is often incomplete.17 It is also difficult to pool such data across hospitals or across health care providers. Electronic Health Record (EHR) systems are largely not compatible across government-certified providers that service different hospitals and health care facilities.18 The result is data collection that is localized rather than integrated to document a patient’s medical history across providers. Without large, high-quality data sets, it is difficult to build useful AIs. This, in turn, means that health care providers may be slower to take up the technology.
Regulatory barriers
Some of the algorithmic and data issues derive from underlying regulatory barriers. Three types of regulations are particularly important. First, privacy regulations can make it difficult to collect and pool health care data. Given especially strong privacy concerns in health care, it may be too difficult to use real health data to train AI models as quickly or effectively as in other industries.19 Second, the regulatory approval process for a new medical technology takes time and subjects the technology to substantial scrutiny; innovations can take years to navigate the approval process. Third, liability concerns may pose a barrier, as health care providers may hesitate to adopt a new technology for fear of tort law implications.20 Regulation in health care is, appropriately, more cautious than regulation in many other industries. This suggests that reducing barriers to AI adoption in health care will require complementary innovation in regulation, ultimately allowing the opportunities from AI to be realized without compromising patient rights or quality of care. Complementary regulatory innovations could address all three barriers: who owns and can use health care data, how AI medical devices and software are approved, and where liability lies between medical providers and AI developers.
Misaligned incentives
Innovation in algorithmic transparency, data collection, and regulation are examples of the types of complementary innovations necessary before AI adoption becomes widespread. Another concern that we believe deserves equal attention is the role of decisionmakers. There is an implicit assumption that AI adoption will accelerate to the benefit of society once issues related to algorithm development, data availability and access, and regulation are solved. However, adoption ultimately depends on health care decisionmakers. Not infrequently, medical professionals are the decisionmakers, and AI algorithms threaten to replace tasks they perform.
For example, there is no shortage of warnings about radiologists losing their jobs. In 2016, Geoff Hinton, who won computer science’s highest award, the Turing Award, for his work on neural networks, said: “We should stop training radiologists now; it is just completely obvious deep learning is going to do better than radiologists.”21 This prediction was informed by the very promising advances of AI in image-based diagnosis. Yet there are still plenty of radiologists.
Why has Hinton’s prediction not yet come to pass? The challenges include lack of trust in the algorithms, challenges in data collection, and regulatory barriers, as noted above. They also include a misalignment of incentives. In our study analyzing AI adoption through job postings, we find that adoption indeed varies by type of job and by hospital management structure. AI skills are less likely to be listed in clinical roles than in administrative or research roles. Compared to hospitals more likely to be managed by doctors, hospitals with an integrated salary model, which are more likely to be led by individuals who have focused their careers on management and take a systematic approach to administration, have a higher rate of AI adoption for administrative and clinical roles but not for research roles. Teaching hospitals are no different from other hospitals in their adoption rate.
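One simple way to gauge whether a gap in adoption rates like this is statistically meaningful is a two-proportion z-test on posting counts. The sketch below uses made-up counts purely for illustration; these are not the study’s actual figures, and the study’s own analysis may differ.

```python
from math import sqrt

# Illustrative counts only (not the study's figures): AI postings and
# total postings at hospitals with an integrated salary model (ISM)
# versus other hospitals.
ai_ism, total_ism = 40, 30_000
ai_other, total_other = 25, 45_000

p1 = ai_ism / total_ism        # adoption rate, ISM hospitals
p2 = ai_other / total_other    # adoption rate, other hospitals

# Two-proportion z-test under the pooled null of equal adoption rates.
p_pool = (ai_ism + ai_other) / (total_ism + total_other)
se = sqrt(p_pool * (1 - p_pool) * (1 / total_ism + 1 / total_other))
z = (p1 - p2) / se
print(f"ISM rate: {p1:.5f}, other rate: {p2:.5f}, z = {z:.2f}")
```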
One interpretation of these patterns is that hospitals with an integrated salary model, and hence professional managers, have leaders that recognize the clinical and administrative benefits of AI, while other hospitals might have leaders that do not recognize the benefits. However, we have seen that there are several reasons why AI adoption might be slow in hospitals. In other words, even if professional managers are more likely to adopt AI, they are not necessarily right to engage in adoption at this stage. For example, while it may be that doctor-led hospitals have not adopted AI because they view it as a threat to their jobs, it may also be that doctor-led hospitals have leaders who have a better grasp of the other adoption challenges—algorithmic limitations, data access limitations, and regulatory barriers.
Policy implications
AI has received a great deal of attention for its potential in health care. At the same time, adoption has been slow compared to other industries, for reasons we have described: regulatory barriers, challenges in data collection, lack of trust in the algorithms, and a misalignment of incentives. Before discussing potential policy solutions to each of these, it is important to acknowledge that this may not be due to a market failure. AI adoption may be slow because it is not yet useful, or because it may not end up being as useful as we hope. While our view is that AI has great potential in health care, it is still an open question.
The regulatory barriers have the most direct policy implications. Innovation is needed in the approval process so that device makers and software developers have a well-established path to commercialization. Innovation is needed to enable data sharing without threatening patient privacy. Perhaps least controversially, clear rules on who is liable if something goes wrong would likely increase adoption.22 If we believe AI adoption will improve health care productivity, then reducing these regulatory barriers will have value.
Addressing the challenges in data collection and the lack of trust in algorithms is more a matter of continued research funding than of new regulation. Governments and nonprofits are already directing substantial research funds to these questions, particularly around trust. In terms of misaligned incentives, complementary innovation in management processes is difficult to achieve through policy. Antitrust policy to ensure competition could help, as competition has been shown to improve management quality. Otherwise, there are few policy tools that could change these incentives.23
Overall, relative to the level of hype, AI adoption has been slow in health care. Policymakers can help generate useful adoption with some innovative approaches to privacy and the path to regulatory approval. However, it might be the familiar tools that are most useful: clarify the rules, fund research, and enable competition.
Avi Goldfarb is a consultant with Goldfarb Analytics Corporation, which advises organizations on digital and AI strategy. The authors did not receive financial support from any firm or person for this article or from any firm or person with a financial or political interest in this article. Other than the aforementioned, no author is currently an officer, director, or board member of any organization with a financial or political interest in this article.
Footnotes
- Brynjolfsson, Erik, Tom Mitchell and Daniel Rock. “What Can Machines Learn, and What Does It Mean for Occupations and the Economy?” AEA Papers and Proceedings 108 (2018): 43-47. https://doi.org/10.1257/pandp.20181019.
- Agrawal, Ajay, Joshua Gans and Avi Goldfarb. Prediction Machines: The Simple Economics of Artificial Intelligence. Harvard Business Review Press, Boston MA (2018); Goldfarb, Avi, Bledi Taska and Florenta Teodoridis. “Could Machine Learning be a General Purpose Technology? A Comparison of Emerging Technologies Using Data from Online Job Postings.” NBER working paper #29767 (2022).
- United States Government Accountability Office. “Artificial Intelligence in Health Care.” November 2020. https://www.gao.gov/assets/gao-21-7sp.pdf; World Health Organization. “WHO issues first global report on Artificial Intelligence (AI) in health and six guiding principles for its design and use.” June 28, 2021. https://www.who.int/news/item/28-06-2021-who-issues-first-global-report-on-ai-in-health-and-six-guiding-principles-for-its-design-and-use; Spatharou, Angela and Solveigh Hieronimus. “Transforming healthcare with AI: The impact on the workforce and organizations.” McKinsey & Co., March 10, 2020. https://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/transforming-healthcare-with-ai.
- Nunn, Ryan, Jana Parsons and Jay Shambaugh. “A dozen facts about the economics of the US health-care system.” Brookings Institution, March 10, 2020. https://www.brookings.edu/research/a-dozen-facts-about-the-economics-of-the-u-s-health-care-system/.
- Tambe, Prasanna and Lorin Hitt. “Now IT’s Personal: Offshoring and the Shifting Skill Composition of the U.S. Information Technology Workforce.” Management Science 58 (2012): 678-695. https://doi.org/10.1287/mnsc.1110.1445.
- Goldfarb, Avi, Bledi Taska and Florenta Teodoridis. “Artificial Intelligence in Health Care? Evidence from Online Job Postings.” AEA Papers and Proceedings 110 (2020): 400-404. https://doi.org/10.1257/pandp.20201006.
- Burning Glass Technologies is an analytics software company that provides access to the near-universe of jobs that were posted in the United States since 2010. https://www.burning-glass.com/.
- We distinguish between hospitals and social assistance establishments by separately evaluating establishments belonging to the American Hospital Association (AHA) – hospitals – and those that are categorized under NAICS industry code 62 “Health Care and Social Assistance” but not associated with AHA – social assistance.
- Bresnahan, Timothy and Shane Greenstein. “Technical Progress and Co-invention in Computing and in the Uses of Computers.” Brookings Papers on Economic Activity, Microeconomics (1996): 1-83.
- Dranove, David, Chris Forman, Avi Goldfarb and Shane Greenstein. “The Trillion Dollar Conundrum: Complementarities and Health Information Technology.” American Economic Journal: Economic Policy 6, no. 4 (2014): 239-270. https://doi.org/10.1257/pol.6.4.239.
- Aral, Sinan, Erik Brynjolfsson and Lynn Wu. “Three-Way Complementarities: Performance Pay, Human Resource Analytics, and Information Technology.” Management Science 58, no. 5 (2012): 913-931. https://www.jstor.org/stable/41499529.
- Forman, Chris and Anne Gron. “Vertical Integration and Information Technology Investment in the Insurance Industry.” The Journal of Law, Economics, and Organization 27, no. 1 (2011): 180-218. https://doi.org/10.1093/jleo/ewp023.
- Dranove, David, Chris Forman, Avi Goldfarb and Shane Greenstein. “The Trillion Dollar Conundrum: Complementarities and Health Information Technology.” American Economic Journal: Economic Policy 6, no. 4 (2014): 239-270. https://doi.org/10.1257/pol.6.4.239; Forman, Chris, Avi Goldfarb and Shane Greenstein. “Understanding the Inputs into Innovation: Do Cities Substitute for Internal Firm Resources?” Journal of Economics & Management Strategy 17, no. 2 (2008): 295-316. https://doi.org/10.1111/j.1530-9134.2008.00179.x.
- Bembeneck, Emily, Rebecca Nissan and Ziad Obermeyer. “To stop algorithmic bias, we first have to define it.” Brookings Institution, October 21, 2021. https://www.brookings.edu/research/to-stop-algorithmic-bias-we-first-have-to-define-it/.
- Crawford, Kate, Roel Dobbe and Theodora Dryer, et al. “2019 Report.” AI Now, December 2019. https://ainowinstitute.org/AI_Now_2019_Report.pdf.
- Stern, Ariel Dora and W Nicholson Price, II. “Regulatory Oversight, Causal Inference, and Safe and Effective Health Care Machine Learning.” Biostatistics 21, no. 2 (2020): 363-367. https://doi.org/10.1093/biostatistics/kxz044.
- Topol, Eric. Deep Medicine. Basic Books, New York (2019).
- Reisman, Miriam. “EHRs: The Challenge of Making Electronic Data Usable and Interoperable.” Pharmacy and Therapeutics 42, no. 9 (September 2017): 572-575. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5565131/.
- Miller, Amalia R. and Catherine E. Tucker. “Can Health Care Information Technology Save Babies?” Journal of Political Economy 119, no. 2 (2011): 289-324. https://doi.org/10.1086/660083.
- Galasso, Alberto and Hong Luo. “Punishing Robots: Issues in the Economics of Tort Liability and Innovation in Artificial Intelligence.” In Ajay Agrawal, Joshua Gans and Avi Goldfarb, eds., The Economics of Artificial Intelligence: An Agenda, Chapter 20. University of Chicago Press (2019): 493-506.
- YouTube. “Geoff Hinton: On Radiology.” November 24, 2016. https://www.youtube.com/watch?v=2HMPRXstSvQ.
- Galasso, Alberto and Hong Luo. “Punishing Robots: Issues in the Economics of Tort Liability and Innovation in Artificial Intelligence.” In Ajay Agrawal, Joshua Gans and Avi Goldfarb, eds., The Economics of Artificial Intelligence: An Agenda, Chapter 20. University of Chicago Press (2019): 493-506.
- Bloom, Nicholas, Carol Propper, Stephan Seiler and John Van Reenen. “The Impact of Competition on Management Quality: Evidence from Public Hospitals.” The Review of Economic Studies 82, no. 2. (April 2015): 457-489. https://doi.org/10.1093/restud/rdu045.