This article was previously published in Health Affairs.
Patients' growing consumption of, and self-reliance on, informal information sources, particularly the internet, has long been a well-noted trend in the health care system. With the emergence of generative artificial intelligence (AI), however, this dependence has not only intensified but also rapidly extended to physicians and other health care providers.
While earlier AI models were largely limited to analyzing and interpreting existing data, generative AI systems are capable of creating new content. This content creation capability, coupled with the ease of use and accessibility provided through user-friendly interfaces, has led to a surge in its adoption and use by many professionals, including health care providers. The overreliance on digital information sources traditionally stemmed from patients seeking to better understand their conditions. Now, with generative AI, health care providers might also lean heavily on AI-assisted decision-making.
While the application of generative AI in health care has yielded promising results, it is crucial to recognize that this technology is not a panacea. It cannot be universally applied to solve all problems in every health care setting. Physicians and health care providers must deploy generative AI discerningly to mitigate unintended consequences; responsible use is key to harnessing its benefits while avoiding adverse outcomes.
Generative AI performs optimally in environments characterized by high repetition and low risk. This effectiveness stems from the technology’s reliance on historical data to identify patterns and make predictions, under the premise that future conditions will mirror those of the past. Utilizing such technology in low-risk situations, particularly where errors carry minor consequences, is prudent. This cautious approach offers several advantages: It enables health care providers and, more importantly, patients to gradually comprehend the AI’s capabilities and establish trust in its utility. Additionally, it affords AI developers valuable opportunities to rigorously test and refine their systems in a controlled environment before deployment in higher-stakes scenarios.
Potential health functions for generative AI
With this context, we can evaluate the suitability of generative AI within various health care activities.
Routine information gathering
Generative AI can enhance the efficiency of information collection and reporting by engaging with patients in understandable language, resolving uncertainties, and summarizing data for health care providers. An AI system can assist health care providers with collecting the medical histories of their patients by posing specific questions in a conversational manner. An additional advantage of AI is its ability to tap into health information exchanges (HIEs) to retrieve patient medical records, analyze them, and formulate pertinent inquiries based on the patient's medical background. For example, by cross-referencing a patient's medication list and current health complaints, AI can verify whether patients are adhering to their prescribed regimens or have discontinued any conflicting medications in light of new prescriptions. This process aids in assembling a more comprehensive medical history for the patient, which can then be used by physicians to provide better care.
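The medication cross-check described above can be illustrated with a minimal sketch. The drug names, interaction pairs, and helper functions here are hypothetical; a real system would draw on a curated drug-interaction database and the patient's HIE record rather than a hand-written list.

```python
# Hypothetical sketch: flag possible conflicts between a patient's current
# medication list and a newly prescribed drug, so an intake assistant can
# ask a targeted follow-up question. Interaction pairs are illustrative only.
KNOWN_CONFLICTS = {
    frozenset({"warfarin", "aspirin"}),
    frozenset({"lisinopril", "spironolactone"}),
}

def conflicting_medications(current_meds, new_prescription):
    """Return current medications that may conflict with the new prescription."""
    new = new_prescription.lower()
    return [m for m in current_meds
            if frozenset({m.lower(), new}) in KNOWN_CONFLICTS]

def follow_up_question(current_meds, new_prescription):
    """Phrase a conversational prompt for the patient when a conflict is found."""
    conflicts = conflicting_medications(current_meds, new_prescription)
    if not conflicts:
        return None
    return (f"Your record lists {', '.join(conflicts)}. Have you stopped "
            f"taking it since starting {new_prescription}?")
```

A question generated this way would supplement, not replace, the clinician's own medication reconciliation.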
Moreover, patients who are already accustomed to AI applications in various settings may find it easier to adapt to and trust similar AI technologies in health care. The tasks these AI systems perform tend to be repetitive and carry a relatively low risk, which aligns well with the capabilities of current generative AI technologies. Such systems are adept at handling these processes and can perform at a level that is generally considered satisfactory within this domain.
Diagnosis
AI has shown potential in enhancing diagnostic procedures, especially for conditions with substantial data availability. Nevertheless, achieving accurate diagnoses and mitigating biases remain challenges, particularly for less common diseases with limited data representation. The effectiveness of AI in diagnosing rare diseases is hindered by this scarcity of data: the AI might not perform as well because the learning sample is insufficient. Even for common conditions, where ample data exists, it is crucial that AI systems have access to comprehensive datasets, both to improve their performance and, as addressed below, to avoid the development of a balkanized AI landscape in which big health systems with access to large amounts of proprietary data widen their advantages over their smaller counterparts. Currently available generative AI technologies, such as ChatGPT, are trained on publicly available data only. Without incorporating the rich medical histories collected from extensive efforts to digitize health care records, reliance on generic AI models for medical diagnostics would be premature. Therefore, health care providers should exercise caution in deploying generative AI for diagnostics until they can train the AI on extensive medical datasets.
Even when health care providers have trained their AI systems on sufficiently large medical datasets, it is important that they mitigate the remaining risks. They should design workflows in which AI acts as a valuable assistant in the diagnostic process, supporting rather than replacing physicians.
Treatment
While AI may have potential applications in the diagnostic process, its use in treatment raises significant challenges, particularly due to accountability and liability concerns, issues with patients' trust and acceptance, and technological and practical limitations. Health care providers bear the ultimate responsibility for the treatments they administer. In malpractice cases, it is the providers who must justify their decisions. Altering the existing legal framework to shift treatment responsibility to AI developers seems improbable, and it would likely pose too great a risk for AI developers to assume liability for malpractice. Furthermore, patient trust in AI-managed treatments has not yet reached a level that would support widespread implementation.
AI currently lacks the advanced technological capability to replicate the nuanced tasks physicians perform beyond simple medication management. Treatments are often highly individualized, which does not align with AI’s strengths in high-repetition, low-risk tasks. Given these complexities, the integration of AI into medical treatment processes appears unlikely in the near future.
Post-treatment monitoring and follow-up
This area holds considerable promise for AI deployment, driven by two main factors. First, while patient adherence to post-treatment advice is crucial, medical providers have limited means to ensure compliance. Non-adherence can diminish treatment effectiveness, negatively affecting patient health and potentially resulting in financial repercussions for providers. Second, the proliferation of wearable technology, smart devices, and smartphones equipped with an array of sensors offers an unprecedented opportunity to monitor patient behavior outside clinical settings. AI can leverage this data to provide real-time monitoring and personalized recommendations and interventions. With access to such extensive data, AI can also enable medical providers to proactively address patient health deterioration by alerting providers when immediate medical attention is necessary.
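As a toy illustration of the monitoring loop described above, the sketch below scans a stream of wearable readings and flags values outside a safe range. The metric names and thresholds are invented for illustration; a deployed system would use clinically validated ranges and route alerts through the provider's care team.

```python
# Toy sketch: flag wearable readings that fall outside an illustrative
# safe range so a care team can be alerted. Thresholds are invented.
SAFE_RANGES = {
    "heart_rate": (40, 120),  # beats per minute
    "spo2": (92, 100),        # blood oxygen saturation, percent
}

def alerts(readings):
    """Return (timestamp, metric, value) tuples for out-of-range readings."""
    flagged = []
    for r in readings:
        lo, hi = SAFE_RANGES[r["metric"]]
        if not lo <= r["value"] <= hi:
            flagged.append((r["time"], r["metric"], r["value"]))
    return flagged
```

Even a simple rule-based filter like this determines only *when* to escalate; the decision about what to do remains with the provider.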
Population health management
Leveraging extensive datasets from electronic health records (EHRs) and HIEs, medical providers can significantly improve the management of patient populations. This can be done even more effectively through the integration of predictive analytics, utilizing AI to identify the most at-risk patients who would substantially benefit from timely medical interventions. For instance, AI algorithms can be trained to assess the likelihood of hospital readmissions post-discharge by examining a set of patient characteristics. Following these predictions, customized care plans can be formulated with direct human involvement to ensure that such patients receive necessary support to prevent further serious health events.
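As a rough illustration of the readmission-risk idea, the sketch below scores patients with a hand-set logistic model. The features, weights, and threshold are invented for illustration; a production model would be trained on the provider's own EHR and HIE data and validated clinically before use.

```python
import math

# Illustrative, hand-set weights; a real model would be fit to historical
# discharge records, not written by hand.
WEIGHTS = {"age_over_65": 0.8, "prior_admissions": 0.6, "chronic_conditions": 0.5}
INTERCEPT = -2.0
THRESHOLD = 0.5  # flag patients whose estimated risk exceeds 50 percent

def readmission_risk(patient):
    """Estimate readmission probability with a logistic score."""
    z = INTERCEPT + sum(WEIGHTS[f] * patient.get(f, 0) for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def flag_at_risk(patients):
    """Return IDs of patients whose estimated risk crosses the threshold,
    for human care teams to review and plan follow-up support."""
    return [p["id"] for p in patients if readmission_risk(p) > THRESHOLD]
```

Consistent with the text, the model only surfaces candidates; the customized care plans are formulated with direct human involvement.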
Implementing these AI applications may appear straightforward on the surface. However, it is critical to acknowledge that their effectiveness hinges on the availability of substantial and diverse datasets. Information beyond what is traditionally captured in EHRs and HIEs, such as patients’ social determinants, lifestyle choices, and daily activities, plays a crucial role in their health outcomes. Unfortunately, there is often a lack of systematically compiled data in these areas, which can lead to suboptimal performance of current predictive models.
To enhance the performance of predictive AI models for population health management purposes, it is important that AI systems access and analyze considerably larger and more varied datasets. This could be feasibly achieved through the integration of information gathered from wearable technologies and smart devices. Such devices can continuously monitor and record a wealth of health-related data, offering a more comprehensive view of a patient’s health profile. Incorporating this data could lead to more accurate predictions and, consequently, more effective intervention strategies, paving the way for a more proactive and personalized approach to health care.
Transparency and patient trust
To optimize the deployment of AI in health care environments, it is paramount to foster a climate of transparency among AI developers and facilitate a synergistic relationship between health care professionals and technology experts. This collaboration is essential to ensure that the recommendations made by AI are both medically sound and meticulously scrutinized for accuracy, minimizing the potential for errors that may stem from defective data inputs or biased algorithms.
Furthermore, there is a profound need for openness in patient communications. Patients must be thoroughly informed about the role AI plays in their health care journey. It is equally vital that they understand the privacy implications inherent in their consent to use AI-driven tools, particularly when data collection extends beyond traditional medical records to include information sourced from wearable devices and smart technology.
The imperative to educate patients on the utilization of their data, the privacy safeguards in place, and the nuanced benefits and risks associated with AI in health care is a pivotal aspect of enabling informed decision-making. This education goes beyond fulfilling legal requirements; it serves as a foundational element in fortifying the trust between patients and the evolving health care system amidst its technological transformation.
Break data monopolies with HIEs
Addressing the potential exacerbation of existing monopolies within the health care market is, perhaps, one of the most pressing concerns in this digital transition. As AI systems depend on substantial volumes of high-quality data for optimal performance, larger medical providers with extensive market share, and consequently more data, may strengthen their positions, inadvertently leading to increased health care costs. This scenario places smaller, independent providers at a competitive disadvantage, unable to leverage AI to the same extent in enhancing health care delivery. Such a disparity could widen the gap in care quality and further disadvantage underserved communities.
To mitigate this, it is crucial for industry leaders, regulatory bodies, and health care consortia to spearhead initiatives that democratize access to medical data for AI development. HIEs could be instrumental in this endeavor: They could serve as aggregators and integrators of data from a multitude of providers. By centralizing such data, HIEs could facilitate the deployment of AI systems capable of learning from vast and diverse medical records.
More importantly, HIEs could offer AI as a shared service to their affiliates, ensuring that all member entities, regardless of size, can benefit from insights drawn from larger datasets. Such a collaborative approach could help level the playing field, allowing smaller providers to enhance their service quality through AI. This would contribute to a more equitable health care landscape where technology serves as a bridge rather than a barrier.
Acknowledgements and disclosures
Niam Yaraghi regularly consults with various health information exchange organizations.