Hospitals, tech companies confront AI governance

With its potential to lower costs, alleviate physician burnout, and enhance patient outcomes, AI has captured the attention of hospitals, developers, and policymakers alike. Yet as this transformative technology permeates the healthcare ecosystem, it brings with it a host of challenges and uncertainties, among them accuracy, bias, and regulatory oversight.

Challenges of AI Transparency

As hospitals and developers dive deeper into the realm of AI, they are confronted with a paradox: while AI holds the promise of unlocking unprecedented insights and efficiencies, its inner workings often remain a mystery. Heather Lane, a senior architect at Athenahealth, describes such algorithms as a “black box,” where the intricacies of decision-making remain elusive. This opacity poses a significant challenge for oversight and accountability, especially as AI algorithms grow increasingly complex and outpace traditional governance structures.
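
To make the oversight problem concrete, one common way practitioners probe an opaque model is permutation importance: shuffle one input at a time and watch how much performance degrades. The sketch below uses scikit-learn on synthetic data; it illustrates the general technique, not any vendor's actual tooling.

```python
# Probing an opaque model with permutation importance (synthetic data only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Treat the fitted model as a black box: from here on we only query it.
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature in turn and measure the drop in accuracy;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Checks like this give reviewers a partial window into a model's behavior, though they fall well short of the full explainability that oversight bodies might want.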

Government Intervention

Recognizing the need for regulatory guidance, policymakers in Washington have initiated efforts to develop a comprehensive strategy for overseeing AI in healthcare. The creation of a new task force within the Department of Health and Human Services (HHS) marks a crucial step toward addressing these concerns. However, as discussions unfold, a delicate balance must be struck between fostering innovation and safeguarding against potential risks. The convergence of interests between healthcare stakeholders and government regulators underscores the urgency of finding common ground—a regulatory framework that promotes responsible AI adoption without stifling progress.

At the forefront of this technological revolution are giants like Google and Microsoft, which have released a variety of AI tools tailored for healthcare applications. Partnering with major hospital chains and electronic health record (EHR) vendors, these tech titans are reshaping the landscape of medical practice. From predictive analytics to generative AI, hospitals now have an array of options at their disposal, each promising to enhance efficiency and quality of care.

Generative AI, in particular, has captured the attention of healthcare innovators since OpenAI's launch of ChatGPT. This human-like chatbot, powered by GPT technology, represents a significant shift in how we interact with AI. While speculation persists that AI could one day supplant physicians, current applications remain focused on low-risk, high-reward administrative tasks. Institutions like Stanford Health Care and Vanderbilt University Medical Center are leveraging generative AI to streamline workflows and improve communication between clinicians and patients.
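
As an illustration of that draft-and-review pattern, the sketch below shows how a patient-portal reply might be drafted with the OpenAI Python SDK and held for clinician approval. The model name, prompt, and helper function are hypothetical; the article does not describe any hospital's actual implementation.

```python
# Sketch of a draft-and-review workflow for patient messages.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; details here are hypothetical.
from openai import OpenAI

client = OpenAI()

def draft_patient_reply(patient_message: str) -> str:
    """Generate a draft reply for a clinician to edit -- never to auto-send."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; real deployments pin a vetted model
        messages=[
            {"role": "system",
             "content": ("Draft a courteous reply to a patient portal message. "
                         "Do not diagnose; flag anything urgent for the clinician.")},
            {"role": "user", "content": patient_message},
        ],
    )
    return response.choices[0].message.content

draft = draft_patient_reply("Can I take my blood pressure pill with breakfast?")
print(draft)  # A clinician reviews and edits the draft before anything is sent.
```

Keeping a clinician in the loop in this way is what keeps such use cases on the low-risk end of the spectrum.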

AI and Ethical Concerns

However, AI’s inherent limitations, including model drift and bias, pose significant challenges for healthcare organizations striving to uphold ethical standards. Bias, in particular, emerges as a pressing issue in an industry grappling with historical disparities in care delivery. If left unchecked, biased algorithms risk perpetuating inequalities and exacerbating existing disparities.
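
What "checking for bias" looks like in practice varies, but one common audit compares an error rate across patient cohorts. The sketch below, using synthetic data and a hypothetical group label, compares false negative rates (the share of true cases a model misses) between two groups.

```python
# Minimal fairness audit: compare false negative rates across cohorts.
# Data and the group label are synthetic; real audits use clinically
# meaningful cohorts and several complementary metrics.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=10, random_state=1)
group = np.random.default_rng(1).integers(0, 2, size=len(y))  # hypothetical cohorts

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=1)
pred = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te)

for g in (0, 1):
    positives = (g_te == g) & (y_te == 1)   # true cases in this cohort
    fnr = np.mean(pred[positives] == 0)     # share the model missed
    print(f"group {g}: false negative rate {fnr:.3f}")
```

A persistent gap between the two rates is the kind of signal a governance committee would want surfaced before deployment.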

To mitigate these risks, hospitals and EHR vendors have implemented rigorous internal controls and validation processes. Meditech and Epic, among others, subject their AI models to extensive testing and monitoring to ensure reliability and accuracy. Additionally, governance committees and oversight structures have been established to scrutinize AI deployments and ensure adherence to ethical standards. Institutions like Highmark Health and Providence are leading the charge in developing comprehensive frameworks for evaluating AI applications and monitoring their impact on patient care.
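
The article does not detail any vendor's monitoring pipeline, but one widely used drift check is the population stability index (PSI), which flags when a feature's live distribution departs from what the model saw during training. A minimal sketch, with a rule-of-thumb threshold rather than any vendor's policy:

```python
# Population stability index (PSI): a common drift check comparing a
# feature's distribution at training time against live data.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample and a live sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # feature as seen at training time
live = rng.normal(0.3, 1.0, 10_000)      # shifted live distribution

print(f"PSI = {psi(baseline, live):.3f}")  # rule of thumb: > 0.2 is major drift
```

In production, a check like this might run on each model input on a schedule, feeding the ongoing monitoring that governance committees review.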

The rapid evolution of AI technology poses formidable challenges for existing governance mechanisms. Traditional approaches to oversight may prove inadequate in the face of increasingly complex AI models. Evaluating the performance of generative AI, in particular, presents unique challenges, as the concept of “ground truth” becomes difficult to pin down in real-world clinical settings. As Michael Pencina of Duke Health observes, the notion of explainability becomes tenuous in the context of AI, where decisions are made based on intricate patterns and probabilistic reasoning.

The Road Ahead

While private-sector initiatives aim to set standards and best practices, many stakeholders still see a need for government intervention. As AI applications venture into higher-risk domains such as diagnostics, regulators must step in to ensure patient safety and uphold ethical standards.

In navigating this uncharted territory, one thing remains clear: collaboration and dialogue among industry, government, and academia are essential. By working together and embracing transparency, stakeholders can meet these challenges and unlock AI's full potential to revolutionize patient care and improve outcomes for all.