How AI is Reshaping Healthcare: Insights from Experts
Editor’s note: This article includes insights from Healthcare Dive’s recent live event, “AI and the Future of Healthcare.”
Artificial intelligence (AI) is often hailed as the next big thing in healthcare, promising to revolutionize the industry by automating repetitive tasks, reducing medical costs, and allowing clinicians to spend more time with patients. However, the road to widespread AI adoption in healthcare is riddled with challenges. From patient skepticism to ethical concerns like bias, healthcare organizations must navigate a complex landscape to implement AI effectively.
More than two years after generative AI tools like ChatGPT captured the public’s imagination, the healthcare sector is still grappling with how to regulate, test, and deploy these technologies. During a panel discussion hosted by Healthcare Dive on November 19, eight experts shared their insights on how healthcare organizations can successfully integrate AI while addressing its challenges.
Step 1: Assessing the Clinical Setting
Before diving into AI adoption, healthcare providers must evaluate the clinical setting where the technology will be used. According to Sonya Makhni, medical director for the Mayo Clinic Platform, not all AI tools are suitable for every clinical task.
“An AI algorithm that might be really good and appropriate for my clinical setting might not be as appropriate for another and vice versa,” Makhni explained. “Health systems need to understand what to look for, and they need to understand their patient population so they can make an informed decision on their own.”
However, analyzing AI tools can be daunting due to their complexity. Makhni noted that healthcare professionals are already stretched thin and cannot be expected to become experts in data science or AI to interpret these solutions effectively.
To address this, Makhni recommends turning to public and private consortia, such as the nonprofit Coalition for Health AI, which provides guiding principles for evaluating AI tools. These principles include safety, fairness, usefulness, transparency, explainability, and privacy.
Step 2: Addressing Patient Concerns
Even if providers are ready to adopt AI, they must also consider their patients’ comfort levels. A 2022 Pew Research Center survey found that 60% of U.S. adults would feel uncomfortable if their healthcare provider relied on AI for their medical care.
To ease these concerns, Maulin Shah, chief medical information officer at Providence, emphasized that AI currently plays a supportive role in healthcare. “AI is really just, in a lot of ways, a better way of supporting and providing decision support to your doctor, so that they aren’t missing things or so they can be suggesting things,” Shah said.
Patients may also find reassurance in knowing that AI has been used in healthcare for years. Aarti Ravikumar, chief medical information officer at Atlantic Health System, pointed to the artificial pancreas, a hybrid closed-loop insulin pump, as a transformative AI-driven tool for insulin-dependent patients.
“All of that work is being done using artificial intelligence algorithms,” Ravikumar said. “We have AI tools that are embedded within our medical devices or within our electronic medical record, and have for a long time.”
Ravikumar added that these tools do not replace clinicians in the decision-making process. “If we get to the stage that it’s going to automate decisions and remove the clinician from that decision-making process, I think then we’ll have to definitely explain a lot more,” she said.
Step 3: Tackling Bias and Errors
One of the most significant challenges in implementing AI in healthcare is addressing bias and errors. The stakes are high: bias or “hallucinations” (instances in which AI generates false or misleading information) can directly disrupt patient care.
Bias is not a new issue in healthcare, and AI could either exacerbate or help mitigate it. Jess Lamb, a partner at McKinsey, pointed out that the healthcare system already has inherent biases. “There is a ton of bias in the healthcare system before we introduce AI, right? And so we have to remember we are not starting from a perfect place,” Lamb said.
She added, “The idea that we can actually use AI and use some of this deliberate monitoring to actually improve some of that in-going position that we’re in when it comes to bias in healthcare, I think is actually a huge opportunity.”
To minimize errors and bias, Aashima Gupta, global director of healthcare at Google Cloud, stressed the importance of keeping humans in the loop. Feedback from experts, nurses, and clinicians can make generative AI more effective for specific use cases. At Google, dedicated teams rigorously test AI models by deliberately attempting to “break” them, surfacing failure modes before the tools reach clinical settings.
Step 4: Navigating Regulations
While healthcare organizations work on implementing AI, the federal government and private consortia are still developing regulations for these tools. Micky Tripathi, assistant secretary for technology policy and acting chief AI officer at the Department of Health and Human Services (HHS), noted that AI adoption has outpaced regulatory efforts, creating pressure for the government to act quickly.
Tripathi emphasized the importance of public-private partnerships in shaping AI regulations. “There is a maturation process that’s going to go on here that I think is very much going to be a public, private thing,” he said.
He also raised questions about how regulations might prompt private companies to adopt standards and certifications for AI tools. The government already maintains standards under which electronic health record companies can apply for voluntary certification; a similar framework could be developed for AI models.
Step 5: Establishing Open Standards and Training
According to Sara Vaezy, chief strategy and digital officer at Providence, open standards are crucial for addressing clinical AI use cases at the ground level. “We need open standards similar to all of the progress that has been made around interoperability,” Vaezy said.
She added that the gap between high-level consortia frameworks and on-the-ground implementation needs to be closed quickly. Open standards could help bridge this divide.
Training healthcare providers is another essential step in ensuring the safe and effective use of AI. Reid Blackman, founder and CEO of consultancy Virtue, argued that educating doctors, nurses, and other healthcare professionals about AI risks can help fill gaps in regulation and governance.
“Training is an essential part of, I don’t want to say guardrails, but it’s an essential part of making sure things don’t go sideways,” Blackman said.
Key Takeaways for Healthcare Organizations
- Evaluate the clinical setting before adopting AI tools to ensure they are appropriate for the task.
- Address patient concerns by emphasizing AI’s supportive role and its long-standing presence in healthcare.
- Mitigate bias and errors by keeping humans in the loop and rigorously testing AI models.
- Collaborate with public and private entities to develop regulations and standards for AI.
- Invest in training healthcare providers to understand and manage AI risks effectively.
As AI continues to evolve, healthcare organizations must strike a balance between innovation and caution. By addressing these challenges head-on, the industry can unlock AI’s potential to transform patient care while minimizing risks.
Originally Written by: Samantha Liss