AI in Healthcare: 6 Steps to Maximize Benefits and Minimize Risks

AI in Healthcare: Balancing Innovation with Safety

The rapid rise of artificial intelligence (AI) in healthcare is transforming how medical professionals diagnose and treat patients and manage their care. From AI-powered diagnostic tools to robotic surgical systems, the integration of AI into healthcare has brought groundbreaking advancements. However, as with any technological revolution, unintended consequences are inevitable. While many of these outcomes may be positive, some could pose risks to patient safety and system reliability.

To navigate this complex landscape, healthcare organizations and AI developers must work together to ensure that AI systems are robust, reliable, and transparent. This collaborative effort is essential to maximize the benefits of AI while minimizing potential harm. Two prominent researchers, Dean Sittig, PhD, from the University of Texas, and Hardeep Singh, MD, MPH, from Baylor College of Medicine, have outlined a roadmap for achieving this balance in an opinion piece published on November 27 in JAMA.

In their article, Sittig and Singh emphasize the importance of proactive measures to ensure AI safety and effectiveness. They argue that healthcare organizations must develop comprehensive AI safety assurance programs, monitor AI use, and engage both clinicians and patients in the process. “Monitoring risks is crucial to maintaining system integrity, prioritizing patient safety, and ensuring data security,” they write.

Here are six key recommendations from their paper that healthcare providers and AI developers should consider:

1. Conduct Real-World Clinical Evaluations

Before implementing any AI-enabled systems into routine care, healthcare organizations should either conduct or wait for real-world clinical evaluations published in reputable medical journals. Sittig and Singh stress the importance of independent testing and monitoring using local data to minimize risks to patient safety.

“Iterative assessments should accompany this risk-based testing to ensure that AI-enabled applications are benefiting patients and clinicians, are financially sustainable over their life cycles, and meet core ethical principles.”

By conducting these evaluations, healthcare organizations can ensure that AI systems are not only effective but also aligned with ethical standards and long-term sustainability goals.

2. Establish AI Governance Committees

To oversee the implementation and monitoring of AI systems, healthcare organizations should invite AI experts to join new or existing governance and safety committees. These experts could include data scientists, informaticists, operational AI personnel, human-factors specialists, or clinicians experienced in working with AI.

“All committee members should meet regularly to review requests for new AI applications, consider the evidence for safety and effectiveness before implementation, and create processes to proactively monitor the performance of AI-enabled applications they plan to use.”

Such committees can serve as a critical checkpoint to ensure that AI systems are safe, effective, and beneficial to both patients and clinicians.

3. Maintain an Inventory of AI Systems

Healthcare organizations should keep a detailed inventory of all clinically deployed AI-enabled systems. This inventory should include comprehensive tracking information, such as the AI version in use, the date and time of system use, patient and clinical user IDs, input data, and the AI’s recommendations or outputs.

“The committee should oversee ongoing testing of AI applications in the live production system to ensure the safe performance and safe use of these programs.”

Regularly reviewing this inventory can help organizations identify and address potential issues before they escalate.
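The tracking fields the authors describe can be pictured as a simple log record. The sketch below is a minimal, hypothetical illustration of one inventory entry; the field and function names are assumptions for clarity, not a standard or the authors' schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIInventoryRecord:
    """One illustrative entry in an AI-system inventory log.

    Field names are hypothetical; a real organization would adapt
    them to its own record-keeping and privacy requirements.
    """
    system_name: str      # which deployed AI-enabled system was used
    model_version: str    # the AI version in use
    used_at: str          # date and time of system use (ISO 8601)
    patient_id: str       # patient identifier
    clinician_id: str     # clinical user identifier
    input_summary: dict   # input data provided to the system
    output: dict          # the AI's recommendation or output

def make_record(system_name, model_version, patient_id,
                clinician_id, input_summary, output):
    """Create a record stamped with the current UTC time."""
    return AIInventoryRecord(
        system_name=system_name,
        model_version=model_version,
        used_at=datetime.now(timezone.utc).isoformat(),
        patient_id=patient_id,
        clinician_id=clinician_id,
        input_summary=input_summary,
        output=output,
    )

# Hypothetical usage: log one use of a deployed risk model.
rec = make_record("sepsis-risk-model", "2.3.1", "P-001", "C-042",
                  {"lactate": 2.1}, {"risk_score": 0.87})
print(asdict(rec)["model_version"])
```

Keeping entries in a structured form like this is what makes the committee's ongoing review of live performance practical, since records can be filtered by system, version, or time period.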

4. Provide Training for Clinicians

To ensure that clinicians are well-prepared to use AI systems, healthcare organizations should develop high-quality training programs. These programs should include a formal consent-style process, complete with signatures, to confirm that clinicians understand the risks and benefits of using AI tools.

“Take steps to ensure that patients understand when and where AI-enabled systems were developed, how they may be used, and the role of clinicians in reviewing the AI system’s output before giving their consent.”

By educating both clinicians and patients, healthcare organizations can foster trust and transparency in the use of AI technologies.

5. Create a Reporting Process for AI-Related Issues

Healthcare organizations should establish a clear process for reporting AI-related safety issues. This process should involve a multidisciplinary approach to analyze and mitigate risks effectively.

“Healthcare organizations should also participate in national postmarketing surveillance systems that aggregate deidentified safety data for analysis and reporting.”

By participating in these surveillance systems, organizations can contribute to a broader understanding of AI safety and help improve industry-wide practices.

6. Develop Emergency Protocols for AI Malfunctions

In the event of an urgent malfunction, healthcare organizations must have clearly written procedures, and staff with the designated authority, to disable or shut down AI-enabled systems around the clock. This mirrors the preparation required for periods of electronic health record (EHR) downtime.

“Regularly assess how [your] AI systems affect patient outcomes, clinician workflows, and system-wide quality.”

If an AI system fails to meet its pre-implementation goals, it should be revised or, if necessary, decommissioned entirely. This ensures that only reliable and effective systems remain in use.

As AI continues to revolutionize healthcare, the need for robust safety measures and transparent practices becomes increasingly critical. By following these six recommendations, healthcare organizations can harness the power of AI while safeguarding patient safety and system integrity.

For more details, you can read the full opinion piece in JAMA.

Originally Written by: Melanie Evans
