Republicans Push Back Against HHS AI Assurance Labs, Citing Innovation and Regulatory Concerns
In a move that could reshape the future of artificial intelligence (AI) in healthcare, several Republican members of Congress are urging the U.S. Department of Health and Human Services (HHS) to reconsider its plans for government-administered AI assurance labs. Instead, they propose a model that collaborates with private industry, citing fears that regulatory overreach could stifle innovation.
Representatives Dan Crenshaw (R-Texas), Brett Guthrie (R-Kentucky), Jay Obernolte (R-California), and Dr. Mariannette Miller-Meeks (R-Iowa) expressed their concerns in a letter addressed to Micky Tripathi, the acting chief AI officer at HHS. The letter, which was sent as part of a broader discussion on AI regulation, highlights the lawmakers’ apprehensions about the potential consequences of the proposed assurance labs.
Why the Debate Matters
As the U.S. prepares for a change in administration in 2025, with deregulation a stated priority for the incoming Trump administration, the future of AI in healthcare hangs in the balance. The Republican lawmakers argue that the current direction of HHS could lead to regulatory capture, where large corporations dominate the market, leaving smaller innovators at a disadvantage.
The letter to Tripathi, who also serves as Assistant Secretary for Technology Policy and National Coordinator for Health IT, seeks clarity on the objectives of HHS’s ongoing reorganization. This restructuring effort, announced in July, includes the transformation of the Office of the National Coordinator for Health Information Technology (ONC) into the new Assistant Secretary for Technology Policy (ASTP). The ASTP has been tasked with expanded responsibilities, including oversight of healthcare AI, and has received additional funding and staff to support its mission.
Concerns Over Assurance Labs
One of the primary issues raised in the letter is the creation of fee-based assurance labs. These labs, as envisioned by HHS, would supplement the U.S. Food and Drug Administration’s (FDA) review of AI tools. However, the lawmakers argue that this approach could lead to conflicts of interest, particularly if the labs are composed of companies that compete in the same market.
“We are particularly troubled by the possible creation of fee-based assurance labs which would be comprised of companies that compete,” the representatives wrote. They warned that such a system could give larger, established tech companies an unfair advantage, potentially stifling innovation and harming smaller players in the industry.
The letter also questions the statutory authority of the ASTP/ONC to create these labs and its role in the broader healthcare system. The lawmakers included a list of eleven questions and requested detailed responses by December 20. A spokesperson for ASTP said the agency could not comment on the letter, and the Coalition for Health AI (CHAI) has not provided a response.
Broader Implications and Historical Context
This is not the first time concerns have been raised about the regulation of AI in healthcare. Representative Miller-Meeks previously questioned the FDA’s then-director of the Center for Devices and Radiological Health about CHAI and its members. During a House Energy and Commerce Health Subcommittee hearing, she expressed skepticism about the coalition’s role, particularly given its ties to major tech companies like Google and Microsoft, which are founding members of CHAI. She also argued that the involvement of the Mayo Clinic, which employs some of CHAI’s leaders and has over 200 AI deployments, could further complicate the issue.
“It does not pass the smell test,” Miller-Meeks said during the hearing, suggesting that the situation showed “clear signs of attempt at regulatory capture.”
CHAI has been actively working to establish standards for healthcare AI transparency, aligning its efforts with the ASTP’s requirements for certifying health IT. The coalition has also announced plans to release an “AI nutrition label” to improve transparency and accountability in AI tools used in healthcare.
The Role of Assurance Labs
Dr. John Halamka, president of the Mayo Clinic Platform, has been a vocal advocate for the potential benefits of AI in healthcare, while also acknowledging its risks. Speaking at HIMSS24 earlier this year, he discussed the importance of assurance labs in identifying and mitigating bias in AI algorithms.
“Mayo has an assurance lab, and we test commercial algorithms and self-developed algorithms,” Halamka said. “And what you do is you identify the bias and then you mitigate it. It can be mitigated by retuning the algorithm to different kinds of data, or just an understanding that the algorithm can’t be completely fair for all patients. You just have to be exceedingly careful where and how you use it.”
Since its founding in 2021, CHAI has focused on delivering AI transparency and addressing algorithmic bias in healthcare. The coalition has worked to create guidelines and guardrails that account for government concerns, building on frameworks like the White House’s AI Bill of Rights and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. These efforts align with President Joe Biden’s executive order on AI, which directs HHS to establish a safety program for AI in healthcare.
Key Questions Raised
The lawmakers’ letter to HHS includes several pointed questions, reflecting their concerns about the potential impact of assurance labs on the healthcare industry. Among the issues they seek clarity on are:
- The specific objectives of the ASTP’s reorganization and its implications for healthcare AI.
- The potential for conflicts of interest in fee-based assurance labs.
- The statutory authority of the ASTP/ONC in creating these labs.
- The role of major tech companies like Google and Microsoft in shaping AI regulation.
These questions highlight the complexity of balancing innovation with regulation in a rapidly evolving field like AI.
Looking Ahead
As the debate over AI assurance labs continues, the stakes are high for the future of healthcare innovation. While proponents argue that assurance labs are essential for ensuring the safety and effectiveness of AI tools, critics warn that they could lead to regulatory capture and stifle competition.
The ongoing dialogue underscores the need for a careful and balanced approach to AI regulation, one that fosters innovation while protecting patients and ensuring transparency. With responses from HHS expected by December 20, the coming weeks could provide crucial insights into the future direction of AI in healthcare.
As the four Republican lawmakers stated in their letter, “The ongoing dialogue around AI in healthcare must consider the distinct authorities and duties of various agencies and offices to prevent overlapping responsibilities, which can lead to confusion among regulated entities.”
The outcome of this debate will not only shape the regulatory landscape for AI in healthcare but also set a precedent for how emerging technologies are governed in the years to come.
Originally Written by: Andrea Fox