Not having policies in place to regulate artificial intelligence (AI) and machine learning (ML) could have dire consequences across every sector of the health care industry.
That was the point made by Brian Scarpelli and Sebastian Holst during their presentation titled “A modest proposal for AI regulation in healthcare,” held during the HIMSS22 Global Health Conference in Orlando. Scarpelli is the senior global policy counsel for the Connected Health Initiative and Holst is principal with Qi-fense, a consulting group that works in AI and ML.
“ML properties do more than challenge domain-specific applications of technology,” Scarpelli and Holst write. “Many of these properties will force an evaluation and retooling of core manufacturing, quality, and risk frameworks that have effectively served as the foundation of today’s industry-specific regulations and policies.”
Here are some key points from their presentation on the growth of AI/ML and the need for regulation.
AI can potentially revolutionize health care in all facets. It can reduce administrative burdens for providers and payers and allow a health system to redirect resources toward serving vulnerable patient populations. It can help manage public health emergencies such as the COVID-19 pandemic and improve both preventive care and diagnostic efficiency.
According to Scarpelli and Holst, the growth of machine learning products has surged since 2015, beginning with processing applications, including products for processing radiological images, and progressing into diagnosis applications, particularly in radiology, where they assist with triage and prioritization.
The number of patents coded to machine learning and health informatics has exploded, from 165 in 2017 to more than 1,100 in 2021.
While AI is promising, there are potential legal and ethical challenges that must be addressed. For example, one of the major themes of the HIMSS22 conference has been the challenge of achieving health equity and eliminating implicit bias. That’s one of the major challenges of AI as well since AI solutions can be biased. Many sessions focused on how diverse teams are needed when creating AI solutions to ensure that the programs don’t carry the same biases as society, which could exacerbate current social problems, according to Tania M. Martin-Mercado, MS, MPH, a clinical researcher who presented on “How implicit bias affects AI in healthcare.”
During her presentation, she pointed to an example of “an online tool that estimates risk of breast cancer calculates a lower risk for Black or Latinx women than White even when every other risk factor is identical.”
A diverse group of agencies, including the FDA, HHS, CMS, FTC, and the World Health Organization, are developing regulations and asking for guidance from various stakeholders, including AI developers, physicians and other providers, patients, medical societies, and academic institutions.
Scarpelli says that the vision for successful AI follows four principles. It should:
This article originally appeared on the website MedicalEconomics.com