Once thought of as belonging to the distant future, the artificial intelligence (AI) era has already arrived in healthcare, and it will have a transformative impact across the spectrum of medical practice.
AI’s role is not one process but a collection of them, according to a 2019 article in Future Healthcare Journal. It involves machine learning, in which computers become better at detecting patterns and connections as more data is fed in; natural language processing, in which computers learn to read and analyze unstructured clinical notes and patient reports; robotic process automation, such as chatbots; diagnostic tools such as IBM’s Watson; and still other processes that help with patient adherence and administrative tasks.
“AI is impacting health care at every level, from the provider to the payer to pharma,” says Dan Riskin, M.D., CEO and founder of Verantos, a health care data company in Palo Alto, California, that uses AI to sort through real-world evidence.
“AI is utilized in a multitude of ways depending on the health care ecosystem,” says Athena Robinson, Ph.D., chief clinical officer at Woebot Labs, a digital therapeutics company in San Francisco. “Some folks think of augmented systems, such as transactional bots that you call to schedule an appointment. And sometimes people are just talking about tech leverage solutions, like more seamless integration into an electronic health record (EHR) or prompting a patient for some measurement-based care. There’s a wide variety of ways people think about AI-implemented evolutions of medicine practice.”
For providers, AI can help with tasks ranging from clinical decision support to disease management, Riskin says. However, it might be especially useful to physicians for understanding population health. “If you want to identify a group where you are not meeting the standard of care or need to do better, and you find you’re having trouble identifying [these patients] with common software, you might do better with more innovative, AI-based software,” Riskin says.
Or from a patient-facing approach, AI can assist in disease management for patients with adherence problems. “If you find your patient is struggling with typical approaches … maybe they will do better with an AI-based tool that partners with them,” Riskin says.
Robinson agrees that AI will be especially helpful in extending the clinical relationship outside the office. “The benefit of an AI through an app on your smartphone is that it allows for real-time practice to meet the patient where they are,” she says.
For example, patients who can track their sleep patterns or glucose numbers or other health data at home can then share that information with their doctors in real time. “That facilitates both members of the team, both the patient who is not taxed with that kind of memory recall on the spot and the provider who can use the session time a bit more efficiently,” Robinson says.
AI will never replace physicians or other providers, but it does have undeniable strengths with which the human brain simply can’t compete. “The main strength of AI in general, not only pertaining to medicine, is its ability to digest large amounts of data to detect patterns and connections between the data points that a human wouldn’t necessarily be very good at doing,” says Theodore Zanos, Ph.D., head of the Neural and Data Science Lab and an assistant professor at the Feinstein Institutes for Medical Research at Northwell Health in Manhasset, New York.
AI can also make connections much more quickly than a human doctor, reading and interpreting hundreds of thousands of pages of medical records — and it’s only going to get better at it. This function has been a boon during the COVID-19 pandemic when tracking and monitoring patients’ health in real time have become crucial.
“These diagnostic models and prognostic models are trying to predict what’s going to happen to the patient based on their clinical profile now, but also [they are] comparing them with millions of other patients [who] might have had similar characteristics,” Zanos says.
For those who are concerned that patients might not trust AI, Riskin says it’s not an either/or situation. “Doctors still have to figure out what is believable and make the choice for all their patients. I wouldn’t trust something more or less if a human told me or an AI told me; I would judge it based on the quality of what it’s telling me.”
At the moment there is no unifying model that can predict all diseases and conditions. But “there are a lot of specific (AI-based) models for specific diseases and conditions and time horizons and use cases,” Zanos says.
However, AI’s computing power lends a significant hand to personalized medicine, now and into the future. “The FDA just came out with draft guidance saying that traditional methods get you an accuracy of 40% to 50%, which is insufficient, and AI gets you accuracy of 80% to 85%, which is sufficient,” Riskin says.
Unfortunately, there are no ratings for these models yet, so doctors have to apply trial and error to see which ones work best, and they will need to practice using them.
“If you have very good tools to identify and diagnose at an individual level and then use AI to find the optimal treatment for the diagnosis, that’s kind of the holy grail of how we would use AI,” Zanos says.
Another factor to consider when adopting AI-based interfaces, programs and apps, Riskin says, is that younger patients have come to expect more contactless health care experiences. “They are more comfortable with an app or telemedicine, or if they are managing a condition, sensors and an app that on a daily basis will tell them how they are doing.”
Of course, no practice is going to simply shift everything to AI-based programs overnight, Zanos says. “The stakes are just a lot higher in health care. In other industries, it’s fine if you suggest the wrong product to a user or the wrong movie on Netflix — it’s not going to break your company. But you can’t just release a tool in the wild in health care. It needs to go through careful validation, approval and regulatory approval from the FDA, so the cycle is longer and it slows down the progress a bit in the field.”
Once physicians have an idea of how they’d like to use AI in their practice, there will be some tough decisions to make, Riskin says. “The first decision for the doctor is going to be: Do I want to stay with the tried-and-true, slower, large vendor or do I want to take a risk and go with a startup that’s more innovative?” he says.
He suggests sticking with legacy vendors for things such as EHRs and considering more innovative AI-based startups for things such as revenue cycle management or less patient-centric tasks.
Zanos reminds physicians that AI is an emerging field, but it is one that’s here to stay. “There are going to be growing pains. If doctors are really serious about these technologies, it’s important to educate themselves more than just, ‘Oh, there’s an algorithm on my computer that’s going to tell me everything I need to know.’ There’s value in understanding how these technologies are created,” he says. “Physicians need as much exposure and practice as they can get.”
This article originally appeared on MedicalEconomics.com.