Take a look around any big hospital and you’ll find plenty of imposing technology: surgical robots, artificial organs, wireless brain sensors, three-dimensional imaging visualizations.
But you have to look harder to find what’s been touted for years as the future of medicine: artificial intelligence, or the use of computers to reason, learn and make critical decisions in patient care, with little or no human involvement.
The medical field’s lofty dreams of unleashing artificial intelligence to transform patient care have yet to materialize in a major way. The thought of replacing doctors with machines remains a science-fiction fantasy.
Even so, health-information experts say artificial intelligence has its place and can perform valuable tasks, from helping doctors identify diseases earlier to matching call-center customers at an insurance company with the person most qualified to help.
Across central Indiana, health systems and research institutes are devoting more time to artificial intelligence, with the goal of coming up with additional tools to help doctors treat patients.
Artificial intelligence is not a single technology. The umbrella term includes deep learning, machine learning, natural language processing and other methods of performing so-called smart tasks.
The challenge: to use those tools to sort through and analyze the mountains of medical data that have been piling up as a result of the explosion of digital health information in recent decades. Massive amounts of data—from electronic health records, public-health data and personal technology—could contain clues for prevention and treatment of disease, if only the right software can make sense of it all.
“I believe there are discoveries hidden in these piles of data that are going to change the world,” said Dr. Shaun Grannis, director of the Center for Biomedical Informatics at the Regenstrief Institute, a medical-research organization based in Indianapolis. “Machine-learning methods are the wrenches and the screwdrivers and the magnifying lenses to go into the data and find those discoveries in ways that no human could do on their own.”
Health systems, however, are moving relatively slowly to implement artificial intelligence. Gartner Inc., a Connecticut-based global research and advisory company, surveyed 50 chief information officers at U.S. health care organizations last year. Only 11 percent of respondents had implemented some type of artificial intelligence capability, such as diagnostic imaging or algorithmic medicine. But 50 percent said they had plans to implement some type of artificial intelligence within the next two years.
Yet artificial intelligence can be a powerful tool in global health, according to a report issued in April by the U.S. Agency for International Development and the Rockefeller Foundation. Computers can mine millions of health records in minutes to spot health dangers or suggest treatments, the report said.
“From enabling community-health workers to better serve patients in remote areas to helping governments in low- and middle-income countries prevent deadly disease outbreaks before they occur, there is growing recognition of the tremendous potential of AI tools,” the report said.
One way to think about artificial intelligence is to see how consumer companies such as Amazon and Netflix use it.
When you order a book or a movie online, Amazon or Netflix can instantly look at your order history, suggest other items to buy, or alert you to specials. Neither company uses humans to review those suggestions before they pop up on your screen.
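The basic mechanics of that kind of recommendation engine are simple enough to sketch. The snippet below is an invented illustration, not either company’s actual system: it counts which items show up together in past orders and suggests the most frequent companion.

```python
# Illustrative sketch of item-to-item recommendation (invented data,
# not Amazon's or Netflix's actual system): items that often appear
# in the same order get suggested to customers who bought one of them.
from collections import Counter

# Hypothetical order histories, one set of items per customer.
orders = [
    {"medical-thriller", "biography"},
    {"medical-thriller", "sci-fi"},
    {"medical-thriller", "biography"},
    {"cookbook", "biography"},
]

def recommend(item, orders, top_n=1):
    """Suggest the items that most often co-occur with `item`."""
    co_counts = Counter()
    for order in orders:
        if item in order:
            co_counts.update(order - {item})
    return [other for other, _ in co_counts.most_common(top_n)]

print(recommend("medical-thriller", orders))  # ['biography']
```

No human reviews the suggestion before it appears; the counting itself is the "decision."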
In health care, however, the stakes are much higher—sometimes a question of life or death. While a machine can review your medical history, few, if any, doctors would be comfortable stepping out of the process and letting a computer algorithm diagnose your symptoms.
“There’s still a resistance among health care across the nation to not let the computer make the decisions for the providers,” said Charles Randolph, a nurse and director of clinical informatics at Franciscan Health Indianapolis.
For the time being, artificial intelligence in health care has been mostly limited to lower-level tasks, such as automating prescription refills, coding medical procedures and sorting through employee-benefits programs. It can do those jobs quickly, taking the strain off doctors who feel overwhelmed by administrative work.
It can also do a quick read through a pile of patient records and identify patterns, say in a patient’s blood cell count, that will give a doctor more information to develop a treatment plan.
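A scan like that can be sketched in a few lines of Python. The thresholds, record layout and patient identifiers below are invented for illustration; this is not clinical guidance.

```python
# Hedged sketch of a pattern scan over patient records: flag patients
# whose white-blood-cell counts have risen steadily across recent
# visits, so a clinician can take a closer look. All values invented.
records = {  # patient id -> WBC counts (thousands/uL), oldest first
    "patient-a": [6.1, 6.0, 6.3, 6.2],
    "patient-b": [7.0, 8.4, 10.1, 12.6],
}

def rising_trend(values, min_rise=1.0):
    """True if every reading rises by at least `min_rise` over the prior one."""
    return all(b - a >= min_rise for a, b in zip(values, values[1:]))

flagged = [pid for pid, wbc in records.items() if rising_trend(wbc)]
print(flagged)  # ['patient-b']
```

The output is a list of patients for a doctor to review, not a diagnosis.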
“Artificial intelligence can raise red flags or make suggestions or identify patterns that simplify data that you, as a doctor, could never do in your head,” said Dr. Chris Callahan, chief research and development officer at Eskenazi Health.
That would likely lead to a conversation between the doctor and patient. But it would not substitute for a conversation or for the doctor’s professional opinion, he said.
Artificial intelligence could also review a doctor’s treatment plan against huge amounts of scientific evidence to see whether it’s a suitable course for a specific patient condition.
But along with all the small, mundane tasks, there are signs that artificial intelligence can be used to provide big insights.
Last year, researchers in Indianapolis wanted to figure out whether Eskenazi Health’s practice of offering a wide array of “wraparound services”—such as nutrition counseling, social work, patient navigation and other non-medical offerings—could improve patient outcomes and reduce hospitalizations.
So they studied 11 years of data from Eskenazi Health’s electronic health record system, which contained information on visits, diagnoses and treatments. They also pulled information from the Indiana Network for Patient Care, the oldest and largest health information exchange in the nation, which allows for tracking of patients across different providers.
The study looked at 14,094 adults who received care at Eskenazi from 2006 to 2016, and who had at least one primary-care visit before and one visit after 2011. All patients in the study sample had at least one wraparound service.
What the researchers found was a 5 percent reduction in emergency department visits when Eskenazi provided a wraparound service. They also saw fewer hospitalizations, saving the system millions of dollars a year. The study results were published in the October issue of the journal Health Affairs.
Researchers at Eskenazi Health, the Regenstrief Institute and the Richard M. Fairbanks School of Public Health at IUPUI estimated that the benefits from those wraparound services saved up to $8.2 million from 2011 to 2016, based on median hospital costs. That estimate represents an average of $1.4 million to $2.4 million in savings a year.
So distilling knowledge from a sprawling set of information can influence a hospital’s operations. But where AI seems to be more fraught with medical and legal risks is in decisions directly affecting patient care.
A computer, for example, could tell a doctor a patient might be suffering from septic shock, a potentially fatal condition in which the body’s response to an infection spirals out of control and blood pressure drops dangerously.
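As an illustration of how such an alert might work, here is a sketch loosely based on the classic SIRS screening criteria, a decades-old bedside rule of thumb. Real sepsis-prediction models are far more sophisticated, and nothing here is clinical guidance.

```python
# Minimal rule-based sepsis screen, loosely modeled on the classic
# SIRS criteria (illustration only, not clinical guidance).
def sirs_flags(temp_c, heart_rate, resp_rate, wbc_k):
    """Count SIRS criteria met; two or more is a traditional warning sign."""
    flags = 0
    flags += temp_c > 38.0 or temp_c < 36.0   # abnormal temperature
    flags += heart_rate > 90                  # elevated heart rate
    flags += resp_rate > 20                   # elevated breathing rate
    flags += wbc_k > 12.0 or wbc_k < 4.0      # abnormal white count
    return flags

# A patient meeting three criteria would be flagged for review...
print(sirs_flags(38.6, 104, 24, 9.5) >= 2)  # True
# ...but a healthy runner just off the course might also meet two --
# exactly the false-positive problem that worries clinicians.
print(sirs_flags(37.2, 95, 22, 8.0) >= 2)   # True
```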
“What happens if the algorithm tells you the patient is septic, and they’re really not?” said Dr. Patrick McGill, chief analytics officer at Community Health Network. “You give them treatment and they have a harm. Who is responsible?”
Under the law, he said, doctors are responsible, meaning they are unlikely to accept the conclusions of a computer algorithm without scrutinizing them to make sure they are correct.
On the other hand, if a doctor ignores the computer’s advice, and it turns out the computer is right, the doctor might be liable for that as well, he said.
Gartner, the Connecticut advisory firm, predicted that the first medical malpractice case involving AI would be heard by 2020, either one in which an algorithm was used when it shouldn’t have been or one in which it was used incorrectly.
“There’s still all these ethical concerns,” McGill said, “and I think that’s why it’s been very slow to be adopted.”
Where artificial intelligence use is less risky is in a health system’s back office and support systems, he added. There, the machines can help a medical practice figure out which patients are likely to skip a treatment because of family obligations, transportation challenges or other barriers, so a nurse or assistant can reach out in advance and help patients make arrangements.
The machines can also help doctors code procedures more accurately or identify bills that are not likely to be collected.
But even for the more ambitious players, the AI path can be filled with expensive setbacks. In 2013, MD Anderson, the cancer center based at the University of Texas, teamed up with IBM with the goal of using the Watson supercomputer “to eradicate cancer,” according to its rollout announcement.
But just three years later, after spending more than $60 million, MD Anderson was forced to scrap the project, known as the Oncology Expert Advisor, after it failed to meet any of its goals.
The supercomputer often spit out erroneous treatment advice. Company medical specialists and customers identified “multiple examples of unsafe and incorrect treatment recommendations” as IBM was promoting the product to hospitals and physicians around the world, according to separate reports in The Wall Street Journal and the online medical news site Stat.
It was an embarrassing end to a high-profile collaboration. Some health information experts in Indianapolis say the project illustrates the challenge of managing huge amounts of data—and verifying that the data is high-quality and reliable.
“The problem with it was that—when you fed a very, very ambiguous data set into Watson, despite the prowess of all of the research scientists and all the clinicians who contributed to the Watson database—you ended up with a garbled mess,” said Dr. Seung Park, chief health information officer at Indiana University Health.
He added that the quality of data is vital and remains one of the biggest challenges.
“Artificial intelligence can be thought of as not that dissimilar to toddlers who learn incredibly fast,” Park said. “However, unlike toddlers, they learn exactly what you feed into them. So if you feed in data that is erroneous, or ambiguous, even the best artificial intelligence techniques that exist today aren’t generally able to reason their way past the ambiguities that the human mind generally does, as a matter of course.”
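Park’s point is easy to demonstrate. The toy “learner” below simply memorizes the most common diagnosis recorded for each symptom, so a handful of erroneous records is enough to flip its answer. All data here is invented.

```python
# Garbage in, garbage out: a trivial learner that memorizes the most
# frequent label for each input learns whatever the training records
# say -- including the errors. All records invented for illustration.
from collections import Counter, defaultdict

def train(records):
    """Map each symptom to its most frequent diagnosis in the records."""
    by_symptom = defaultdict(Counter)
    for symptom, diagnosis in records:
        by_symptom[symptom][diagnosis] += 1
    return {s: c.most_common(1)[0][0] for s, c in by_symptom.items()}

clean = [("chest pain", "cardiac"), ("chest pain", "cardiac"),
         ("chest pain", "cardiac")]
noisy = clean + [("chest pain", "indigestion")] * 4  # erroneous entries

print(train(clean)["chest pain"])  # cardiac
print(train(noisy)["chest pain"])  # indigestion
```

The model never "reasons past" the bad records; it faithfully reproduces them.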