By Neville M. Bilimoria
Neville M. Bilimoria is a partner in the health law practice group in the Chicago office of Duane Morris.
Here comes the logical evolution of Big Data in health care: artificial intelligence. For years, this has been the holy grail of Big Data health-care technology: Gather as much data as you can, more than you could ever imagine, about a patient and his or her treatment, and you will have the ability to analyze that data and deliver better outcomes for the patient, and better health care overall, at lower cost.
As the push toward wellness becomes more evident in U.S. health care, the natural progression has been to use technology and Big Data to improve it. Now, however, we must take the inevitable next step with Big Data: artificial intelligence.
Yes, AI is no longer something you see only in the movies. It is a real-world application, especially in health care, where it is a driving force behind private equity health transactions and the proliferation of health-care private equity dollars.
And what about those movies? "Terminator," "I, Robot," "Ex Machina" and countless others depict a futuristic society in which artificial intelligence goes awry and computers take over human civilization. Far-fetched? Stephen Hawking, Elon Musk and Bill Gates don't think so. Hawking stated that "Success in creating [artificial intelligence] would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Musk calls AI "our greatest existential threat," and Gates warned that the present beneficial effects of AI could be superseded decades from now, when AI will threaten human jobs and pose dangerous threats to human civilization.
But we are a society caught in the now, and I think there is some misunderstanding about what AI truly is. True AI is something more than a computer reading thousands of X-rays or scans and sorting the anomalies it finds into one pile or another: cancer, yes or no, for example. That is more akin to what Gill Eapen, a predictive analytics expert from Stout, calls "statistical categorization."
That is nothing more than data in, data out. True AI is something more than statistical categorization. It is affirmative machine learning that allows the machine to develop its own algorithms based on data, improve those algorithms on its own and deliver output in the form of real-time diagnosis and treatment.
Take, for example, the world's most famous AI machine: Watson, the IBM computer that won the game show "Jeopardy!" five years ago, beating two "Jeopardy!" champions. Not impressed, since Watson has the entire Library of Congress at its disposal and can consume 1 million books a second? Well, what about Watson attending medical school?
On a recent "60 Minutes" segment on artificial intelligence that aired June 25, CBS reported that Watson attended medical school at the University of North Carolina, where it learned to analyze scans and images of cancer patients to detect anomalies and cancer. But Watson was also learning. Watson participated in cancer tumor boards with physicians. In a remarkable 30 percent of patients, Watson offered a better diagnosis and treatment than the tumor board of physicians, mainly due to Watson's uncanny ability to recognize additional published literature and studies that the physicians simply could not encompass in their analysis. Watson even surprised the most skeptical physicians at the medical school.
So how can we control AI and bring it to the marketplace in health care? First, AI has to gain the support of skeptical physicians. That skepticism will be lifted only through physicians' real-world use of and experience with AI on patients, and through physician trust in the machine-learning algorithms at the heart of AI. According to a recent article in Modern Healthcare, "Artificial Intelligence Takes on Medical Imaging," in the July 10 issue, radiologists are already using AI to assist in clinical decision-making and are delivering superior results. And hospitals are enjoying the consistency of AI technology and its improved reliability in medical imaging.
Second, the health-care marketplace must decide how to regulate AI. For example, do AI machines or robots have to obtain a medical license in each state to “practice medicine” and treat patients? That might be a far-fetched idea for consideration down the road, but for now, the Food & Drug Administration is focused on smart apps and wearables that do more than just spit out data.
The FDA is concerned about AI devices that diagnose cancer or predict heart attacks, for example. In fact, it was reported in May that the FDA has assembled a team, headed by Associate Center Director for Digital Health Bakul Patel, to oversee the current AI revolution in health care. But the FDA could prove a cumbersome regulator of AI: any software change to a medical device, such as an AI system, would have to be continuously reported to and approved by the agency. With AI changing its "software" almost instantly, second by second, it is not feasible that FDA regulations will be able to keep up with AI, at least under the agency's current rules.
Think what you will about AI and its future, but AI is here to stay. Health care will embrace AI one way or another. Why? Just look at AI's effect on patient outcomes: outcomes that are surpassing those of the best teams of doctors, outcomes that can save patients' lives and, yes, deliver better health care than humans can.