ECG’s radio show and podcast, Healthcare Upside/Down, offers unfiltered perspectives on what’s working in US healthcare and what’s not. Hosted by ECG principal Dr. Nick van Terheyden, each episode features guest panelists who explore the upsides and downsides of healthcare in the US—and how to make the system work for everyone.
If you have even a passing familiarity with the 1984 film “The Terminator,” you know it stars Arnold Schwarzenegger as a time-traveling cyborg assassin—a nearly indestructible killing machine, disguised as a human, that can’t be reasoned with, can’t be bargained with, and doesn’t feel pity or remorse or fear.
But beneath the movie’s car chases, shootouts, and explosions is a haunting premise: at some point in its evolution, humankind creates an artificial intelligence system so advanced, so sophisticated, that it becomes self-aware. Perceiving humans as a threat, the system attacks its makers, setting off an apocalyptic war between man and machine (spoiler alert: humanity doesn’t fare well).
Back in 1984, a plotline like that was purely science fiction. But with our advances in AI, particularly in healthcare, it raises some very down-to-earth questions: How much autonomy should medical technology have? How does the provider’s role in care delivery change? Who will have access to cutting-edge care?
The notion of machines consciously wiping out humans remains a Hollywood trope (for now, anyway). But as we increasingly rely on technology to help us make healthcare decisions and perform complex procedures, we need to ensure that our machines meet clearly defined ethical principles that protect patients from harm.
Dr. Michael Abramoff is a professor of Ophthalmology and Visual Sciences at the University of Iowa Hospitals and Clinics, as well as the founder and executive chairman of Digital Diagnostics, the first company ever to receive FDA clearance for an autonomous AI diagnostic system. The platform detects diabetic retinopathy, which causes blindness in more than 60,000 people every year, without physician input at the point of care.
The FDA was initially cool to Dr. Abramoff’s idea when he presented it in 2010. AI’s potential at the time wasn’t widely understood, and one of Abramoff’s colleagues famously dubbed him “the Retinator” (an allusion to our aforementioned cyborg), suggesting his platform would terminate the need for ophthalmologists. On episode 32 of Healthcare Upside/Down, Dr. Abramoff explains how well-designed AI will continue to augment care, not replace the individuals who provide it. Here are a few excerpts.
On helping people understand the role of AI in healthcare.
“I was given the nickname ‘the Retinator,’ like the Terminator for the retina, in 2010, in a big editorial about my work. That was about concern for job loss and quality-of-care loss. And so I think we need to be very transparent about what we’re doing here, very clear about the scientific evidence. That’s part of it. The other part is that specialists—in this case, ophthalmologists and retinal specialists—want to operate at the top of their license. Having an AI that makes a diagnosis where the patients actually are—in primary care and retail clinics—and finding those patients who need access to the high-quality care that these ophthalmologists and specialists can give actually makes them do their work better. It is the top-of-license practice that everyone wants.”
Metrics for ethics.
“A lot of work has been done on the ethics of AI in healthcare—what I call ‘metrics for ethics.’ Because if you’re an engineer and have a computer engineering background, you want to be able to measure what you do. And just saying ‘be ethical’ is not very useful to an engineer, especially when you’re trying to make an autonomous AI for use in healthcare. How well am I meeting certain ethical principles? And there are many of them. There’s autonomy of the patient. There’s something called justice, which is about applying it equally well to all patients. And we get into liability and medical malpractice insurance—there are patients and patient organizations, physicians and other providers, payers, regulators, bioethicists. They all have a say in how we do healthcare, what is good and bad, and what we should introduce.”
Opportunities beyond ophthalmology.
“With the groundwork done, it’s much easier and more acceptable to go through the same process much faster. And indeed, we are seeing better outcomes, better access, and cost reduction all over the US where we have implemented it. It’s logical to think, ‘what is next?’ Well, we have a few. One is for skin cancer. Again, the front line of care is the primary care clinic. That’s where patients are, and that’s where they need this diagnosis. There are many others, including cardiovascular disease. There are even more things you can do with the retina. There’s enormous potential now to increase this access across a number of different diagnoses. It will initially be a very narrow specialty diagnosis, but it can expand to dozens of them even in the next two to three years, I expect.”
On the podcast, Dr. Abramoff talks more about ensuring AI adheres to ethical principles and the potential for cost reduction.
Published July 5, 2022