ECG’s radio show and podcast, Healthcare Upside/Down, offers unfiltered perspectives on what’s working in US healthcare and what’s not. Hosted by ECG principal Dr. Nick van Terheyden, each episode features guest panelists who explore the upsides and downsides of healthcare in the US—and how to make the system work for everyone.
Early computing relied on programming languages built around simple logic statements: if this happens, do this; otherwise, do that. But the techniques and capabilities have moved far beyond that, and we now have high-level tools that can ingest vast amounts of content and synthesize it into something approximating knowledge.
Do you remember Deep Blue, IBM’s supercomputer, beating Garry Kasparov at chess in 1997? That was considered a milestone at the time. Progress continued, and IBM scored again in 2011 with Watson, the Jeopardy-winning AI computer that defeated former champions Brad Rutter and Ken Jennings. Many people thought we were well on our way to artificial intelligence (AI) solving all of the world’s problems. Not so much.
The trajectory of computer chess is instructive for where this technology might take healthcare. When Deep Blue won, many predicted the demise of chess and of chess players. But the game, and interest in it, are very much alive, albeit in a more blended and creative form. Some tournaments prohibit any AI or computer support, while many others allow it, including team-based formats that pair humans with computers.
With the advent of ChatGPT, we once again find ourselves discussing the potential and pitfalls of AI. Will it help us work more efficiently? Will it torpedo academia with its ability to churn out convincing papers?
Dr. John Lee is an emergency physician and digitician, and he’s served as a clinical informaticist and chief information officer at a number of health organizations. On episode 68 of Healthcare Upside/Down, he helps us approach ChatGPT with the right expectations.
How to stop worrying and love AI.
“The analogy that may resonate with some people is that, just as fully automated cars are pretty far in the future, the technology that’s being put into that effort is going to make driving safer. Things like adaptive cruise control, or lane assist—those are the same technologies that are feeding into fully automatically driven cars. With ChatGPT, I don’t think we’re going to generate full [academic] papers. But in my case, I put in a fair amount of work on the front end, assembling and curating [academic] papers, and I kind of mashed them together. I put that into ChatGPT and instructed it to [create] a 500- to 1,000-word summary. And then in about 30 seconds, out came very usable output, which I still had to massage a bit on the back end. So again, not fully automated, but it can take a lot of the friction and the work out of the process.”
The need for a human touch.
“The key is guardrails. For instance, if you say, ‘go fetch the top 50 emergency medicine papers over the past year,’ I think [ChatGPT is] certainly capable of doing that. But you need to be a human to quality-control it. As a practicing emergency physician who keeps up on literature, I can say, ‘I’m familiar with these 25. I’m not sure about these other 25. I’m going to quickly peruse them, and oh look, three of them are completely fake and another five are poorly written.’ And that’s where I think it’s the combination of the human and [AI], using the machine to off-load a lot of that arduous work.”
AI + human.
“I’m not sure who coined this, but I think it’s really appropriate: it’s not AI versus human—it’s AI plus human. A physician isn’t going to be replaced by artificial intelligence. But a physician who uses artificial intelligence is going to replace a physician who doesn’t.”
On the podcast, Dr. Lee explains how AI and human expertise can work together to improve patient care.
Edited by: Matt Maslin
Published March 27, 2023