AI Miracles Won’t Transform Healthcare … But Another Miracle Will
June 12, 2023
Barely a day goes by without a headline touting how some new generative AI system will transform our lives by detecting cancer, expediting diagnosis, or helping doctors provide better care. But I’d bet my house that none of these claims will turn out to be true.
ChatGPT, Google Bard, Bing AI, and their ilk are large language models (LLMs): software designed to reproduce language by digesting huge libraries of text and then predicting which word is most likely to follow a given sequence of words. LLMs are not intelligent. In fact, experienced AI researchers, like Dan McQuillan, a computer scientist at the University of London, have described these models as “bullshit generators”. They can generate text that sounds plausibly like English, but they cannot tell truth from lies. Humans only know the “truth” by differentiating reliable sources from unreliable ones. LLMs are trained by scraping text off the internet. How could a system trained on data like that know what is true? (LLMs should be distinguished from other machine learning systems, which are often similarly referred to as “AI” but perform more specialized functions, like identifying associations between variables in very large data sets. Indeed, we find these “AI” techniques can be quite useful, particularly in the hypothesis-generating stage of analytic studies.)
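To make the point concrete, here is a deliberately tiny sketch of next-word prediction in Python. The three-sentence "corpus" and every name in it are invented for illustration; real LLMs use neural networks trained on trillions of words, not word-pair counts. But the core objective is the same: emit a statistically likely next word, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict
import random

# A made-up training "corpus": note it contains contradictory claims.
corpus = ("the drug was safe and effective . "
          "the drug was withdrawn for safety reasons . "
          "the trial was stopped early .").split()

# successors[w] counts every word observed immediately after w.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def generate(start, length=8, seed=0):
    """Emit up to `length` words by repeatedly sampling a likely successor."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:
            break  # no observed successor; stop generating
        choices, weights = zip(*options.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Because "the drug was safe" and "the drug was withdrawn" are equally well attested in this corpus, the model will fluently produce either one. Fluency, not truth, is what the statistics reward.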
“ChatGPT and [its] ilk are not intelligent. They can generate text that sounds plausibly like English, but they cannot tell truth from lies.”
It’s hard to ignore the chorus of voices like those at Andreessen Horowitz, the renowned venture capital firm, who suggest that AI will address the “high cost, waste, and poor clinical outcomes” of our current healthcare system. This kind of hyperbolic, evidence-free claim has a specific purpose: inflating the monetary value of the companies these same people have invested in. And it works like a charm: Nvidia just reached a valuation of $1 trillion by riding the AI hype, making a small group of venture capitalists obscenely rich while doing absolutely nothing for anyone else. The hype around AI rivals what we saw around crypto, which was also relentlessly hyped by Andreessen Horowitz but turned out to be one of the biggest theft machines in history ($12.2bn lost to schemes and scams in just over two years, according to a widely followed tech website).
On one level, this is just finance capitalism at work—people invest in a business, the business promises some amazing product, its valuation rises, and the original investors unload their positions and amass a pile of money in the process. But this cycle harms people in the here and now in some very specific ways.
First, it spreads the myth of magic solutions to real problems. Is healthcare too expensive? Let’s use AI to “give valuable time back to your doctor” and have a chatbot respond to patient emails.
JAMA Internal Medicine recently published a pilot test of an LLM answering patient messages for primary care physicians, stating that “ChatGPT scored better than real doctors” when responding to patient questions. How could a bullshit machine answering healthcare questions be anything but an unmitigated disaster? It can’t, but it doesn’t matter, because the investors (at least two of whom were authors of the study in question) will have taken their profits by then, leaving us with a worse version of healthcare and them a little (or a lot) richer.
Consider these chilling, and far more realistic, examples of what happens when you try to use AI to do the work of clinicians. In the first example, Dr. Jeremy Faust of Medpage Today asked OpenAI’s chatbot a series of questions about a medical case. Initially, it did an impressive job of suggesting possible diagnoses, but then it made up a key fact. Worse, the system “took a real journal…[a]nd it confabulated out of thin air a study that would apparently support” this made-up information. Yikes. In the second example, an AI chatbot on the website for the National Eating Disorders Association was supposed to help people at risk for eating disorders but instead started dispensing diet advice!
Another clear and present danger of using these LLMs in medicine is a very consequential loss of privacy. These systems need to be trained, and guess where the training data is coming from? A recently published paper reports that researchers managed to get “90 billion words of text from…clinical notes” on which to train their LLM without the permission of even a single patient. Picture a giant sausage grinder with all our most private information going in one end and massive profits for a few people coming out the other.
That’s pretty bad. But to me, the worst part is that when garbage headlines like “AI to replace doctors” are in every paper, something important gets drowned out: there really are miracles happening in healthcare, and they don’t involve bullshit machines.
“When garbage headlines like ‘AI to replace doctors’ are in every paper, something important gets drowned out: there really are miracles happening in healthcare, and they don’t involve bullshit machines.”
Cancer death rates have fallen more than 30% over the last 30 years—and continue to fall. That’s what I would call a miracle. Doctors at Memorial Sloan Kettering Cancer Center recently reported the early success of a pancreatic cancer vaccine—half of the treated patients were cancer-free 18 months later. Curing cancer with a vaccine! Gene therapy is on the verge of functionally curing some kinds of hemophilia. COVID vaccinations prevented 3 million deaths in the US alone. Those are miracles.
Life science companies spend every single day doing the exact opposite of what these AI systems do: instead of recycling old text into new material, they do actual research and find cures for diseases old and new. It’s maddening to hear nonstop nonsense from the tech industry hyping a future in which our privacy gets turned into their money, particularly since the pharmaceutical industry is filled with people spending every day trying to actually improve lives.
Dr. Michael Broder, a board-certified obstetrician and gynecologist, has 30 years of experience in health economics and outcomes research. He received his research training in the Robert Wood Johnson Clinical Scholars Program at UCLA and RAND, attended medical school at Case Western Reserve University, and received his undergraduate degree from Harvard University.
In 2004, Dr. Broder founded PHAR, a clinically focused health economics and outcomes research consultancy. PHAR is a team of dedicated, highly trained researchers, individuals who are singularly focused on delivering high-quality health economics and outcomes research insights to the life science industry. PHAR has successfully conducted hundreds of studies resulting in more than 800 publications across a wide variety of therapeutic areas and maintains an expansive network of collaborators, including 8 of the top 10 academic institutions in the US, as measured by NIH funding. Download our bibliography here.
Unencumbered by corporate bureaucracy, PHAR can efficiently execute contracts and complete projects on time and on budget. PHAR prides itself on being reliable and responsive to clients’ changing needs and welcoming the challenge of tackling problems others can’t.