Migraines are the bane of my life, but I’ve gotten relief from Botox injections to trigger points. The injections worked fantastically for a year, until my insurance company suddenly denied the prior authorization (PA) request. It took six months to get the treatment authorized again, during which time my headaches increased in both frequency and severity. The coverage policy didn’t change, so the only explanation for the denial is that I stopped meeting the criteria used for coverage. That got me wondering what those criteria were. Every time I go to the specialist, I fill out a questionnaire about headache frequency. It hasn’t escaped my notice that as my headaches became less frequent and less severe, Botox stopped being approved.

If I were creating an algorithm to determine which patients need Botox, headache frequency and severity would certainly be among the criteria. But it would be tragic if benefiting from a treatment meant that a patient could no longer get approved for it. I don’t know if that’s what happened, but looking back on things, I bet I’m right. It’s perfectly reasonable to create logical criteria for deciding whether expensive interventions are warranted. However, it’s also easy to see how these processes could hurt patients if no human being is involved to ensure that they are clinically appropriate.
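To make the worry concrete, here is a minimal sketch, in Python, of the kind of frequency-threshold rule I am imagining. The 15-day threshold mirrors the usual chronic-migraine definition (15 or more headache days per month), but the threshold, the function, and the logic are all my own guesses; I have no idea what criteria the payer actually encodes.

```python
# Hypothetical prior-authorization rule, assumed for illustration only.
# The 15-day threshold echoes the common chronic-migraine definition;
# the payer's real criteria are unknown.

CHRONIC_MIGRAINE_THRESHOLD = 15  # headache days per month (assumed)

def approve_botox(headache_days_per_month: int) -> bool:
    """Approve coverage only while the patient still meets the
    frequency criterion -- the naive rule described above."""
    return headache_days_per_month >= CHRONIC_MIGRAINE_THRESHOLD

# The perverse loop: 20 headache days gets approved; after treatment
# works and frequency drops to 10 days, the same rule denies coverage.
print(approve_botox(20))  # True:  approved before treatment
print(approve_botox(10))  # False: denied after the treatment works
```

A clinically sensible version would have to distinguish a patient whose migraines have resolved from one whose frequency dropped because the treatment is working, which is exactly the judgment a human reviewer is supposed to supply.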
That’s why I noticed a recent release from the American Medical Association (AMA) about how payer use of “automated decision-making” may be increasing denials of care. In the AMA’s 2024 physician survey on the impact of PA on physicians and patients, particular concern was raised about the use of “automated decision-making tools” to process claims. This application of Artificial Intelligence (AI) may be causing big problems, and it is one of the many topics addressed in the AMA’s position statement on “Augmented Intelligence” (confusingly abbreviated as “IA”). The two terms are sometimes used interchangeably, but there is, or at least is supposed to be, a difference. AI aims to replace human cognitive functions with a machine, so that human interaction is minimal. Augmented Intelligence, on the other hand, is intended to enhance human functions and increase human productivity. IA can streamline workflows, automate routine tasks, and reduce manual errors, all of which are meant to support human cognitive functions rather than replace them. That description may or may not match how payers are actually using “automated decision-making” for prior authorization.
Here’s a quote from the AMA’s article, “Augmented Intelligence Development, Deployment, and Use in Health Care”:
“There have been numerous reports recently regarding the use of what has been termed “automated decision-making tools” by payors to process claims. However, numerous reports regarding the use of these tools show a growing tendency toward inappropriate denials of care or other limitations on coverage. Reporting by ProPublica claims that tools used by Cigna denied 300,000 claims in two months, with claims receiving an average of 1.2 seconds of review. Two class action lawsuits were filed during 2023, charging both United Health Care and Humana with inappropriate claims denials resulting from use of the nHPredict AI model, a product of United Health Care subsidiary NaviHealth. Plaintiffs in those suits claim the AI model wrongfully denied care to elderly and disabled patients enrolled in Medicare Advantage (MA) plans with both companies. Plaintiffs also claim that payors used the model despite knowing that 90 percent of the tool’s denials were faulty.”
There is growing concern among patients and physicians about an apparent increase in the frequency of inappropriate denials of care resulting from automated decision-making tools. There are currently no statutory and only limited regulatory requirements addressing the use of AI and other automated decision-making tools by payors.
As a patient, I am tempted to change my answers on the headache questionnaire and not admit that I am better after treatment. But what if there’s an algorithm that will deny the treatment next time because it didn’t appear to work? I suspect that no matter what I do, there’s an algorithm that ends in “denial.”
–Caroline
Resources:
Here’s an article about the difference between AI and IA: Distinguishing Augmented Intelligence Vs Artificial Intelligence: Understanding The Key Differences | AI Expert Insight

Dr. Fife is a world-renowned wound care physician dedicated to improving patient outcomes through quality-driven care. Please visit my blog at CarolineFifeMD.com and my YouTube channel at https://www.youtube.com/c/carolinefifemd/videos
The opinions, comments, and content expressed or implied in my statements are solely my own and do not necessarily reflect the position or views of Intellicure or any of the boards on which I serve.