Tuck professor Raghav Singal’s new model shows what could happen if health insurers approved care on time—and how those decisions change patient outcomes.
As you may know, you sometimes need prior authorization from your health insurer before getting a screening or treatment. It’s a common practice for health insurers to delay or deny that prior authorization, in order to protect their bottom line. But in some cases, these tactics can leave patients more vulnerable to disease and death.
In a new paper, Raghav Singal creates a model that answers an important counterfactual question for people harmed by delay-and-deny: what if the procedure had been approved in a timely fashion? Calibrated with real health care data, Raghav's model can bound the probability that a timely screening or procedure would have improved a patient's health.
Research paper discussed: Bounding Counterfactual Outcomes of Health Insurance Delay-and-Deny Practices
[This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of the Tuck Knowledge in Practice Podcast is the audio record.]
Surgeon: It’s 2025, and insurance just keeps getting worse. I just did two bilateral cases in one day, and during the second one, I got a phone call in the operating room saying that UnitedHealthcare wanted me to call them about one of the patients who was having surgery—who was actually asleep, having surgery. So I scrubbed out of my case and called UnitedHealthcare. The gentleman said he needed some information about her—he wanted to know her diagnosis and whether her inpatient stay was justified. I said, “Do you understand that she’s asleep right now, and she has breast cancer?”
The gentleman said, “Actually, I don’t—that’s a different department that would know that information.”
It’s out of control.
[Podcast introduction and music]
Kirk Kardashian: Hey, this is Kirk Kardashian, and you’re listening to Knowledge in Practice, a podcast from the Tuck School of Business at Dartmouth. In this podcast, we talk with Tuck professors about their research and teaching—and the stories behind their curiosity.
Today on the podcast, we’re speaking with Tuck operations professor Raghav Singal about new research he’s done on health insurance delay and deny practices.
As you may know, you sometimes need prior authorization from your health insurer before getting a screening or treatment. It’s common for insurers to delay or deny that authorization to protect their bottom line. But in some cases, those tactics can leave patients more vulnerable to disease and death.
In a new paper, Raghav creates a model that answers an important counterfactual question for people harmed by delay and deny practices: What if the procedure had been approved in a timely fashion? Using specific healthcare data, his model can estimate how likely it is that a timely screening or procedure could have improved a patient’s health.
Raghav Singal is an associate professor of business administration in Tuck’s Operations and Management Science area. He completed his PhD in Operations Research at Columbia University in 2020, then spent a year as a data scientist at Amazon before joining Tuck in 2021.
Raghav’s research develops and analyzes models that help organizations evaluate complex systems and make data-informed decisions. His work has been published in peer-reviewed journals including Management Science, Operations Research, and Manufacturing and Service Operations Management.
Kirk: So we’re here today to talk about your new paper, titled Bounding Counterfactual Outcomes of Health Insurance Delay-and-Deny Practices. The general area we’re talking about is prior authorizations—what happens when insurance companies delay or deny them, and how those decisions can affect people’s lives.
Before we dive into your paper, tell us what “delay and deny” means, and why you decided to study it.
Raghav Singal: Sure. It’s been in the news a lot recently. At a high level, “delay and deny” refers to the practice where insurance companies delay or deny coverage for medical procedures—usually through prior authorization for treatments or screenings.
These policies are intended to control costs and limit unnecessary treatments, which makes sense operationally. But they can have unintended consequences that are sometimes fatal. Delays in approval can lead to missed diagnoses, disease progression, or even preventable deaths.
For example, data show that Medicare Advantage plans denied 7.4% of prior authorization requests in 2022, and about 13% of those denials should have been approved under Medicare guidelines.
Beyond the statistics, the human cost is enormous. Doctors frequently report serious medical complications resulting from these denials. Some cases have even led to high-profile lawsuits where families argue that these practices caused irreparable harm.
This intersection of patient outcomes, questionable policies, and legal implications was a key motivation for us to study this issue. We wanted a rigorous, data-driven way to evaluate these policies and quantify their impact on patient outcomes—for example, what’s the probability that a patient would not have died had the screening gone through?
Kirk: Okay, yeah. So that’s the counterfactual question, right? A counterfactual asks what would have happened if a different decision had been made.
Raghav: Exactly.
Kirk: So you’re an operations professor—how does your approach to science and the scientific method handle a question like that?
Raghav: Right. A big part of operations research is stochastic models and optimization. Those two components form the foundation of this work.
Let me break that down. The first part involves Hidden Markov Models, which are a special class of stochastic models. The second part is causal inference, which leans heavily on optimization.
So, why Hidden Markov Models? Think of a patient—say, a cancer patient. The stage of cancer is always evolving; it’s dynamic. Every six months, you might do a test and get a noisy signal—it could be a false positive or false negative. You never fully know the true state of the disease. That’s why we call it “hidden.” The “Markov” part captures how the disease evolves probabilistically over time.
We use these models to represent disease progression. For example, a patient might go from Stage 1 to Stage 2 cancer—it evolves. These models are actually widely used. For instance, the National Institutes of Health and the University of Wisconsin have a breast cancer simulator that uses this type of model for policy decisions.
So we thought: people already use these models to understand how diseases evolve at a population level—could we use them to understand counterfactual outcomes? That’s the first component.
Kirk: And “stochastic” means something that changes probabilistically, right?
Raghav: Exactly. It’s about probabilities. For instance, a healthy patient might have a 1% chance of developing Stage 1 breast cancer within six months—that’s stochastic. If they’re under treatment, there’s some probability they’ll recover in that time. These probabilities are built into the model.
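[Editor’s note: the dynamics Raghav describes can be sketched as a tiny hidden Markov model. Everything below is illustrative—the states, the six-month transition probabilities, and the test error rates are invented for this example, not taken from the paper or from any clinical data.]

```python
import random

# Hypothetical six-month transition probabilities for a hidden disease state.
# Illustrative numbers only, not calibrated to any real data.
TRANSITIONS = {
    "healthy": {"healthy": 0.99, "stage1": 0.01},
    "stage1":  {"stage1": 0.80, "stage2": 0.20},
    "stage2":  {"stage2": 1.00},
}

# A periodic screening is a noisy signal of the hidden state:
# it can miss disease (false negative) or flag a healthy patient (false positive).
SIGNAL = {
    "healthy": {"negative": 0.95, "positive": 0.05},
    "stage1":  {"negative": 0.20, "positive": 0.80},
    "stage2":  {"negative": 0.05, "positive": 0.95},
}

def sample(dist, rng):
    """Draw one outcome from a {outcome: probability} dict."""
    r, acc = rng.random(), 0.0
    for outcome, p in dist.items():
        acc += p
        if r < acc:
            return outcome
    return outcome  # guard against floating-point rounding

def simulate(periods, rng):
    """Simulate the hidden state and the observed test signal each period."""
    state, history = "healthy", []
    for _ in range(periods):
        history.append((state, sample(SIGNAL[state], rng)))
        state = sample(TRANSITIONS[state], rng)
    return history

rng = random.Random(0)
for state, signal in simulate(6, rng):
    print(state, signal)
```

The "hidden" part is that a real observer would only see the right-hand column of signals, never the true state on the left.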
Now, that brings me to the second component—causal inference. A probabilistic model alone can describe how a disease evolves, but it can’t answer counterfactual questions.
For example, suppose a patient was denied treatment and later died. You want to go back in time and ask: had the screening not been denied, would they have survived? To answer that, you need to combine stochastic modeling with structural causal models.
One interesting thing we found is that you can’t always identify the exact probability a patient would have survived in a counterfactual world. But you can find bounds—say, between 10% and 30%. Those bounds come from optimization.
Sometimes the range is wide, meaning we don’t have enough information to make a precise statement. But that’s not a weakness of the framework—it’s telling us the limits of what we can infer from the available data.
However, when we start injecting domain-specific knowledge—say, known patterns from medicine—the bounds get much tighter. For example, if a patient’s cancer didn’t worsen without treatment, it’s reasonable to assume that with treatment, it would have been stable or improved. That kind of expert knowledge narrows the bounds significantly.
In some cases, we saw ranges shrink from 0–100% to something like 0–20%. That’s much more informative.
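[Editor’s note: a stripped-down version of the bounding idea can be worked through with two binary variables—X = 1 if the screening was approved on time, Y = 1 if the patient survived. The numbers below (85% survival with a timely screening, 30% without) are hypothetical, the brute-force sweep stands in for the paper’s optimization, and the calculation assumes the delay was effectively random with respect to the patient’s latent prognosis; the paper works with a much richer dynamic model.]

```python
# Hypothetical survival probabilities. Illustrative only, not from the paper.
p_survive_timely = 0.85   # P(Y=1 | screening on time)
p_survive_delayed = 0.30  # P(Y=1 | screening delayed)

def counterfactual_bounds(a, b, monotone=False, steps=100_000):
    """Bound P(patient would have survived with a timely screening,
    given they were delayed and died), by sweeping over every joint
    distribution of potential outcomes consistent with the margins.

    Latent response types (Y_delayed, Y_timely):
      p1 = P(1, 1)  survives either way
      p2 = P(0, 1)  saved by a timely screening
      p3 = P(1, 0)  harmed by a timely screening
      p4 = P(0, 0)  dies either way
    The data pin down p1 + p2 = a and p1 + p3 = b, leaving one free
    parameter (p1); the query p2 / (p2 + p4) varies over it.
    """
    lo, hi = 1.0, 0.0
    p1_min, p1_max = max(0.0, a + b - 1.0), min(a, b)
    for i in range(steps + 1):
        p1 = p1_min + (i / steps) * (p1_max - p1_min)
        p2, p3 = a - p1, b - p1
        p4 = 1.0 - p1 - p2 - p3
        if monotone and p3 > 1e-9:  # domain knowledge: screening never hurts
            continue
        q = p2 / (p2 + p4)          # P(would have survived | delayed and died)
        lo, hi = min(lo, q), max(hi, q)
    return lo, hi

print(counterfactual_bounds(p_survive_timely, p_survive_delayed))
print(counterfactual_bounds(p_survive_timely, p_survive_delayed, monotone=True))
```

Without any extra assumption the data only bound the query to an interval; adding the monotonicity assumption ("a timely screening never hurts") collapses it to a single point—a toy version of the tightening Raghav describes.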
Kirk: So you use that domain-specific knowledge from medical research—like data from Medicare or cancer studies—and plug it into your framework?
Raghav: Exactly. There are two parts: the domain-specific knowledge and the actual data.
In our paper, we applied the framework to a breast cancer case study, which is especially relevant because early screening dramatically improves survival rates. We modeled the progression of breast cancer using a Hidden Markov Model, where the hidden states represent conditions such as undiagnosed or diagnosed cancer, and the signals are periodic tests.
We looked at two hypothetical patients whose screenings were delayed—say, due to incorrect insurance denials. Although the patients were hypothetical, the underlying model was calibrated with real data from sources like the University of Wisconsin Breast Cancer Simulator and other epidemiological data.
For one patient, the probability of surviving with timely screenings was very high—above 85%. That suggests the delay likely contributed significantly to their death.
For the second patient, whose cancer was more aggressive, even with timely detection the probability of survival was only about 15–20%.
So our model shows both sides: cases where denial likely caused real harm, and cases where the outcome may not have changed much regardless.
Kirk: When you think about this topic, do you imagine your framework being used mostly after the fact—like a postmortem—or could insurers use it to make better real-time decisions about approvals?
Raghav: So far, I’ve thought of it primarily as an after-the-fact tool. Something happens—you zoom in on a specific patient—and you want to understand what could have happened under a different policy. That’s what we mean by “counterfactual.”
Kirk: So in a legal case, for example—a surviving spouse might sue the insurance company, arguing that if they had approved the screening earlier, their loved one might have survived. Your model could help estimate the probability of that, right?
Raghav: Exactly. That probability is known in epidemiology as the probability of necessity. It’s a key question in cases like these: in a counterfactual world where the denial didn’t happen, what’s the probability the patient would have survived?
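[Editor’s note: for a binary exposure and outcome, the probability of necessity has well-known closed-form bounds (due to Tian and Pearl), under the assumption that the exposure is independent of the patient’s potential outcomes. A quick sketch with invented mortality rates, where X = 1 is the denial and Y = 1 is death:]

```python
def pn_bounds(p_death_denied, p_death_approved):
    """Tian-Pearl bounds on the probability of necessity under
    exogeneity: given that the patient was denied and died, how
    likely is it that they would have survived without the denial?"""
    lower = max(0.0, (p_death_denied - p_death_approved) / p_death_denied)
    upper = min(1.0, (1.0 - p_death_approved) / p_death_denied)
    return lower, upper

# Hypothetical rates, not from the paper: 70% mortality when the
# screening was denied vs. 15% when it was approved on time.
print(pn_bounds(0.70, 0.15))  # roughly (0.786, 1.0)
```

Even in this crude binary setting the data alone give only an interval; the framework in the paper refines such intervals using the dynamics of the disease and domain knowledge.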
To my knowledge, this framework hasn’t yet been used in litigation, but there are active cases where it could apply. For example, there’s an ongoing class-action lawsuit against the Hawaii Medical Service Association, filed in 2022 by physicians and about 30 patients. They claim the insurer’s prior authorization requirements led to denials of essential services like CT and MRI scans, which harmed patient care.
In one instance, a doctor recommended an MRI, but the insurer initially refused to cover it—approving only physical therapy. By the time the MRI was finally authorized, nearly three months later, the patient was diagnosed with advanced prostate cancer and died within 18 months.
The patient’s spouse later said, “If he had gotten what his doctor wanted him to get early on, it would have been a completely different outcome.” That’s exactly the kind of counterfactual scenario our model is designed to quantify.
There have been similar cases in the past—for example, Fox v. Health Net in the early 1990s, which resulted in an $89 million award for the plaintiffs.
Kirk: That’s a lot of money, even for back then. So what do you think are the main takeaways from your study?
Raghav: A couple of things.
First, from an academic and policy standpoint, our framework provides actionable insights for healthcare policy and legal analysis. It quantifies the impact of delay and deny practices in a rigorous, data-driven way, and it does so at the patient level.
Second, from a technical perspective, the framework is broader than healthcare. Any system that evolves dynamically, has hidden information, and invites counterfactual reasoning could use this approach. Healthcare is just one concrete example.
Kirk: I’d like to thank my guest, Professor Raghav Singal.
You’ve been listening to Knowledge in Practice, a podcast from the Tuck School of Business at Dartmouth. Please like and subscribe, and if you enjoyed the show, leave a review—it helps people find us.
This show was recorded by me, Kirk Kardashian, and produced and sound designed by Tom Whalley. See you next time.