New AI-powered programs offer help to patients stymied by insurance denials.
By Anna Claire Vollers for Stateline
As states try to curb health insurers’ use of artificial intelligence, patients and doctors are arming themselves with AI tools to fight claims denials, prior authorizations and soaring medical bills.
Several companies and nonprofits have launched AI-powered tools to help patients get their insurance claims paid and navigate byzantine medical bills, creating a robotic tug-of-war over who gets care and who foots the bill for it.
Sheer Health, a three-year-old company that helps patients and providers navigate health insurance and billing, now has an app that lets users connect their health insurance account, upload medical bills and claims, and ask questions about deductibles, copays and covered benefits.
“You would think there would be some sort of technology that could explain in real English why I’m getting a bill for $1,500,” said cofounder Jeff Witten. The program uses both AI and humans to provide the answers for free, he said. Patients who want extra help challenging a denied claim or dealing with out-of-network reimbursements can pay Sheer Health to handle those tasks for them.
In North Carolina, the nonprofit Counterforce Health designed an AI assistant to help patients appeal their denied health insurance claims and fight large medical bills. The free service uses AI models to analyze a patient’s denial letter, then looks through the patient’s policy and outside medical research to draft a customized appeal letter.
Other consumer-focused companies use AI to catch billing errors or parse medical jargon. Some patients are even turning to AI chatbots like Grok for help.

A person’s ChatGPT history is seen at a coffee shop in Arkansas in July.
A quarter of adults under age 30 said they used an AI chatbot at least once a month for health information or advice, according to a poll the health care research nonprofit KFF published in August 2024. But most adults said they weren’t confident that the health information is accurate.
State legislators on both sides of the aisle, meanwhile, are scrambling to keep pace, passing new regulations that govern how insurers, physicians and others use AI in health care. Already this year, more than a dozen states have passed laws regulating AI in health care, according to Manatt, a consulting firm.
“It doesn’t feel like a satisfying outcome to just have two robots argue back and forth over whether a patient should access a particular type of care,” said Carmel Shachar, assistant clinical professor of law and faculty director of the Health Law and Policy Clinic at Harvard Law School.
“We don’t want to get on an AI-enabled treadmill that just speeds up.”
A black box
Health care can feel like a black box. If your doctor says you need surgery, for example, the cost will depend on a dizzying number of factors, including your health insurance provider, your specific health plan, its copayment requirements, your deductible, where you live, the facility where the surgery will be performed, whether that facility and your doctor are in-network and your specific diagnosis.
Some insurers may require prior authorization before a surgery is approved. That can entail extensive medical documentation. And after a surgery, the resulting bill can be difficult to parse.
Witten, of Sheer Health, said his company has seen thousands of cases in which a doctor recommends a procedure, like surgery, and then a few days beforehand the patient learns insurance didn’t approve it.
In recent years, as more health insurance companies have turned to AI to automate claims processing and prior authorizations, the share of denied claims has risen. This year, 41% of physicians and other providers said their claims are denied more than 10% of the time, up from 30% of providers who said so three years ago, according to a September report from credit reporting company Experian.
Insurers on Affordable Care Act marketplaces denied nearly 1 in 5 in-network claims in 2023, up from 17% in 2021, and more than a third of out-of-network claims, according to the most recently available data from KFF.
Insurance giant UnitedHealth Group has come under fire in the media and from federal lawmakers for using algorithms to systematically deny care to seniors, while Humana and other insurers face lawsuits and regulatory investigations alleging they used sophisticated algorithms to block or deny coverage for medical procedures.
Insurers say AI tools can improve efficiency and reduce costs by automating tasks that involve analyzing vast amounts of data. And companies say they are monitoring their AI to identify potential problems. A UnitedHealth representative pointed Stateline to the company’s AI Review Board, a team of clinicians, scientists and other experts that reviews its AI models for accuracy and fairness.
“Health plans are committed to responsibly using artificial intelligence to create a more seamless, real-time customer experience and to make claims management faster and more effective for patients and providers,” a spokesperson for America’s Health Insurance Plans, the national trade group representing health insurers, told Stateline.

An insurance agent talks with clients in 2024.
But states are stepping up oversight.
Arizona, Maryland, Nebraska and Texas, for example, have banned insurance companies from using AI as the sole decisionmaker in prior authorization or medical necessity denials.
Dr. Arvind Venkat is an emergency room physician in the Pittsburgh area. He’s also a Democratic Pennsylvania state representative and the lead sponsor of a bipartisan bill to regulate the use of AI in health care.
He’s seen new technologies reshape health care during his 25 years in medicine, but AI feels wholly different, he said. It’s an “active player” in people’s care in a way that other technologies haven’t been.
“If we’re able to harness this technology to improve the delivery and efficiency of clinical care, that is a huge win,” said Venkat. But he worries about AI use without guardrails.
His legislation would require insurers and health care providers in Pennsylvania to be more transparent about how they use AI; require a human to make the final decision any time AI is used; and mandate that they show evidence of minimizing bias in their use of AI.
“In health care, where it’s so personal and the stakes are so high, we need to make sure we’re mandating in every patient’s case that we’re applying artificial intelligence in a way that looks at the individual patient,” Venkat said.
Patient supervision
Historically, consumers rarely challenge denied claims: A KFF analysis found that fewer than 1% of health coverage denials are appealed. And even when they are, patients lose more than half of those appeals.
New consumer-focused AI tools could shift that dynamic by making appeals easier to file and the process easier to understand. But there are limits; without human oversight, experts say, the AI is prone to errors.
“It can be difficult for a layperson to understand when AI is doing good work and when it is hallucinating or giving something that isn’t quite accurate,” said Shachar, of Harvard Law School.
For example, an AI tool might draft an appeals letter that a patient thinks looks impressive. But because most patients aren’t medical experts, they may not recognize when the AI misstates medical facts, derailing an appeal, she said.
“The challenge is, if the patient is the one driving the process, are they going to be able to properly supervise the AI?” she said.
Earlier this year, Mathew Evins learned just 48 hours before his scheduled back surgery that his insurer wouldn’t cover it. Evins, a 68-year-old public relations executive who lives in Florida, worked with his physician to appeal, but got nowhere. He used an AI chatbot to draft a letter to his insurer, but that failed, too.
On his son’s recommendation, Evins turned to Sheer Health. He said Sheer identified a coding error in his medical records and handled communications with his insurer. The surgery was approved about three weeks later.
“It’s unfortunate that the public health system is so broken that it needs a third party to intervene on the patient’s behalf,” Evins told Stateline. But he’s grateful the technology made it possible for him to get life-changing surgery.
“AI in and of itself isn’t an answer,” he said. “AI, when used by a professional that understands the issues and ramifications of a particular problem, that’s a different story. Then you’ve got an effective tool.”
Most experts and lawmakers agree that a human is needed to keep the robots in check.
AI has made it possible for insurance companies to rapidly assess cases and decide whether to authorize surgeries or cover certain medical care. But that ability to make lightning-fast determinations needs to be tempered by a human, Venkat said.
“It’s why we need government regulation and why we need to make sure we mandate an individualized assessment with a human decisionmaker.”

Witten said there are situations in which AI works well, such as when it sifts through an insurance policy, which is essentially a contract between the company and the consumer, and connects the dots between the policy’s coverage and a corresponding insurance claim.
But, he said, “there are complicated cases out there AI just can’t resolve.” That’s when a human needs to review.
“I think there’s a huge opportunity for AI to improve the patient experience and overall provider experience,” Witten said. “Where I worry is when you have insurance companies or other players using AI to completely replace customer support and human interaction.”
Additionally, a growing body of research has found that AI can reinforce bias found elsewhere in medicine, discriminating against women, ethnic and racial minorities, and people with public insurance.
“The conclusions from artificial intelligence can reinforce discriminatory patterns and violate privacy in ways that we have already legislated against,” Venkat said.