The feds are making clear to insurers that AI can’t be used to deny health care coverage

A nursing home resident is pushed along the corridor by a nurse.

Health insurers cannot use algorithms or artificial intelligence to determine care or deny coverage to members in Medicare Advantage plans, the Centers for Medicare & Medicaid Services (CMS) explained in a memo sent to all Medicare Advantage insurers.

The memo — which is formatted as a FAQ about Medicare Advantage (MA) plan rules — comes just months after patients filed lawsuits alleging that UnitedHealth and Humana are using a deeply flawed, AI-powered tool to deny care to elderly patients in MA plans. The lawsuits, which seek class-action status, center on the same artificial intelligence tool, called nH Predict, used by both insurers and developed by NaviHealth, a subsidiary of UnitedHealth.

According to the lawsuits, nH Predict produces rigid estimates of how long a patient will need post-acute care in facilities such as skilled nursing homes and rehabilitation centers after an acute injury, illness, or event, such as a fall or stroke. NaviHealth employees face discipline for deviating from the estimates, even though they often do not match prescribing physicians’ recommendations or Medicare coverage rules. For example, while MA plans typically provide up to 100 days of covered care in a nursing home after a three-day hospital stay, patients on UnitedHealth’s MA plan rarely stay in nursing homes for more than 14 days under nH Predict before receiving payment denials, the lawsuits allege.

Specific warning

It’s unclear exactly how nH Predict works, but it reportedly uses a database of 6 million patients to develop its predictions. Still, according to people familiar with the software, it accounts for only a small set of patient factors, not a complete look at a patient’s individual circumstances.

This is a clear no-no, according to the CMS memo. For coverage decisions, insurers must “base the decision on the individual patient’s circumstances, so an algorithm that determines coverage based on a larger data set instead of an individual patient’s medical history, physician recommendations, or clinical notes would not be compliant,” CMS wrote.

CMS then presented a hypothetical that matches the circumstances laid out in the lawsuits, writing:

In an example involving a decision to terminate post-acute care services, an algorithm or software tool can be used to assist providers or MA plans in predicting a potential length of stay, but that prediction alone cannot be used as the basis for terminating post-acute care services.

Instead, CMS wrote, for an insurer to terminate coverage, the patient’s individual case must be re-evaluated, and the denial must be based on coverage criteria that are publicly posted on a non-password-protected website. In addition, insurers that deny care must “provide a specific and detailed explanation why the services are no longer reasonable and necessary or are no longer covered, including a description of the applicable coverage criteria and rules.”

In the lawsuits, patients claim that when they were wrongfully denied coverage for doctor-recommended care, insurers failed to provide them with full explanations.

Fidelity

In general, CMS finds that AI tools can be used by insurers when evaluating coverage, but really only as a check to make sure the insurer is following the rules. An “algorithm or software tool should only be used to ensure fidelity” with the coverage criteria, CMS wrote. And because “publicly posted coverage criteria are static and unchanging, artificial intelligence cannot be used to shift the coverage criteria over time” or apply hidden coverage criteria.

CMS sidesteps any debate over what counts as artificial intelligence by offering a broad warning about algorithms and AI. “There are many overlapping terms used in the context of rapidly developing software tools,” CMS wrote:

Algorithms can include a decisional flowchart of a series of if-then statements (i.e., if the patient has a certain diagnosis, they should be able to receive a test), as well as predictive algorithms (predicting the likelihood of a future admission, for example). Artificial intelligence has been defined as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
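To make CMS’s distinction concrete, here is a minimal illustrative sketch of the two kinds of tools the memo describes: a simple if-then decisional rule versus a predictive estimate. This is not from the memo or from nH Predict; all function names, factors, and thresholds are hypothetical.

```python
# Hypothetical illustration only; names and thresholds are invented.

# 1) A "decisional flowchart" algorithm: a series of if-then statements
#    (e.g., if the patient has a certain diagnosis, a test is covered).
def test_is_covered(diagnosis: str) -> bool:
    covered_diagnoses = {"stroke", "hip fracture"}
    return diagnosis in covered_diagnoses

# 2) A predictive algorithm: estimates a likely length of stay from
#    patient features, standing in for a model trained on a large data set.
#    Per the memo, a prediction like this may assist an MA plan, but it
#    cannot by itself be the basis for terminating coverage.
def predicted_length_of_stay_days(recovery_score: float) -> float:
    max_covered_days = 100.0  # MA plans can cover up to 100 days of nursing care
    return max_covered_days * max(0.0, min(recovery_score, 1.0))

print(test_is_covered("stroke"))              # True
print(predicted_length_of_stay_days(0.14))    # 14.0
```

The memo’s point maps onto the second function: the prediction can inform a review, but the termination decision must still rest on the individual patient’s circumstances and publicly posted coverage criteria.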

CMS also openly worried that the use of either of these types of tools can reinforce discrimination and bias, which has already happened with racial bias. CMS warned insurers to ensure that any AI tool or algorithm they use “does not perpetuate or exacerbate existing bias, or introduce new biases.”

While much of the memo was an explicit clarification of existing MA rules, CMS closed by notifying insurers that it is ramping up its audit activities and “will be monitoring closely whether MA plans are utilizing and applying internal coverage criteria that are not found in Medicare laws.” Noncompliance can result in warning letters, corrective action plans, monetary penalties, and enrollment and marketing sanctions.
