Personalized medicine reconsidered: limits of artificial intelligence in clinical trials

summary: A new study reveals limitations in the current use of mathematical models for personalized medicine, especially in the treatment of schizophrenia. Although these models can predict patient outcomes in specific clinical trials, they fail when applied to different trials, challenging the reliability of AI-based algorithms in diverse settings.

This study underscores the need for algorithms to prove their effectiveness in multiple contexts before they can truly be trusted. The findings highlight a significant gap between the potential of personalized medicine and its current practical application, especially given the variability in clinical trials and real-world medical settings.

Key facts:

  1. Mathematical models currently used in personalized medicine are effective in specific clinical trials but fail to generalize across different trials.
  2. The study raises concerns about the application of artificial intelligence and machine learning in personalized medicine, especially in conditions such as schizophrenia where response to treatment varies widely between individuals.
  3. The research suggests that more comprehensive data sharing and the inclusion of additional environmental variables could improve the reliability and accuracy of AI algorithms in medical treatments.

source: Yale

The search for personalized medicine, a medical approach in which practitioners use a patient’s unique genetic profile to design individualized treatment, has emerged as a critical goal in the healthcare sector. But a new study led by Yale University shows that the mathematical models currently available to predict treatments have limited effectiveness.

In analyzing clinical trials of multiple schizophrenia treatments, researchers found that mathematical algorithms were able to predict patient outcomes within the specific trials for which they were developed, but failed to work for patients participating in different trials.

The results were published January 11 in the journal Science.

“This study challenges the status quo of algorithm development and raises the bar for the future,” said Adam Chekroud, MD, associate professor of psychiatry at Yale School of Medicine and corresponding author of the paper. “Right now, I would say we need to see the algorithms work in at least two different settings before we can really get excited about them.”

“I’m still optimistic, but as medical researchers we have some serious things to figure out,” he added.

Chekroud is also president and co-founder of Spring Health, a private company that provides mental health services.

Schizophrenia, a complex brain disorder that affects about 1% of the U.S. population, perfectly illustrates the need for more personalized treatments, researchers say. Up to 50% of patients diagnosed with schizophrenia fail to respond to the first antipsychotic drug they are prescribed, yet it is currently impossible to predict which patients will respond to treatment and which will not.

Researchers hope that new technologies using machine learning and artificial intelligence will yield algorithms that better predict which treatments will work for different patients, help improve outcomes and reduce costs of care.

However, due to the high cost of conducting clinical trials, most algorithms are developed and tested using data from a single trial. Researchers have nonetheless hoped that these algorithms would still work when applied to patients with similar profiles receiving similar treatments.

For the new study, Chekroud and his colleagues at Yale wanted to test whether this hope was justified. To do so, they pooled data from five clinical trials of schizophrenia treatments made available through the Yale Open Data Access (YODA) project, which advocates for and supports the responsible sharing of clinical research data.

In most cases, they found that the algorithms effectively predicted patient outcomes within the clinical trial in which they were developed. However, they failed to effectively predict outcomes for schizophrenia patients in the other clinical trials.
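The evaluation design described here, training an algorithm on one trial and testing it on the others, can be sketched as a leave-one-trial-out loop. The sketch below is a generic illustration, not the authors' actual code: the trial data are simulated, and the "model" is a trivial placeholder (per-arm mean outcome) standing in for whatever algorithm a real study would fit.

```python
import random

def train_model(records):
    """Toy 'model': memorize the mean outcome per treatment arm.
    A placeholder for whatever algorithm a real study would fit."""
    sums, counts = {}, {}
    for arm, outcome in records:
        sums[arm] = sums.get(arm, 0.0) + outcome
        counts[arm] = counts.get(arm, 0) + 1
    return {arm: sums[arm] / counts[arm] for arm in sums}

def evaluate(model, records):
    """Mean absolute error of the model's per-arm predictions."""
    errors = [abs(model.get(arm, 0.5) - outcome) for arm, outcome in records]
    return sum(errors) / len(errors)

def leave_one_trial_out(trials):
    """For each trial: train on all the other trials pooled, test on the held-out one."""
    results = {}
    for held_out in trials:
        pooled = [r for name, recs in trials.items() if name != held_out for r in recs]
        model = train_model(pooled)
        results[held_out] = evaluate(model, trials[held_out])
    return results

# Simulated data: five 'trials', each a list of (treatment_arm, outcome) pairs.
random.seed(0)
trials = {
    f"trial_{i}": [(random.choice(["drug_a", "drug_b"]), random.random())
                   for _ in range(100)]
    for i in range(5)
}
scores = leave_one_trial_out(trials)
```

Each held-out trial plays the role of an "independent setting": a model that only captured quirks of the pooled training trials will score poorly on it.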

“The algorithms almost always worked the first time,” Chekroud said. “But when we tested them in patients from other trials, their predictive value was no better than chance.”

The problem, according to Chekroud, is that most of the mathematical algorithms used by medical researchers were designed for much larger data sets. Clinical trials are expensive and time-consuming, so studies usually enroll fewer than 1,000 patients.

Applying powerful AI tools to such small data sets can lead to “overfitting,” where the model learns response patterns that are idiosyncratic to the initial trial data and disappear when new data are added, he said.
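Overfitting on small samples can be demonstrated with a deliberately extreme toy example, again not drawn from the study itself. Here a 1-nearest-neighbor classifier, flexible enough to memorize any training set, is fit to a small simulated "trial" whose labels are pure noise: it scores perfectly on the data it was developed on, but at chance on an independent simulated trial.

```python
import random

def nearest_neighbor_predict(train, x):
    """1-nearest-neighbor: return the label of the closest training point.
    Flexible enough to memorize a training set perfectly."""
    best = min(train, key=lambda pair: sum((a - b) ** 2 for a, b in zip(pair[0], x)))
    return best[1]

def simulate_trial(n, seed):
    """A toy 'trial': random patient features with labels unrelated to them,
    so there is no true signal for any model to find."""
    rng = random.Random(seed)
    return [([rng.random() for _ in range(5)], rng.choice([0, 1]))
            for _ in range(n)]

def accuracy(train, test):
    hits = sum(nearest_neighbor_predict(train, x) == y for x, y in test)
    return hits / len(test)

trial_a = simulate_trial(60, seed=1)    # small "development" trial
trial_b = simulate_trial(200, seed=2)   # an independent trial

in_sample = accuracy(trial_a, trial_a)      # tested on its own training data
out_of_sample = accuracy(trial_a, trial_b)  # tested on the new trial
```

The in-sample score is a perfect 1.0 (each point is its own nearest neighbor), while the out-of-sample score hovers around chance, mirroring the pattern the researchers report: impressive within the development trial, uninformative outside it.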

“The reality is that we need to think about developing algorithms the same way we think about developing new drugs,” he said. “We need to see algorithms working at different times and in different contexts before we can really believe in them.”

The researchers added that including additional environmental variables might improve the algorithms’ success in analyzing clinical trial data. For example, does the patient use drugs, or have personal support from family or friends? These are the kinds of factors that can affect treatment outcomes.

Most clinical trials use precise criteria to improve the chances of success, such as guidelines for which patients should be included (or excluded), careful measurement of outcomes, and restrictions on the number of doctors providing treatment. At the same time, real-world settings have a much greater diversity of patients and greater variation in the quality and consistency of treatment, researchers say.

“In theory, clinical trials should be the easiest place for algorithms to work,” said co-author John Krystal, the Robert L. McNeil, Jr. Professor of Translational Research and professor of psychiatry, neuroscience, and psychology at Yale School of Medicine. “But if algorithms can’t generalize from one clinical trial to another, their use in clinical practice will be even more difficult.” Krystal is also chair of the Department of Psychiatry at Yale.

Chekroud notes that increased data sharing among researchers, and large-scale collection of additional data by healthcare providers, may help improve the reliability and accuracy of AI-based algorithms.

“Although the study focused on schizophrenia, it raises difficult questions about personalized medicine more broadly, and about its application in cardiovascular disease and cancer,” said Philip Corlett, assistant professor of psychiatry at Yale and co-author of the study.

Other Yale authors of the study include Hieronymus Loho; Ralitza Gueorguieva, a senior research scientist at the Yale School of Public Health; and Harlan M. Krumholz, the Harold H. Hines, Jr. Professor of Medicine (Cardiology).

About this AI and personalized medicine research news

author: Bess Connolly
source: Yale
communication: Bess Connolly – Yale
picture: Image credited to Neuroscience News

Original research: Closed access.
“Illusory generalizability of clinical prediction models” by Adam Chekroud et al. Science


Abstract

Illusory generalizability of clinical prediction models

It is widely hoped that statistical models can improve decision-making regarding medical treatments. Given the cost and scarcity of medical outcomes data, this hope typically rests on researchers observing the model’s success in one or two data sets or clinical contexts.

We tested this optimism by examining how well a machine learning model performed across several independent clinical trials of antipsychotic medications for schizophrenia.

The models predicted patient outcomes with high accuracy within the trial in which the model was developed but performed no better than chance when applied out of sample. Pooling data across trials to predict outcomes in the excluded trial did not improve predictions.

These findings suggest that models predicting treatment outcomes in schizophrenia are highly context-dependent and may have limited generalizability.
