Allocation of clinical resources in oncology depends on a physician's ability to predict recurrence and survival. Currently, clinicians use a tumor's stage, grade, and type as the primary determinants of recurrence risk.
Over the last decade, dozens of predictive models have been developed to improve on the ability of stage, grade, and type to predict recurrence. Many are used in everyday practice to plan surveillance strategies, guide adjuvant therapy, and determine eligibility for clinical trials. Unfortunately, many of the prediction models used by clinicians to guide patient care and clinical trial selection for solid tumors may not be as accurate as they are thought to be, according to a new study by researchers at Fox Chase Cancer Center.
“Medicine has come to rely on prognostic models and has put a lot of time and resources into developing them, but they are far less robust than we had hoped,” said study author Robert G. Uzzo, MD, MBA, FACS, Chair of the Department of Surgical Oncology at Fox Chase Cancer Center. “In some cases, using these models to predict future events is not much better than the flip of a coin.”
Uzzo and his colleagues recently conducted a study to validate the most widely used and currently accepted models for predicting whether patients with renal cell carcinoma (RCC) will have their disease return after treatment. These models have also been the basis for nearly all adjuvant clinical trials in kidney cancer in the last decade.
The findings were recently published as "Predicting renal cancer recurrence: Defining limitations of existing prognostic models with prospective trial-based validation" in the Journal of Clinical Oncology (J Clin Oncol. 2019 Aug 10;37(23):2062-2071).
Historically, clinicians relied primarily on a combination of anatomic stage (the TNM system), grade, and histology to predict oncologic events. More recently, Uzzo explained, several institutions have used retrospective data to develop prognostic models, which combine clinical and/or pathologic variables to help predict how long a patient may stay in remission (recurrence-free survival) or how long they might survive (overall survival).
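For readers curious about the mechanics, the sketch below shows a hypothetical points-based prognostic score in the spirit of such models, where each pathologic variable contributes points and the total maps to a risk stratum. The variables, cut-offs, and point values are invented for illustration; they are not the published weights of SSIGN, UISS, or any model evaluated in the study.

```python
# A hypothetical points-based prognostic score. All cut-offs and point
# values below are illustrative assumptions, NOT published model weights.

def risk_points(stage: int, tumor_cm: float, grade: int, necrosis: bool) -> int:
    """Sum points over pathologic features; more points = higher risk."""
    points = {1: 0, 2: 1, 3: 2, 4: 4}[stage]     # pathologic T stage
    points += 2 if tumor_cm >= 5.0 else 0        # tumor size cut-off
    points += {1: 0, 2: 0, 3: 1, 4: 3}[grade]    # nuclear grade
    points += 2 if necrosis else 0               # coagulative necrosis
    return points

def risk_group(points: int) -> str:
    """Map the total score onto a named risk stratum."""
    if points <= 2:
        return "low risk"
    if points <= 5:
        return "intermediate risk"
    return "high risk"

# Example patient: pT3, 6 cm tumor, grade 3, necrosis present.
p = risk_points(stage=3, tumor_cm=6.0, grade=3, necrosis=True)
print(p, risk_group(p))  # -> 7 high risk
```

In real models of this kind, the point values are derived from regression on a development cohort, which is precisely why performance can degrade when the model is applied to patients unlike those in the original data.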
“If a patient has just had surgery to have a cancer removed, common questions from the patient are, ‘Doctor, will this cancer come back?’ or ‘Am I at risk of dying from this cancer?’” Uzzo said. “Clinicians rely on these big data models to communicate these risks to the patient and help guide decisions on whether or not the patient should get additional therapy or be enrolled in a clinical trial.”
However, these models have notable limitations: they were developed before many of today's effective systemic therapies for metastatic RCC existed, and they rely primarily on retrospective, single-institution data.
In the study, Uzzo and his colleagues tested the accuracy of the eight most commonly used RCC models using prospective data from a large adjuvant kidney cancer trial (the ASSURE trial). The use of prospective rather than retrospective data allowed for centralized validation of the clinical and pathological variables entered into the models and standardized reporting of the measured outcomes.
Accuracy was measured using the concordance index (C-index), which reflects how often a model correctly ranks which of two patients will experience an event first. A C-index of 0.5 is equivalent to the flip of a coin, while 1.0 indicates perfect concordance between predictions and outcomes, Uzzo explained.
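To make the metric concrete, here is a minimal sketch of how a Harrell-style C-index can be computed for right-censored survival data. The pairwise-counting logic is a simplified textbook version (it ignores tied event times), and the toy follow-up times and risk scores are illustrative assumptions, not data from the published analysis.

```python
from itertools import combinations

def concordance_index(times, events, risk_scores):
    """Simplified Harrell's C-index for right-censored survival data.

    A pair of patients is 'comparable' when we can tell who failed
    first: the earlier time must be an observed event, not a censored
    follow-up. The pair is concordant when the model assigns the
    higher risk score to the patient who failed earlier.
    """
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        # Order the pair so patient a has the earlier follow-up time.
        a, b = (i, j) if times[i] < times[j] else (j, i)
        if not events[a]:
            continue  # earlier time is censored -> pair not comparable
        comparable += 1
        if risk_scores[a] > risk_scores[b]:
            concordant += 1    # higher predicted risk failed first
        elif risk_scores[a] == risk_scores[b]:
            concordant += 0.5  # tied scores count as half
    return concordant / comparable

# Toy cohort: follow-up times (months), event flags (1 = recurrence,
# 0 = censored), and a hypothetical model's risk scores.
times = [12, 30, 45, 60, 24]
events = [1, 1, 0, 0, 1]
scores = [0.5, 0.6, 0.3, 0.2, 0.7]
print(f"C-index: {concordance_index(times, events, scores):.3f}")  # 0.778
```

A score of 0.778 here means the toy model correctly orders about 78% of the comparable patient pairs, which is the same sense in which the published C-indices below should be read.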
Among the eight models tested, the best-performing (Mayo Clinic's SSIGN model) had a C-index of 0.688. The worst-performing (UCLA's UISS) had a C-index of 0.556. Most of the tested models only marginally outperformed the more traditional TNM staging system (C-index of 0.60).
“In the initial studies of these models, or in their subsequent validation studies, most had a predictive index somewhere around 0.8 or 0.9,” Uzzo said. “But our study showed indices closer to 0.6, which is not that much more predictive than TNM.”
Although this study examined RCC-specific models, these issues may not be unique to this disease; prediction models used in other cancers, such as breast, bladder, and prostate cancer, should be similarly examined.
“Right now, these models are all we have to guide treatment and help decide on enrollment in a clinical trial,” Uzzo said. “But these data show that we should not rely only on these models and that they need to be more rigorously tested.”
Things to Know About Cancer Prediction Models
- Some widely used prediction models for RCC recurrence are not as accurate as previously believed.
- This is because the models rely on retrospective data and predate many of the newer systemic therapies.
- A recent publication found that the eight most commonly used RCC models were only marginally better than the traditional approach, which uses TNM stage, grade, and histology to predict oncologic events.
- The lower-than-expected predictive ability of the RCC models may not be unique to this disease. Predictive models for other cancer types (including breast, bladder, and prostate) should be evaluated as well.
This work was supported by the National Cancer Institute of the National Institutes of Health under award numbers CA180820, CA180794, CA180867, CA180858, CA180888, and CA180863, and by the Canadian Cancer Society (grant #704970).