AI-based tool aims to optimize MRI use in prostate cancer diagnosis

Key Takeaways

  • ProMT-ML, a machine learning model, predicts the necessity of prostate MRI, optimizing resource allocation and reducing wait times.
  • The model uses clinical data such as PSA, prostate volume, and BMI to assess the risk of abnormal MRI findings.

"I'm excited to see how these models are going to continue to be developed and applied in clinical practice, because precision medicine is here, and I'm looking forward to it," says Madhur Nayan, MD, PhD.

Although prostate MRI has a valuable role in prostate cancer detection, increasing demand has strained imaging capacity, leading to long wait times that delay cancer diagnoses. In some cases, clinicians are forced to proceed with biopsy without prior MRI, which carries diagnostic trade-offs. This issue is particularly pronounced in under-resourced or rural settings, where limited access to imaging further compounds delays.

To overcome this challenge, investigators sought to develop a machine learning model that can help predict which patients would benefit from a timely MRI scan and which patients may be able to delay, or even forgo, scans altogether. Using data such as prostate-specific antigen (PSA), prostate volume, MRI history, and body mass index, the tool, called ProMT-ML, estimates the risk of a patient having an abnormal MRI, defined as a PI-RADS score of 3 or higher. These personalized insights can help clinicians and patients make informed decisions on next steps, prioritizing MRI utilization for those at higher risk.

Madhur Nayan, MD, PhD

In the following interview, senior author Madhur Nayan, MD, PhD, walks through key findings from the validation of the tool, which were recently presented at the American Urological Association Annual Meeting in Las Vegas, Nevada.1

Overall, the best-performing model, ProMT-ML, showed an area under the curve of 0.750. The model also demonstrated a sensitivity of 86%, a specificity of 42%, a positive predictive value of 54%, and a negative predictive value of 79%.
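These four operating characteristics all follow from a model's confusion matrix. As a quick illustration (the patient counts below are invented for demonstration and are not the study's data), they can be computed like this:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Compute standard diagnostic metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

# Example with hypothetical counts for 100 patients (not study data):
sens, spec, ppv, npv = diagnostic_metrics(tp=43, fp=29, fn=7, tn=21)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.86, 0.42
```

Note that, unlike sensitivity and specificity, PPV and NPV depend on how common abnormal MRIs are in the population studied, which is why the reported predictive values cannot be reconstructed from sensitivity and specificity alone.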

Nayan, who is an assistant professor of urology and population health at NYU Langone Health in New York, New York, also details potential implications and next steps for integration of the model into routine clinical practice.

What is the background/rationale for conducting this study?

Prostate MRI has become a valuable test in prostate cancer diagnosis. It allows us to identify targetable lesions, which improves the cancer detection rate. It also allows us to reduce unnecessary biopsies in patients who have no visible abnormalities on MRI. Because it's become a valuable test, the demand has increased over the years, and this has resulted in prolonged wait times in some health systems. Some patients are waiting several months to get a prostate MRI, and this can delay diagnosis. In some patients, where there's uncertainty about whether that delay is reasonable, a biopsy is done without an MRI, and that has issues as well.

That was the motivation for this study: to help develop a tool that can triage which patients should be getting an MRI more urgently and which patients may not need an MRI at all or can wait a bit longer. We took all of the electronic health record data system-wide at NYU Langone and built this prediction model using clinical, demographic, and disease-specific features and machine learning algorithms.

What were the key findings?

We developed 2 specific models, one that included prostate volume and one that did not, because this information is sometimes available to the clinician and sometimes it isn't. We found that both models outperformed PSA alone. That is expected, because the models include information in addition to PSA, such as age, prostate volume in one model, and body mass index in both models. The model that did not include prostate volume included systolic blood pressure. Both models, not just the one that included prostate volume, outperformed PSA in terms of both sensitivity and specificity.

We also looked at the potential clinical implications of when our model performed poorly. To be a bit more specific, we looked at patients [in whom] the model predicted a normal prostate MRI, and then we looked at their biopsy findings. In the real clinical world, we wanted to see whether these patients may have bad disease. We found that about 90% of them had benign or clinically insignificant disease, suggesting that our model isn't missing patients who have bad prostate cancer. That would be another thing we'd want to look for prospectively with our model, not just in our retrospective dataset: What are the misses of our model? [That way,] when we deploy this more widely, it won't have any big misses or harmful implications.

How might this tool help address disparities in MRI access, particularly in under-resourced or rural settings?

One of the great things about our models is that they use routinely available clinical information, so no additional tests are needed. It's really accessible; it can be used online. The implication is that a patient can come to a urologist, perhaps in a setting where access to prostate MRI is more limited, plug in the information in front of them, such as their age, their body mass index, and their PSA, and get an estimate of their risk of having an abnormal prostate MRI. If a patient has a very high risk, then this is a patient we want to get an MRI done for sooner, so that we can plan their biopsy accordingly. Whereas, if a patient has a very low predicted risk, one may question whether this patient needs an MRI at all, or perhaps they can get that MRI done in a less urgent manner.
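A point-of-care calculator of this kind typically maps a few routine inputs to a probability through a fitted model such as logistic regression. The sketch below is purely illustrative: the coefficients are invented for demonstration and are not ProMT-ML's actual fitted weights or feature set.

```python
import math

# Hypothetical coefficients for illustration only -- NOT ProMT-ML's weights.
COEFFS = {"intercept": -2.0, "age": 0.03, "psa": 0.15, "bmi": -0.02}

def predicted_risk(age, psa, bmi):
    """Return an illustrative probability of an abnormal MRI (PI-RADS >= 3)."""
    logit = (COEFFS["intercept"]
             + COEFFS["age"] * age
             + COEFFS["psa"] * psa
             + COEFFS["bmi"] * bmi)
    return 1.0 / (1.0 + math.exp(-logit))  # logistic link maps logit to (0, 1)

# Example: a 65-year-old with a PSA of 6 ng/mL and a BMI of 28
risk = predicted_risk(age=65, psa=6.0, bmi=28)
```

In a deployed tool, the coefficients would come from training on health-record data, and the output would be presented as a percentage risk to guide how urgently to schedule the MRI.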

What future work is planned based on this study?

Our model has been built using NYU data alone. It's been internally validated in our test sample, but we really need to see how it does in more generalizable populations and in patients who were not included in the model development. We want to look at our model's performance in other datasets. If it performs well in other datasets, then we would want to trial it in our health system and see how it works prospectively. Is it selecting the appropriate patients for MRI? And in the patients it does not predict appropriately, what are the potential misses, and can we make the model even better based on that?

Is there anything else that you wanted to add?

We live in an exciting time where there's a lot of data available and machine learning prediction models that can put all of this data together and make meaningful predictions. I'm excited to see how these models are going to continue to be developed and applied in clinical practice, because precision medicine is here, and I'm looking forward to it.

REFERENCE

1. Persily JB, Chandarana H, Tong A, et al. Development of a machine learning model to triage the use of prostate MRI (PROMT-ML). J Urol. 2025;213(5S):e141. doi:10.1097/01.JU.0001109752.81014.18.02

© 2025 MJH Life Sciences

All rights reserved.