Dr. Vickers offers advice to urologists on reporting observational studies

"If you've got a difference, pay attention to the size and direction. If you have a massive difference between groups in terms of outcome, but only a small difference in terms of baseline characteristics, that's worth talking about," says Andrew J. Vickers, PhD.

In this video, Andrew J. Vickers, PhD, discusses recommendations for urologists that were outlined in the publication, “Guidelines for reporting observational research in urology: The importance of clear reference to causality,” for which he served as the lead author. Vickers is the statistical editor for European Urology and an attending research methodologist at Memorial Sloan Kettering Cancer Center in New York, New York.

Video Transcript:

So, first of all, let's be aware of what a confounder is, because people use the term very loosely, as if it meant "anything that might mess up my scientific findings." That's not what a confounder is in epidemiology. In epidemiology, you have an exposure. A classic one would be smoking, but it could be a medical exposure, like getting radiotherapy rather than surgery for your prostate cancer. Then you have an outcome, whether it's getting lung cancer in the smoking example or having metastatic disease in the example of treatments for localized prostate cancer. What you want to do is determine whether there is a causal association between the exposure and the outcome. A confounder is anything that interferes with that pathway.

This is perhaps less true nowadays, but back in the day, you would often find an association between coffee drinking and lung cancer. That's because people who drank coffee also tended to smoke more. So smoking is the confounder: a confounder is associated with both the exposure and the outcome. The first thing we want people to do is understand what a confounder is and what is meant by it. It's a technical term; use it properly.
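To make that idea concrete, here is a minimal simulation of the coffee/smoking/lung cancer scenario. All of the probabilities are invented for illustration and do not come from any actual study. In the simulated data, coffee has no causal effect on lung cancer, yet a crude comparison shows an association, and stratifying on the confounder (smoking) makes it disappear:

```python
# A minimal sketch of confounding; all probabilities are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Smoking is the confounder: it drives both coffee drinking and lung cancer.
smoker = rng.random(n) < 0.30
coffee = rng.random(n) < np.where(smoker, 0.70, 0.40)  # smokers drink more coffee
cancer = rng.random(n) < np.where(smoker, 0.10, 0.01)  # smoking causes cancer
# Note: coffee has NO causal effect on cancer in this simulation.

# Crude comparison: coffee appears "associated" with lung cancer...
crude_rr = cancer[coffee].mean() / cancer[~coffee].mean()
print(f"crude risk ratio: {crude_rr:.2f}")  # well above 1

# ...but stratifying on the confounder makes the association vanish.
for s in (False, True):
    grp = smoker == s
    rr = cancer[grp & coffee].mean() / cancer[grp & ~coffee].mean()
    print(f"risk ratio among {'smokers' if s else 'non-smokers'}: {rr:.2f}")  # ~1
```

Stratification is just the simplest form of adjustment; the multivariable and propensity score methods discussed next are more general versions of the same idea.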

The second thing is not to assume that statistical methods for dealing with confounders are some kind of magic wand. "We did propensity score methods; therefore, it's not an issue." "We did multivariable modeling; therefore, confounding cannot explain our findings." There's something called the E-value, and the claim becomes, "Well, the E-value is high, and therefore confounding is not a problem." There's nothing in the statistical toolbox that you can take out and do a clever bit of math with to replace your scientific judgment about whether unmeasured or residual confounding could explain your results. We want authors to be a little more sophisticated in the way they think about it.
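For readers unfamiliar with the E-value: it is a published sensitivity-analysis measure (VanderWeele and Ding, 2017) of how strongly an unmeasured confounder would have to be associated with both the exposure and the outcome to fully explain away an observed risk ratio. A small sketch of the standard formula follows; note how easy the number is to produce, which is exactly why it cannot substitute for the judgment Dr. Vickers describes:

```python
# The published E-value formula: E = RR + sqrt(RR * (RR - 1)),
# with protective risk ratios (RR < 1) inverted first.
import math

def e_value(rr: float) -> float:
    """Minimum confounder strength needed to explain away the observed RR."""
    if rr < 1:
        rr = 1 / rr  # invert protective effects
    return rr + math.sqrt(rr * (rr - 1))

print(f"{e_value(0.6):.2f}")  # a "40% reduction" -> E-value of about 2.72
```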

For example, one of the things we want authors to do is look at your Table 1. In my hypothetical study of radiotherapy versus surgery, were there differences at baseline between the patients who got surgery and the patients who got radiotherapy, in terms of stage, grade, and PSA, since we're talking about prostate cancer? If there are no differences, that's a good sign, though it doesn't mean there's no confounding. If there are large differences, then we've got to worry about whether our statistical methods adequately controlled for them. Often what epidemiologists say is that differences in measured confounders imply differences in unmeasured confounders.
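One common way to quantify "differences in Table 1" is the standardized mean difference (SMD) between groups. The sketch below uses fabricated PSA values for a hypothetical radiotherapy-versus-surgery cohort; the |SMD| > 0.1 threshold is a widely used rule of thumb, not something prescribed by the guidelines discussed here:

```python
# A hedged sketch: standardized mean differences for Table 1 variables.
# All data below are fabricated for illustration.
import numpy as np

def smd(x_a: np.ndarray, x_b: np.ndarray) -> float:
    """Standardized mean difference: difference in means over pooled SD."""
    pooled_sd = np.sqrt((x_a.var(ddof=1) + x_b.var(ddof=1)) / 2)
    return (x_a.mean() - x_b.mean()) / pooled_sd

rng = np.random.default_rng(1)
psa_radiotherapy = rng.lognormal(2.2, 0.5, 400)  # hypothetical baseline PSA
psa_surgery = rng.lognormal(2.0, 0.5, 600)

print(f"baseline PSA SMD: {smd(psa_radiotherapy, psa_surgery):.2f}")
# |SMD| > 0.1 is a common (arbitrary) flag for meaningful imbalance.
```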

If you've got a difference, pay attention to the size and direction. If you have a massive difference between groups in terms of outcome, but only a small difference in terms of baseline characteristics, that's worth talking about. What about the direction? For example, if risk is higher in one group at baseline and that group has the better outcomes, then confounding is probably not a very good explanation. That's what we want people to think about.
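A hypothetical set of numbers (invented here, not drawn from any study) shows why direction matters: if the radiotherapy group carried more high-risk disease at baseline, confounding by severity should make radiotherapy look worse, so a better observed outcome in that group is hard to attribute to confounding:

```python
# Invented numbers illustrating the "direction" argument.
high_risk_share = {"radiotherapy": 0.60, "surgery": 0.40}  # baseline imbalance
metastasis_rate = {"radiotherapy": 0.08, "surgery": 0.12}  # observed outcomes

for group in ("radiotherapy", "surgery"):
    print(f"{group}: {high_risk_share[group]:.0%} high-risk at baseline, "
          f"{metastasis_rate[group]:.0%} metastasis")

# The group with higher baseline risk did better anyway, so confounding by
# disease severity would have to work backwards to explain the benefit.
```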

The other thing we want people to think about a little bit more is effect size. Is the effect size plausible? I think, and many other commentators have pointed this out, that researchers get a little bit obsessed with the P-value. Smoking either is or is not associated with lung cancer, but the effect size can be informative. One study that I often discuss reported an association between coffee and advanced prostate cancer: coffee drinkers had a lower risk of advanced prostate cancer. Framed as a simple yes or no, the message was: drink coffee to prevent prostate cancer. Well, the effect size was, roughly speaking, a 40% reduction in the risk of advanced prostate cancer. I don't know anyone who thinks that drinking coffee is going to reduce your risk of advanced prostate cancer by 40%. I mean, tamoxifen for women at risk of breast cancer doesn't quite get to 40%. So, think about the effect size that you're getting and whether it makes sense.
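One way to pressure-test an effect size is to translate it into absolute terms. The baseline risk below is an assumed figure for illustration only, not a number from the coffee study Dr. Vickers describes:

```python
# Translating a relative effect into absolute terms.
# The 3% baseline risk is an assumption for illustration only.
baseline_risk = 0.03        # assumed lifetime risk of advanced prostate cancer
relative_reduction = 0.40   # the reported "40% reduction"

treated_risk = baseline_risk * (1 - relative_reduction)
arr = baseline_risk - treated_risk  # absolute risk reduction
nnt = 1 / arr                       # analogous to a "number needed to treat"

print(f"absolute risk: {baseline_risk:.1%} -> {treated_risk:.1%}")
print(f"absolute reduction: {arr:.1%}, roughly 1 case avoided per {nnt:.0f} drinkers")
```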

These are some of the more sophisticated ways that we'd like investigators to think about the problem of confounding, over and above "we used a multivariable model," "we used propensity scores," or "we looked at the E-value."

This transcription has been edited for clarity.

Clinicians referring a patient to MSK can do so by visiting msk.org/refer, by emailing referapatient@mskcc.org, or by calling 833-315-2722.
