Study: Quality of reviews, abstracts declining

Gainesville, FL-Systematic reviews and abstracts for randomized controlled trials (RCTs) appearing in the urologic literature are generally of suboptimal quality, a troubling finding given the role these publications play in informing clinical decision making, say researchers from the University of Florida College of Medicine, Gainesville.

"Results from studies we’ve conducted underscore a need to improve the quality standards for conducting and reporting urologic research," said senior author Philipp Dahm, MD, associate professor of urology and director of clinical research. "This will require efforts by researchers who perform systematic reviews, editors of journals that publish these reports, and meeting organizers who establish the guidelines for structured abstracts."

Based on an impression that systematic reviews were appearing more frequently in the urologic literature and recognizing how these analyses can influence evidence-based clinical practice, Dr. Dahm and colleagues undertook an investigation of the reviews’ methodologic quality. They focused on four leading peer-reviewed journals (the Journal of Urology, European Urology, Urology, and BJU International), searching for systematic reviews published between 1998 and 2007.

Articles were identified using a comprehensive search strategy and were critically appraised by two independent reviewers using a standardized data abstraction form based on the Assessment of Multiple Systematic Reviews (AMSTAR) tool. AMSTAR is a validated rating instrument that considers multiple methodologic dimensions, including literature searching techniques, the inclusion/exclusion criteria for study selection, study quality assessment, and conflicts of interest. Possible scores range from 0 (lowest quality) to 11 (highest quality), and the rating instrument was pilot tested before use.

Declining quality of reviews

A total of 57 systematic reviews were analyzed. With the study period divided into three intervals (1998-2001, 2002-'05, 2006-'08), the data confirmed that the number of published systematic reviews was increasing: 10 appeared in the four journals during the first 4-year interval, compared with 27 during the last 3-year period of the study.

However, many of the papers failed to meet established criteria for execution and reporting, and the quality appeared to be declining, the researchers reported at the 2009 AUA annual meeting in Chicago. The average AMSTAR rating was only 4.8, indicating that the typical review met fewer than half of the quality criteria, and the average score decreased from 5.4 for reviews published in 1998-2001 to 4.7 for those published in the more recent period. The most common deficits included inclusion of studies irrespective of their methodologic quality and use of search strategies that were not appropriately comprehensive.

"We also compared the quality of systematic reviews conducted by authors having some affiliation with the Cochrane Collaboration against those by unaffiliated authors and, not surprisingly, found the former offered better methodological quality based on a two-point higher average score," Dr. Dahm told Urology Times.

RCTs lacking in information

Dr. Dahm and colleagues also reviewed abstracts for RCTs from the 2002 and 2003 AUA annual meetings and found that the majority failed to provide information necessary to assess methodologic quality. This analysis included 126 reports that were assessed by two independent reviewers using a pilot-tested instrument based on the Consolidated Standards of Reporting Trials (CONSORT) statement.

The appraisal revealed that all of the abstracts omitted information considered essential according to the CONSORT statement, such as how the randomization was performed and whether it was concealed. Although about three-fourths of the abstracts reported that the RCT was blinded, the information on blinding was usually not explicit enough for the reader to determine who in the study (ie, patient, investigator, outcomes assessor) was unaware of the treatment received by each participant, Dr. Dahm explained.

"RCTs are considered to provide the highest level of evidence for clinical decision making, but many such studies are published only in abstract form and never appear as full-text articles. It is critically important that these abstracts provide readers the information needed to assess the study’s validity and results in order to determine their applicability to clinical practice," Dr. Dahm said.
