In this article, Robert A. Dowling, MD, describes what urologists need to know about the Quality category of MIPS and nuances of the scoring methodology.
Editor’s note: Kate Elam, manager of quality reporting at IntrinsiQ Specialty Solutions, contributed to this article.
Robert A. Dowling, MD

As discussed in a recent article (“Value-based pay in 2017: Where does urology fit?”), most U.S. urologists will be subject to the Merit-based Incentive Payment System (MIPS) in performance year 2017/payment year 2019. For 2017, the Quality category contributes 60% of the weight of the MIPS composite score and therefore presents the greatest opportunity, and the greatest risk, for eligible clinicians. In later years of MIPS, the Quality category’s weight drops to 30%.
Although the final rule contained several modifications that introduced “leniency” in this first year, many urologists want to optimize their performance in preparation for future years under MIPS. In this article, I will describe what urologists need to know about the Quality category and nuances of the scoring methodology.
The progenitor of the MIPS Quality category is the Physician Quality Reporting System (PQRS), and most of the available MIPS Quality measures are inherited from the PQRS program. Many physicians have been subject to payment adjustments under PQRS but have not thoroughly explored their performance on these measures until now. The key to understanding Quality measures and the contribution to scoring is the concept of benchmarking and grouping performance into deciles.
The Centers for Medicare & Medicaid Services has calculated separate benchmarks for each MIPS measure where there is reporting experience under the PQRS program in 2015; on Dec. 26, 2016, those benchmarks were released and are available online at bit.ly/qualityresources. If a measure can be reported by different submission methods, the benchmarks are calculated separately for each submission method (for example, claims, EHR, registry reporting). The benchmarks are calculated on a percentile basis and grouped into 10 deciles.
Under MIPS, your 2017 performance on each measure will be matched to a benchmark decile, which determines the number of points achieved for that measure. While some measures have a “normal distribution” of benchmarks across those deciles, other measures have benchmarks clustered at the high or low end. Many of the oldest measures have very high performance benchmarks because physicians in PQRS have performed well with experience over many years; CMS defines these measures with a median benchmark over 95% as “topped out,” and this has significant implications for MIPS Quality scoring. Finally, for new MIPS Quality measures, the benchmarks will be determined retrospectively once the 2017 performance data are available.
Let’s look at three examples of measures that may be familiar to urologists to illustrate the impact of benchmarks: measure 102 (Prostate Cancer: Avoidance of Overuse of Bone Scan for Staging Low Risk Prostate Cancer Patients), measure 104 (Prostate Cancer: Adjuvant Hormonal Therapy for High Risk Prostate Cancer Patients), and measure 113 (Colorectal Cancer Screening).
In 2017, all eligible clinicians receive at least three points in the Quality category for reporting any data, so the following examples show only deciles 3-10. Within each decile, points are awarded in tenths of a point spread across that decile’s benchmark range.
Let’s postulate that an eligible clinician performs consistently across these three process measures at 80%. Because the benchmarks are very different, the eligible clinician would receive different points for each measure (table): 5.8 points for measure 102, 4.2 points for measure 104, and 9.9 points for measure 113 (EHR submission method). Measure 104 is “topped out” because the median performance is over 95%. A performance of 99.99% would only earn 7.9 points, and 10 points is only achievable with perfect performance. Also note that for measure 113, the bar is set very differently depending on the data submission method.
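The decile mechanics above can be sketched in code. A minimal sketch, with hypothetical decile floors chosen purely for illustration: the real lower bounds for each measure/submission-method combination come from the published CMS benchmark file, and CMS’s own tenth-of-a-point assignments govern actual scoring, which this sketch approximates with linear interpolation.

```python
def mips_quality_points(rate, decile_floors):
    """Approximate MIPS Quality scoring for one measure.

    rate          -- performance rate on the measure, in percent
    decile_floors -- lower bounds (%) of deciles 3 through 10 for one
                     measure/submission-method combination (hypothetical
                     here; the real values come from the CMS benchmark file)
    """
    if rate < decile_floors[0]:
        return 3.0                              # reporting floor: 3 points
    if rate >= decile_floors[-1]:
        return 10.0                             # top decile earns full credit
    for d, (lo, hi) in enumerate(zip(decile_floors, decile_floors[1:]), start=3):
        if lo <= rate < hi:
            frac = (rate - lo) / (hi - lo)      # position within the decile
            return round(d + frac * 0.9, 1)     # d.0 through d.9
```

Against evenly spread hypothetical floors such as `[20, 30, 40, 50, 60, 70, 80, 90]`, an 80% rate lands at the bottom of decile 9 and earns 9.0 points; against a “topped out” set clustered near 100%, the same 80% earns only the 3-point reporting floor, mirroring the article’s point that identical performance can score very differently across measures.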
To optimize their Quality category score in 2017, an eligible clinician may report six measures, each worth up to 10 points, for a maximum total of 60 points. The preceding examples show that the choice of measures may significantly limit the potential to achieve a maximum score, and they suggest that eligible clinicians should carefully choose which measures to report, and by which submission method, in order to optimize their Quality category score.
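The category arithmetic can be illustrated with the article’s own example points; the three remaining measures here are assumed, hypothetically, to score a perfect 10 each.

```python
# First three values are the article's example scores for measures
# 102, 104, and 113 (EHR submission); the last three are hypothetical
# perfect scores on three other reported measures.
measure_points = [5.8, 4.2, 9.9, 10.0, 10.0, 10.0]

total = sum(measure_points)            # out of a possible 60 points
quality_pct = total / 60 * 100         # Quality category score, in percent
composite_points = quality_pct * 0.60  # Quality's contribution to the
                                       # 100-point composite at 60% weight
```

Even with perfect scores on half the measures, this clinician captures only about 83% of the available Quality points, showing how measure selection alone can cap the score.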
Choices for eligible clinicians may be further limited by the current capabilities of most EHRs, especially specialty EHRs, because the cost of developing and certifying a Quality measure is significant; as a result, many EHRs offer only a small subset of the MIPS measures for electronic reporting (EHR or registry submission method). This is among the reasons that CMS created a transition year in 2017 and may yet modify the 2018 performance year requirements: many measures are not yet practically available to all eligible clinicians.
Bottom line: Performance in the Quality category of MIPS is the most important determinant of the MIPS composite score in the initial years of the program, and grading is done against published benchmarks based on actual past performance of eligible clinicians. Urologists should survey the Quality metrics available to them for submission, understand where the bar is set for each measure/submission method combination, and choose the measures and submission method that optimize their chance of success. The year 2017 offers any 90-day reporting period for quality measures, so there is still time to adjust strategy, maximize performance, and gain experience in the shift to value-based reimbursement.