
Rand Health Q. 2014 Dec 30; 4[3]: 9.

Published online 2014 Dec 30.

Findings from an Environmental Scan, Literature Review, and Expert Panel Discussions

Abstract

Value-based purchasing [VBP] refers to a broad set of performance-based payment strategies that link financial incentives to health care providers' performance on a set of defined measures in an effort to achieve better value. The U.S. Department of Health and Human Services is advancing the implementation of VBP across an array of health care settings in the Medicare program in response to requirements in the 2010 Patient Protection and Affordable Care Act, and policymakers are grappling with many decisions about how best to design and implement VBP programs so that they are successful in achieving stated goals.

This article summarizes the current state of knowledge about VBP based on a review of the published literature, a review of publicly available documentation from VBP programs, and discussions with an expert panel composed of VBP program sponsors, health care providers and health systems, and academic researchers with VBP evaluation expertise. Three types of VBP models were the focus of the review: [1] pay-for-performance programs, [2] accountable care organizations, and [3] bundled payment programs. The authors report on VBP program goals and what constitutes success; the evidence on the impact of these programs; factors that characterize high- and low-performing providers in VBP programs; the measures, incentive structures, and benchmarks used by VBP programs; evidence on spillover effects and unintended consequences; and gaps in the knowledge base.

Value-based purchasing [VBP] refers to a broad set of performance-based payment strategies that link financial incentives to providers' performance on a set of defined measures. Both public and private payers are using VBP strategies in an effort to drive improvements in quality and to slow the growth in health care spending. Nearly ten years ago, the Department of Health and Human Services [HHS] and the Centers for Medicare and Medicaid Services [CMS] began testing VBP models with the Premier Hospital Quality Incentive Demonstration [HQID], a hospital pay-for-performance [P4P] demonstration, and the Physician Group Practice [PGP] Demonstration, which provided financial incentives to physician groups that performed well on quality and cost metrics. The use of financial incentives as a strategy to drive improvements in care dates back even further among private payers[2] and Medicaid programs, with limited experimentation occurring in the early 1990s; more widespread use of P4P began to pick up steam in the late 1990s and early 2000s.

Although the published evidence from P4P programs implemented by private-sector payers between 2000 and 2010 showed mostly modest results in improving performance,[3–10] public and private payers have continued to experiment with the use of financial incentives as a policy lever to drive improvements in care. Many of the early P4P program designs have evolved over time to include a larger and broader set of measures, including resource use and cost metrics, in an effort to reward providers for delivering value,* and many programs are deploying a wider range of incentives. Additionally, other VBP models have since emerged and are currently being tested, including accountable care organizations [ACOs] and bundled payment programs that include both quality and cost design features. VBP models are relatively new to the health system, and they represent a work in progress in terms of understanding how best to design these programs to achieve desired goals, the optimal conditions that support successful implementation, and provider response to the incentives.

Policy Context and Study Purpose

The Medicare program has gradually been moving toward implementing VBP across various care settings, starting with pay-for-reporting programs [e.g., the Hospital Inpatient Quality Reporting program and the Physician Quality Reporting Initiative] and P4P demonstrations to gain experience. The 2010 Patient Protection and Affordable Care Act[11] significantly expands VBP by requiring the Medicare program to implement, develop plans for, and test in the context of demonstrations the use of VBP across a broad set of providers and settings of care.

As HHS actively considers the federal government's near- and long-term strategy for how to design and implement VBP programs within the Medicare program, the department is seeking to apply the best available evidence to guide policymaking. Because of the substantial investments that HHS is making regarding VBP, it is an opportune moment to reflect on what has been learned from the past decade of experimentation that could guide current and future federal efforts. It is also a good time to consider the type of monitoring and systematic evaluation work that is needed to generate the information that policymakers require to fine-tune VBP program designs and to understand the impact these programs are having related to stated goals.

In 2012, the Office of the Assistant Secretary for Planning and Evaluation [ASPE] in HHS asked RAND to review what has been learned about VBP over the past decade that might help inform policymaking. The goal of the review was to understand whether VBP programs have been successful, what the elements of successful programs are, and the gaps in the knowledge base that need to be addressed to improve the design and functioning of VBP programs moving forward. This article summarizes the findings from RAND's review. We direct readers to the companion document to this summary report, Measuring Success in Health Care Value-Based Purchasing Programs: Summary and Recommendations.

Conceptual Framework for Assessing the Effects of Value-Based Purchasing Programs

To help us consider the research questions that ASPE asked RAND to address, we developed a conceptual framework for VBP. The model is adapted from a conceptual model by Dudley et al.[12] and includes three core elements that interact to shape the response to VBP:

  • Program design features [i.e., measures, incentive structure, target of incentive, and quality improvement support/resources]

  • Characteristics of the providers and the settings in which they practice that may predispose them to a response

  • External factors [e.g., other payment policies, other quality initiatives, regulatory changes] that can enable or hinder provider response to the incentive.

The conceptual framework offers a foundation for considering the design features of the incentive program, as well as other mediating factors that influence whether and how providers may respond to the incentives and whether programs are successful in reaching stated goals. Largely, VBP programs are natural experiments, and the associated research is observational in nature. Dudley [2005] underscores that, as a result, it is critical that evaluators select theory-driven hypotheses about how incentives affect behavior to identify potential confounding factors that could explain observed effects.[13] Policymakers and researchers could use this framework to develop theory-driven hypotheses.

Study Approach

We defined VBP programs as private or public programs that link financial reimbursement to performance on measures of quality [i.e., structure, process, outcomes, access, and patient experience] and cost or resource use. We focused our review on three types of VBP models: [1] P4P, which includes both “pay for quality” and “pay for quality and resource use, efficiency, or costs”; [2] shared savings models that typically, but not exclusively, are being deployed in the context of ACOs; and [3] bundled payments for episodes of care [only when paired with holding providers accountable for performance on quality measures]. We excluded from review pay-for-reporting and demand-side programs [e.g., tiered networks and consumer incentives].

We define each of the three broad types of VBP models as follows:

  • Pay-for-performance refers to a payment arrangement in which providers are rewarded [bonuses] or penalized [reductions in payments] based on meeting pre-established targets or benchmarks for measures of quality and/or efficiency.

  • Accountable care organization refers to a health care organization composed of doctors, hospitals, and other health care providers who voluntarily come together to provide coordinated care and agree to be held accountable for the overall costs and quality of care for an assigned population of patients. The payment model ties provider reimbursements to performance on quality measures and reductions in the total cost of care. Under an ACO arrangement, providers in the ACO agree to take financial risk and are eligible for a share of the savings achieved through improved care delivery provided they achieve quality and spending targets negotiated between the ACO and the payer.

  • Bundled payments** are a method in which payments to health care providers are based on the expected costs for a clinically defined episode or bundle of related health care services. The payment arrangement includes financial and quality performance accountability for the episode of care.
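
The interaction of cost and quality accountability in these models can be sketched concretely. Below is a minimal, hypothetical illustration of an ACO-style shared savings calculation with a quality gate; the function, parameter values, and the linear quality scaling are invented for illustration and are not drawn from any specific program reviewed here.

```python
# Hypothetical ACO-style shared-savings calculation (illustrative only;
# all parameter values are invented, not taken from any actual VBP program).

def shared_savings_payout(benchmark_spend, actual_spend, quality_score,
                          sharing_rate=0.5, quality_gate=0.70,
                          min_savings_rate=0.02):
    """Return the provider's shared-savings payment.

    benchmark_spend  -- expected total cost of care for the population
    actual_spend     -- observed total cost of care
    quality_score    -- composite quality score in [0, 1]
    sharing_rate     -- maximum share of savings the provider may keep
    quality_gate     -- minimum quality score required to share in savings
    min_savings_rate -- savings must exceed this fraction of the benchmark
                        before any sharing occurs (guards against noise)
    """
    savings = benchmark_spend - actual_spend
    if savings <= benchmark_spend * min_savings_rate:
        return 0.0  # savings too small, or costs exceeded the benchmark
    if quality_score < quality_gate:
        return 0.0  # quality gate not met: no sharing despite savings
    # Scale the provider's share by quality, so that better quality
    # performance earns a larger fraction of the realized savings.
    return savings * sharing_rate * quality_score

# Example: $100M benchmark, $95M actual spend, composite quality 0.90
payout = shared_savings_payout(100_000_000, 95_000_000, 0.90)
# about $2.25M under these assumed terms
```

The quality gate illustrates the design point in the definitions above: under these arrangements, cost savings alone are not sufficient; the provider must also meet quality targets to share in the savings.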

ASPE identified 16 research questions that were the focus of this review, organized by three broad areas of inquiry: [1] measuring the performance of VBP programs; [2] the results of performance in VBP programs; and [3] improving the performance of VBP programs. We used three approaches to gather information to address the questions:

  • Environmental scan of existing value-based purchasing programs: We reviewed information that was publicly available for 129 VBP programs [91 P4P programs, 27 ACOs, and 11 bundled payment programs] sponsored by private health plans, regional collaboratives, Medicaid agencies or states, and the federal government. The VBP programs we reviewed do not represent the universe of all VBP programs in current operation in the United States, and the documentation for some programs we reviewed was not complete given the proprietary nature of the information.

  • Review of the published evaluation literature on value-based purchasing: We examined the peer-reviewed published literature for studies that evaluated the impact of P4P, ACO, or VBP-type bundled payment programs.

  • Input from a technical expert panel: We convened a technical expert panel [TEP], composed of VBP program sponsors, providers from health systems who have been the target of VBP programs, and health services researchers with expertise in examining the effects of VBP programs, to help address many of the study questions where the literature was void of information. We provided the TEP with the findings from the environmental scan and the literature review as background information for the panel's discussions.[14]

Summary of Findings

We summarize the findings from the environmental scan of existing programs, the literature review, and our discussions with the TEP in an integrated manner. The findings are organized by the topic areas we were asked to address in the scope of work for this project. We direct readers of this summary to its companion report, Measuring Success in Health Care Value-Based Purchasing Programs: Summary and Recommendations, which provides a set of recommendations that emerged from our review and TEP discussions.

Goals of Value-Based Purchasing Programs

Based on our review of VBP programs in operation, VBP program sponsors tend to identify multiple high-level goals that focus on improving clinical quality [75 percent of the programs we reviewed] and cost/affordability [53 percent of the programs we reviewed]. Less commonly reported were goals related to improving patient outcomes [34 percent] and patient experience [17 percent]. There was some variation in goals by VBP program type, with goals focused on coordination of care and patient experience more prevalent in ACO and bundled payment programs as compared with P4P programs.

In most cases, the goals specified by VBP program sponsors were not quantified or measurable [e.g., “breakthrough improvement in quality” or “bend the cost curve”]. In a handful of cases [five of the 129 programs we reviewed], we found quantified goals related to desired cost savings [e.g., “keep 2010 health care premium costs flat” and “reduce the annual increase in cost of care by two percentage points”]. Our inability to find the specific performance goals for many of the VBP programs, particularly programs sponsored by private-sector payers, is likely a function of the proprietary nature of this information. Performance measures and thresholds are embedded within the contracts negotiated between providers [i.e., physicians, physician organizations, hospitals] and payers.

The absence of quantifiable goals for many programs makes it difficult to determine whether programs have been successful in meeting their goals; instead, evaluators and program sponsors typically examine whether performance on the incentivized measures improved over time. Given this difficulty, the TEP recommended that individual VBP program sponsors establish well-defined, measurable intermediate goals [i.e., program performance targets] derived from external benchmarks and use these to assess success.

Our discussions with the TEP also revealed support for VBP programs having broad goals, and panelists commented that beyond driving improvements in quality and costs, the larger goal of VBP is to transform the way care is delivered to enhance performance. TEP members outlined the following additional goals that they believed would be important to establish and potentially measure to assess VBP program success:

  • Stimulate organizational nimbleness to rapidly learn and improve in order to achieve a new performance target. TEP members indicated that a key goal of VBP is improving the functional capacity of providers to learn and improve. Therefore, it is important to understand whether there is capacity in health systems and provider organizations to improve quality against a moving target, and whether performance levels can be maintained once targets are achieved. TEP members commented that VBP programs should affect providers' willingness to change, their measurement capacity to identify problems, and their ability to respond to correct quality defects.

  • Promote innovation. The panelists commented that part of the value of VBP is the innovation that occurs to fix the fundamental problems leading to poor quality and outcomes within provider organizations and, ideally, across providers in response to the incentive scheme. Examples they cited were the creation of more integrated data systems to improve communication between providers, the development of care management protocols that span care settings to improve transitions in care between the hospitals and ambulatory settings, investments in registries that allow physicians to track and better manage high risk populations, the development and use of risk assessment tools, and provision of clinical decision support. There was interest among the TEP panelists in capturing whether and how VBP initiatives are stimulating innovation.

Although the TEP identified a desire to understand whether VBP is successful in helping to make providers “more nimble” and to “improve their functional capacity for learning and improvement,” it remains unclear at this stage what providers would need to demonstrate to prove that these aspirational goals had been met. To the extent that these are desired characteristics that VBP program sponsors want to encourage, work is required to define what is meant by these concepts so that VBP sponsors could determine whether this evolution has occurred.

The TEP also discussed whether success should be defined by levels [i.e., absolute performance achieved] or by the counterfactual [i.e., the extent of improvement in performance compared with what it would have been absent the VBP program]. A VBP program sponsor may consider a program successful if a certain level of performance is met, whereas researchers would consider a program successful if greater improvements in performance occurred for those providers exposed to VBP as compared with those who were not [i.e., the comparison group]. The latter perspective is important because quality may be improving broadly over time as a function of a variety of factors, such as quality improvement interventions and infrastructure improvements distinct from actions undertaken in response to the VBP program, so providers may reach the stated goals in the absence of a VBP program. This discussion highlighted important differences in what program sponsors, policymakers, and researchers are interested in evaluating and what defines success.
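
The counterfactual perspective can be expressed as a simple difference-in-differences calculation, in which the program effect is the improvement among VBP-exposed providers over and above the improvement in a comparison group. The numbers below are invented purely for illustration.

```python
# Minimal difference-in-differences sketch of the "counterfactual" view of
# VBP success: performance may be improving broadly for reasons unrelated
# to the program, so the effect is measured against a comparison group.
# All values here are invented for illustration.

def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """Difference-in-differences estimate of the VBP program effect."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Exposed providers improved 8 points on a measure; comparison providers
# improved 5 points over the same period, so the estimated program
# effect is roughly 3 points, not 8.
effect = did_estimate(0.70, 0.78, 0.72, 0.77)
```

This captures the distinction drawn above: a sponsor judging success by levels would credit the full 8-point gain, whereas a researcher using the counterfactual would attribute only the roughly 3-point difference to the program.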

The VBP program sponsors on the TEP felt that study designs need to be adapted to fit with the needs for making policy change, such as more rapid but less rigorous initial evaluation cycles to guide decisions about fine-tuning program design. They cited the initial Premier HQID design, which was changed based on less rigorous evidence; the changes were needed to restructure the incentives to achieve more engagement from poorly performing hospitals.

Measures Included in Value-Based Purchasing Programs

Our review of public documents from VBP programs revealed there is a relatively narrow set of measures included in VBP programs that are used as the basis for differential payments. The measures vary somewhat by the health care settings in which they are being deployed as well as by the type of VBP model.*** Historically, P4P programs have focused on quality performance, while the newer VBP models [ACOs and bundled payments] incentivize providers for both cost and quality; however, P4P programs have been evolving over time to include more cost and use measures. P4P programs typically include measures of clinical process and intermediate outcomes [e.g., Healthcare Effectiveness Data and Information Set [HEDIS] or Joint Commission measures], patient safety measures [e.g., surgical infection prevention], utilization [generic prescribing, emergency department use, length of stay, ambulatory care sensitive hospital admissions], patient experience [i.e., Consumer Assessment of Healthcare Providers and Systems survey, Hospital Consumer Assessment of Healthcare Providers and Systems survey], and, to a more limited degree, outcomes [e.g., readmissions, mortality, complications, total cost of care or cost per episode] and structural elements [e.g., HIT adoption or meaningful use of HIT requirements for CMS incentive payments, National Committee for Quality Assurance certification or patient-centered medical home certification, staffing, inspections]. Clinical measures in the ambulatory setting focus heavily on preventive care and management of heart disease and diabetes, while in the hospital setting, the focus has been on heart attack, congestive heart failure [CHF], pneumonia, and surgical infection prevention.

The three ACO program models being tested by CMS use 33 measures, which include HEDIS clinical processes and intermediate outcomes; Consumer Assessment of Healthcare Providers and Systems survey questions on patient experience; all-cause hospital readmission; ambulatory care sensitive hospital admissions; patient safety; and electronic health record [EHR] functionality. Private-sector ACOs are using a similar set of measures, and again the clinical focus has been on three highly prevalent chronic conditions [i.e., heart disease, diabetes, and hypertension], cancer screening, and immunizations. The measures included in bundled payment programs tend to vary by the condition or procedure included in the episode as well as the setting[s] in which care is delivered. Cost measures are most commonly used. In the hospital setting, where most bundled payment programs occur, measures include clinical process, patient safety, readmissions, mortality, length of stay, and total cost of care. Some programs avoid tying physician compensation to outcome measures, so that physicians will not hesitate to treat patients who are more complicated. Little public information is available regarding the measures that are being used in ambulatory care bundled payment programs. Some of the VBP programs we reviewed are signaling that they intend to move to patient-reported outcomes in the next few years, but they are struggling to find market-ready measures that can be readily applied.

The discussions with the TEP highlighted problems with the narrow set of measures typically being used in VBP programs. The TEP estimated that only a small fraction [less than 20 percent] of all care that is delivered by providers is addressed by performance measures in VBP programs. An exception is “total cost of care” contracts [which as of late 2013 apply to only a small number of organizations] that hold providers accountable for the cost of all or most care delivered but which only measure quality performance for a fraction of all care delivered by providers. It was the panelists' opinion that the current, narrow set of measures tends to encourage providers to focus improvement efforts narrowly on the things that are measured [teaching to the test] rather than on wholesale improvement. The TEP also expressed concern that it is hard to demonstrate that VBP programs lead to performance improvements when the incentivized measures are the same set of measures that have been used for nearly a decade [i.e., Joint Commission measures, HEDIS]; many of these measures have less room for improvement and, in some cases, have topped out. Panelists commented that shifting measurement focus to areas where performance is lagging[15] would better address the question of whether VBP can improve the delivery of care in areas not previously the focus of reporting and incentives. With respect to what is measured, the TEP questioned whether VBP programs are addressing areas with the greatest impact on health. While medical care can influence health outcomes, the TEP observed that lifestyle behaviors [diet, exercise, smoking, etc.] contribute roughly 50 percent to determining health outcomes.

Another measurement challenge the TEP flagged was the inability to assess value, both because there is no agreed-upon definition of value and because providers lack cost accounting systems that would enable them to know the true cost of delivering care. Many organizations have struggled with how best to measure and convey value to providers and consumers, highlighting the need for measure development in this area. Although they did not offer a definition of value, the TEP members thought that a first step would be to achieve consensus on an overarching view of what value means; then VBP sponsors could develop value measures in the context of their own programs.

Many members of the TEP thought that a broad and more comprehensive set of measures in VBP programs would create incentives for providers to perform well across the board, rather than focus narrowly on a small number of areas, which promotes “teaching to the test”—that is, focusing only on improving areas that are measured and incentivized by the VBP program and ignoring clinically important areas that are not. However, neither the literature nor the TEP addressed how many measures are reasonable or practical to implement or when the data collection burden on providers becomes excessive. Expanding the set of measures included in VBP programs to more comprehensively assess care delivered and to include infrequently captured measure domains will require the development of new measures and new types of measures. Developing new measures is a time- and resource-intensive activity. Measurement concepts must be defined, specifications developed, data collection processes piloted, and data validated, among other steps. Recognizing this, the TEP recommended that it would be important to develop a framework to guide future directions about what to measure and, in turn, what measures need to be developed. They stated that the framework should address the multiple levels at which behavioral change needs to occur and where interventions should be directed [i.e., health system, institution, and individual provider].

The TEP identified several areas, discussed below, that should be the focus of future measure expansion work in the context of VBP.

Measuring Patient Outcomes and Functional Status

The TEP members agreed that the ultimate objective of VBP is to hold providers accountable for and financially incentivize provider performance primarily based on measures of health outcomes. CMS expressed that it is moving toward increased accountability for outcomes in its hospital and physician VBP programs, and is seeking to find a balance of structure, process, and outcome measures in its programs. An example of this transition to outcomes is illustrated in the hospital VBP program. In the first year of hospital VBP, 70 percent of the measures were process measures, whereas in the second year the percentage drops to 30 percent, as currently outlined in CMS's Notice of Proposed Rulemaking.[16,17] Questions remain about the pace at which CMS should push toward outcomes measurement, the types of outcomes to use, and the consequences of those actions.

There was sentiment among the TEP members that functional status/health status is an important, feasible measure and that inclusion of these types of measures would shift VBP programs in the direction of incentivizing performance on outcomes. TEP members pointed to several health care settings and providers that are already measuring functional status on a regular basis: Medicare ACO programs are paid for reporting patient-reported functional limitations, and CMS collects health status information in nursing homes and home health agencies. The Dartmouth Institute is measuring quality-adjusted life years and has built functional status, which is considered a vital sign, into a provider order for life-sustaining care for patients who are at or near the end of life. Other provider representatives stated they are also measuring health status for some conditions. The TEP suggested that CMS could implement the Patient Reported Outcome Measures [PROMs], as the National Health Service in the United Kingdom has done, to measure the performance of hospitals regarding the functioning of patients undergoing selected procedures.

Measuring Appropriateness of Care

TEP members were supportive of including measures of appropriateness [i.e., overuse] in VBP programs, but panelists recognized that additional work is required to develop the definitions and engage providers in using these measures. They cautioned that without an external impetus, providers have little incentive to use practice guidelines or protocols that might withhold care due to the current fee-for-service and malpractice systems, which instead provide an incentive to increase the use of diagnostics and procedures. The TEP commented that providers under risk-sharing arrangements [e.g., ACO and total cost of care contracts] will be more likely to implement appropriateness guidelines, because the financial incentives they face are aligned with reducing the overuse of services that are not deemed appropriate. Based on direct experience, members of the TEP observed that when implementing appropriateness criteria measures in a health system, it can take years to get provider buy-in on establishing the criteria and being held accountable for performance against them. TEP members suggested that measurement of shared decisionmaking is one of the keys to implementing appropriateness of care. A TEP representative of one health system noted that the system is piloting a process of “patient appropriate order entry” in which the specialist has to attest that he or she held a discussion with the patient about the appropriateness of the care being recommended. Another TEP member recognized the challenge that physicians could face if appropriateness of care metrics are in conflict with patient preferences.[18]

Enhancing the Ability of Electronic Health Records to Support Performance Measurement and Improvement

There was widespread agreement among the TEP members that it is important to incentivize and help providers build the infrastructure for quality improvement. EHRs may facilitate measurement and improvement, but the TEP did not see this happening in the near term. Based on their experiences to date, the panelists expressed concern that most EHRs are far from including a comprehensive set of standardized data in data fields that can readily produce data needed to support the construction of performance measures, in part because providers who are the customers for EHRs are not demanding that EHRs be able to generate this type of information. Meaningful use requirements**** currently require that EHR vendors build functionalities in EHRs to support reporting from a select list of quality measures. This is very different from freeing up the EHR data for use by providers for their own performance monitoring, improvement, and broader performance measurement. For example, some delivery systems have EHRs and registries that give providers alerts at the point of care on the patients' status with respect to a given measure and/or that allow providers to benchmark their performance on measures against their peers. ASPE staff commented that ASPE is working with the Office of the National Coordinator for Health Information Technology, which is the lead federal agency responsible for meaningful use requirements, to make EHRs function more effectively to facilitate automated capture and reporting of quality measures, but this will be a long process.

Types of Incentives

The review of public documents from program sponsors found that the types of financial incentives offered to providers have expanded beyond bonuses that have been commonly used in P4P programs, and which work at the margin, to a stronger set of incentives that more fundamentally alter payment arrangements. Examples include changes to fee schedules, shared savings arrangements [either alone or combined with bonuses or shared risk, in which the ACO loses money if targets for reducing patient costs are not met], and global budgets [i.e., overarching payment for all care delivered to a patient, similar to capitation]. Most of the ACOs reviewed in our environmental scan have shared savings arrangements, and a few have shared risk. VBP programs often use combinations of financial incentives to drive change. The Blue Cross Blue Shield of Massachusetts Alternative Quality Contract [AQC]—an ACO-type arrangement—allows for shared savings and shared risk and offers a bonus payment up to 10 percent above the global budget based on performance on quality measures. The majority of the bundled payment programs for which we were able to identify information are offering shared savings to providers, while others adjust the episode fee based on quality performance.

Although our review of the literature on VBP did not include a review of the use of consumer incentives, the TEP highlighted the importance of working to align incentives for consumers. Panelists commented that creating incentives to drive patients toward higher-performing providers could strengthen the impetus for providers to improve and might be more effective in shifting performance up than current P4P incentives that attempt to influence provider performance at the margin. CMS commented that it is already taking a number of actions in its VBP programs to affect consumer market behavior. For example, if a Medicare Advantage plan is consistently low-performing for three years, beneficiaries are not allowed to enroll online in that plan. Additionally, CMS sends letters to beneficiaries who are enrolled in low-performing Medicare Advantage plans and encourages them to shift to high-performing “five Star” plans; to facilitate plan switching, beneficiaries in low-performing contracts have the option of changing plans any time during the year. Panelists recommended that CMS continue to explore using tools like these to push quality improvement in a strategic way.

Type of Benchmarks/Thresholds

An important design element of any VBP program is the performance benchmarks or thresholds used to determine who will receive an incentive payment. In some cases, these are absolute, fixed benchmarks [e.g., the provider must have at least 90 percent performance on mammography screening], while in other cases benchmarks are relative [e.g., the provider's performance must be in the top 20th percentile of performance], and as a result the absolute score required to reach the percentile cut-point changes from year to year. Some VBP programs reward providers for attaining specific benchmarks, for improving over time, or for a combination of attainment and improvement.
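The distinction between the two benchmark types can be sketched as follows. The 90 percent threshold and the top-20-percent cut mirror the examples above; the percentile logic is a deliberately simplified assumption:

```python
def meets_absolute(score, threshold=0.90):
    """Absolute benchmark: a fixed cut-point known in advance
    (e.g., at least 90 percent on mammography screening)."""
    return score >= threshold

def meets_relative(score, peer_scores, top_fraction=0.20):
    """Relative benchmark: only the top fraction of peers qualifies, so the
    absolute score needed to clear the cut-point shifts with the peer
    distribution each year (simplified percentile logic)."""
    cutoff_index = int((1 - top_fraction) * len(peer_scores))
    cutoff = sorted(peer_scores)[cutoff_index]
    return score >= cutoff

peers = [0.70, 0.75, 0.80, 0.82, 0.85, 0.88, 0.90, 0.92, 0.95, 0.99]
# 0.92 clears the fixed 90 percent threshold...
assert meets_absolute(0.92) and not meets_absolute(0.88)
# ...but the relative cut-point for this cohort is 0.95, so 0.92 misses it.
assert meets_relative(0.95, peers) and not meets_relative(0.92, peers)
```

The example illustrates why providers report the uncertainty discussed below: under the relative rule, a provider scoring 0.92 cannot know until the full distribution is observed whether that score is "good enough."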

We were able to find information about the types of benchmarks used for only a third of the VBP programs in our environmental scan, and no publicly available information about the benchmarks used by bundled payment programs. Among P4P programs, the most common approach was an absolute threshold only, followed by relative thresholds only, which may be based on the performance of peers in the market, the state, or the nation. Other programs, such as the CMS Hospital VBP program, offer two paths to earning incentives: attainment against an absolute threshold or improvement over time.

Very little information was publicly available about the types of benchmarks used in ACO models, as these are developed in the context of private negotiations between payers and providers. The exception was the three CMS ACO demonstration models. In its shared savings programs, CMS establishes the cost benchmark for each ACO's agreement period using expenditure data from the three prior years. Quality benchmarks are based on national percentile rankings from the prior year, and points are assigned on a sliding scale based on the ACO's performance. For 2013, the Pioneer ACO program measures and rewards improvement on the quality measures. The Physician Group Practice demonstration, the precursor ACO demonstration that CMS ran, used absolute thresholds for quality measures.
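A sliding-scale scoring rule of the kind described here can be sketched as follows. The attainment floor and point values are hypothetical, chosen only to show the shape of such a scale, and are not CMS's actual formula:

```python
def quality_points(percentile_rank, max_points=2.0, floor=0.30):
    """Award measure points in proportion to an ACO's national percentile
    rank, with no points below a minimum attainment floor. The floor and
    max_points values are illustrative assumptions only."""
    if percentile_rank < floor:
        return 0.0
    return max_points * (percentile_rank - floor) / (1.0 - floor)

assert quality_points(0.20) == 0.0               # below the floor: no points
assert quality_points(1.00) == 2.0               # top of the distribution: full points
assert abs(quality_points(0.65) - 1.0) < 1e-9    # midway between floor and top: half
```

Unlike the all-or-nothing thresholds discussed above, a sliding scale pays partial credit across the performance distribution, which softens the cliff effects described in the next section.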

The literature highlights some of the issues associated with the use of different types of benchmarks. Providers report disliking relative thresholds[19, 20] for several reasons. First, providers do not know ahead of time what actual level of performance is required to obtain the incentive payment, creating much uncertainty about whether their performance is "good enough." Second, when topped-out measures are included in the VBP program, providers may have very high performance that does not meet the necessary threshold to receive the incentive yet is not meaningfully different from the performance of providers that do receive the incentive payment. For example, the initial design of the Premier HQID in Phase 1 of the program's implementation paid only hospitals that were in the top 20th percentile of performance. Performance rates for a large proportion of the hospitals hovered around 99 percent on a number of the measures, and which hospitals received the incentive payment was determined by differences in performance at the second decimal point. In response to this problem, CMS changed the incentive structure in Phase 2 of the Premier HQID to reward above-average achievement and improvement.

A relative incentive structure can promote a "race to the top," creating perverse incentives for providers to allocate resources to improvement on a measure that may not yield the greatest clinical benefit and that may lead to overtreatment of patients. Achieving 100 percent performance on a measure also may not be appropriate and may itself lead to overtreatment. No matter how well a performance measure is constructed, and despite attempts to exclude from the denominator patients who should be excluded, it is unlikely that any process measure will be applicable to 100 percent of the population. In practice, there are often sound reasons why some small percentage of patients does not receive recommended processes of care, including patient preferences regarding treatment, contraindications to recommended therapy [e.g., allergies or intolerance of medications], prior rare side effects, and the clinical challenges of balancing treatment of multiple clinical conditions and interactions between medications. The patients in the upper tail of the distribution typically differ from patients in the other 95 percent of the distribution in ways that performance measurement is not very good at systematically capturing through exclusion criteria. In these cases, not providing the recommended care is not an error in care. In the UK Quality Outcomes Framework P4P program, where providers are allowed to exclude patients from the measure calculation [i.e., exception reporting], a median of 5.3 percent of patients were excluded from performance measure calculations. Exception reporting occurred most often for performance measures related to providing treatments and achieving target levels of intermediate outcomes.[21] U.S.-based VBP programs do not typically allow providers to exclude patients from reporting.

TEP members noted that while providers prefer absolute attainment thresholds, some payers express concern that this approach removes the motivation for providers to continue improving once the threshold has been attained. Paying all who achieve an absolute attainment target also creates budgeting challenges for payers, who cannot estimate in advance how many providers they will need to pay; if the payer sets a fixed incentive pool, the more providers who succeed, the smaller the incentive payment per provider. Some VBP sponsors have set multiple absolute targets along a continuum to motivate improvement at all levels of performance, including the top end of the performance distribution.
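The budgeting tension the panelists describe is simple division: with a fixed incentive pool split evenly among qualifiers, each additional provider that clears the absolute threshold dilutes every payment. A minimal sketch, with purely illustrative numbers:

```python
def per_provider_payment(pool, n_qualifiers):
    """Split a fixed incentive pool evenly among qualifying providers."""
    return pool / n_qualifiers if n_qualifiers else 0.0

# Illustrative: a $1M pool pays $100K each when 10 providers clear the
# absolute threshold, but only $25K each when 40 do.
assert per_provider_payment(1_000_000, 10) == 100_000.0
assert per_provider_payment(1_000_000, 40) == 25_000.0
```

This is why a payer with a fixed pool cannot guarantee a payment amount in advance under an absolute threshold, whereas a relative threshold fixes the number of winners but not the score needed to win.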

Performance of Value-Based Purchasing Programs

VBP program sponsors and evaluators have primarily assessed whether improvements have occurred in the measures that were incentivized through VBP. Efforts to disentangle the VBP effect from other interventions designed to improve the delivery of health care locally and nationally [e.g., investments in HIT, enhanced quality improvement, and public reporting] have proven more challenging to study, because the natural experiments typically lack robust comparison groups. Furthermore, contextual factors and how they may contribute to any observed impacts are rarely considered.

The TEP highlighted some of the challenges with evaluations conducted over the past decade: [1] the measures included in a VBP program are often also included in national performance measurement and public reporting programs [e.g., CMS] and in VBP programs run by other sponsors, making it difficult to tease out the effect of any individual VBP program; [2] the presence of other incentives [e.g., public reporting/transparency of performance results] makes it difficult to isolate the effects of the financial incentives on incentivized measures; [3] there is usually no comparison population when a VBP program is implemented statewide or nationally; [4] the size of payment incentives is often small; [5] VBP programs typically have used the same core measures [i.e., HEDIS, Joint Commission measures] that have been used for more than a decade and are largely "topped out"; and [6] there is a substantial lag in the data required to assess impact, such as data on avoided admissions and readmissions.

Clinical Quality

Pay-for-Performance

We identified 49 studies that examined the effect of P4P on process and intermediate outcome measures: 37 studies examined the effect of P4P on process measures for physicians or physician groups;[5, 8, 10, 22–52] 11 studies examined the effect of P4P on process measures in the hospital setting;[53–60] and a single study examined the effect of P4P on process measures in other care settings.[61] The published studies have focused on assessing a few large P4P interventions [e.g., the Premier demonstration, the Physician Group Practice demonstration, the Integrated Healthcare Association P4P program, the Blue Cross Hawaii P4P program, the Massachusetts multi-plan P4P program, the UK Quality Outcomes Framework P4P program, and more recently the Blue Cross Blue Shield of Massachusetts AQC] and a number of very small-scale incentive experiments that were of short duration.

Overall, the results of the studies were mixed, and studies with stronger methodological designs were less likely to identify significant improvements associated with the P4P programs. Any identified effects were relatively small. Studies with weaker study designs mostly found that P4P was significantly associated with higher levels of quality, and many reported substantial effect sizes.

Accountable Care Organizations

We identified six evaluations [of five distinct ACO programs] examining the effect on quality of care associated with implementing an ACO or ACO-like model [e.g., the Blue Cross Blue Shield of Massachusetts AQC, which is a global budget total cost of care contract, and the CMS Physician Group Practice demonstration, which was a precursor to the CMS ACO demonstrations]. Five of the studies investigated the effect of the ACO on a small number of process-of-care measures[62–66] and showed greater improvements than controls on some but not all of the measures. In addition to these evaluations, CMS issued a press release on the early experiences of the Medicare Pioneer ACO on July 16, 2013.[67] In the first performance year, the Pioneer ACOs had higher performance overall than the Medicare fee-for-service beneficiary comparison population on the 15 quality of care measures reported, but it was not reported whether the Pioneer ACOs had greater improvements or just higher baseline performance. At this stage, it is difficult to discern the effects of ACOs on quality, given the newness of the ACO model and the short period of implementation.

Bundled Payments

Of the three studies of bundled payments that include value-based design elements [cost and quality components], only one examined the effect of bundled payments on process measures. That study found that adherence to 40 clinical process measures increased from 59 percent to 100 percent.[68] However, it was conducted in a single integrated health system with unique characteristics that make generalizing the findings to other providers difficult. A recent systematic review of the bundled payment literature showed inconsistent effects on quality measures associated with implementing bundled payment arrangements. Most of the bundled payment programs reviewed did not include quality elements in the incentive formula; in these instances, the evaluators sought to determine whether the application of bundled payments resulted in undesired effects on quality.[1]

Outcomes

We reviewed 21 studies that evaluated the effect of P4P on outcomes in physician groups [12], hospitals [6], and other settings [3]. In the physician practice setting, the studies generally focused on a small number of intermediate diabetes outcomes and found mixed results. Of the studies we rated as fair- and poor-quality in terms of their design, three[29, 33, 46] found between 2 and 22 percent improvement in the percentage of patients with HbA1c control, while another study found no effect.[27] There was only a single study rated as good-quality,[69] and it found that changes in diabetes intermediate outcome measures [e.g., percent of patients with HbA1c and lipid control] did not differ significantly from the comparison group. Four studies focused on other types of health outcome measures. One good-quality study[70] found that a P4P program focused on prenatal care for pregnant members of a union health plan led to a reduction in admissions to the neonatal intensive care unit [NICU] but no reduction in low birth weight. Three fair- and poor-quality studies[24, 39, 50] found no effect on mortality, readmission, or incidence of major health events [e.g., stroke or heart attack] but did find a slight reduction in initial hospitalizations.

The studies in the hospital setting focused primarily on measuring the effects on mortality. Three of the studies that focused on outcomes were deemed to be of good methodological quality and found mixed results. Glickman[53] found no evidence that in-hospital mortality improvements were incrementally greater at P4P hospitals in the CMS Premier HQID program, while Ryan[71] found no evidence that the HQID had a significant effect on risk-adjusted 30-day mortality for acute myocardial infarction, congestive heart failure [CHF], pneumonia, or coronary artery bypass graft [CABG]. Sutton et al.[72] found that risk-adjusted mortality for the conditions included in the P4P program decreased by 1.3 percent compared with controls in a study evaluating a UK program modeled after the CMS HQID. Another study, by Jha et al.,[73] which we deemed to be of fair quality, found no differences in a composite measure of 30-day mortality between hospitals in the HQID demonstration and hospitals exposed to pay-for-reporting. Mortality declined similarly across the two groups of hospitals [0.04 percent per quarter], and mortality rates were similar after six years. When considering the results of this study, it is important to note that the hospitals exposed to the pay-for-reporting incentive increased their performance on the process measures similarly to the P4P hospitals, and both sets of hospitals topped out performance on these measures, leaving no variation in performance from which to detect a differential effect.

One study,[74] which we rated as good, evaluated five states' Medicaid P4P programs in nursing homes and found that three of six outcome measures [the percentage of residents being physically restrained, in moderate to severe pain, and having developed pressure sores] improved by a negligible amount, between 0.3 and 0.5 percent, one year after P4P implementation. Performance on the other targeted quality measures either did not change or worsened. Based on this study, the effects of P4P in the nursing home setting remain unclear. We also reviewed two studies that we deemed to be of fair quality. Hittle et al.[75] found that only two measures [improvement in pain interfering with activity and improvement in urinary incontinence], both non-incentivized, showed significant differences between treatment and control home health agencies across one intervention year; no differences were found in the incentivized measures. Shen[76] found that P4P was associated with a reduction in the proportion of clients in substance abuse clinics classified as most severely ill for three years post-intervention.

Among the studies evaluating ACOs, there is limited evidence that ACOs may reduce hospital readmission rates.[62, 63] Only one bundled payment study investigated the effect on health outcomes, and it found no effect.[68]

Costs

Pay-for-Performance

Few studies have investigated the impact of P4P on costs. The studies with the strongest designs report mixed effects on costs in the physician or physician group setting.[40, 70] Two studies with weak designs[3, 39] found evidence of significant cost savings and a positive return on investment. We found only two studies that specifically investigated changes in costs in the hospital setting; both were based on the HQID, and neither found significant effects on hospital costs, revenues, margins, or Medicare payments.[77, 78]

Accountable Care Organizations

All of the studies we reviewed attribute various degrees of cost savings to the shared savings payment model, but not all of the individual ACOs were able to generate statistically significant savings relative to controls.[62–66] CMS also reported that costs for Pioneer ACO beneficiaries increased 0.3 percent in 2012, compared with 0.8 percent growth for similar Medicare fee-for-service beneficiaries. While 13 of the 32 ACOs shared savings with CMS, two Pioneer ACOs had shared losses. Two Pioneer ACOs were leaving the ACO program, and an additional seven were switching to the Medicare Shared Savings Program, which involves less risk to providers. Because there were only six studies of four programs, the studies were of short duration, and several had poor or no comparison groups, the evidence is insufficient to draw conclusions about the impact of ACO payment structures on costs.

Bundled Payments

Both of the studies investigating the impact of bundled payments on costs identified reductions. One found a reduction in hospital charges of around five percent,[68] while the other found a reduction in costs per case of roughly $2,000 over a two-year period.[79] The systematic review that documented the impact of implementing 19 bundled payment programs[1] found that all programs showed declines of 10 percent or less in spending and utilization.

Unintended Effects

We examined undesired behaviors [often referred to as unintended consequences] and spillover effects to assess any unintended effects from these programs. Undesired effects include providers gaming the data used to generate scores, ignoring other clinically important areas that are not measured and incentivized by the P4P program, avoiding sicker or more challenging patients, providing care that is not clinically recommended, and overtreating patients. Other undesired effects are an increase in disparities in treatment or outcomes among patients and harm to providers who serve more challenging patient populations. Spillover effects occur when changes made to improve areas measured by VBP programs extend to areas not included in those programs. The literature on undesired and spillover effects was sparse; few studies have examined the main effects of VBP interventions, let alone their side effects.

Pay-for-Performance

We identified 21 articles that examined undesired behaviors and spillover effects in P4P programs. Most of the published evidence regarding undesired effects related to the application of P4P shows either small or no effects. However, recent studies in the Veterans Administration found evidence of overtreatment of patients with hypertension and diabetes associated with the use of intermediate outcome measures that use thresholds.[80–82] These authors have called for moving from the current class of dichotomous target measures [i.e., met or didn't meet a threshold such as HbA1c
