Healthcare organizations must be economically viable to implement and maintain policies. Cost-benefit analysis measures viability and highlights when or where changes need to be made to ensure financial sustainability. For Medical City-Plano, examine the financial policy within that organization that supports or negates sustainability (i.e., cost, benefit, and outcome).
The Policy Analysis Process: Evaluation of Economic Viability
Policy analysis involves the allocation of scarce resources. This chapter focuses on economic and financial aspects of the allocation process. Both affect the economic viability of a proposed policy alternative, which likely turns on the following questions:
• How much will it cost?
• What value will we be getting for the money?
• How does that value compare with other alternatives under consideration?
• If it is something we want to do, how will we pay for it?
To address these questions, the analysis team needs to undertake the tasks outlined in Figure 11-1. The team also needs to consider the points of view or interests of whoever commissioned the study. However, a fine line exists between pleasing the “customer” and maintaining a group’s professional integrity. One way to address this is by “changing hats.” The analysts can say, “When we put on Hat A, we get X, but when we put on Hat B, we get Y.” All of us wear many hats in health care, including patient, payer, parent, spouse, professional, and citizen.
Figure 11-1 Steps in the cost-benefit analysis (CBA)/cost-effectiveness analysis (CEA) process.
The team can then proceed to subsequent steps. The first is to define the health issue that is to be addressed, including population, diagnosis, incidence, and impact, and then study the relevant intervention technologies. Then the group must agree on the effectiveness of the current and proposed interventions and, if necessary, conduct research to establish an acceptable range of effectiveness values for the analysis.
11.1 DEFINING THE HEALTH CARE PROCESS INVOLVED
The team should conduct a detailed process analysis to ensure agreement on how the intervention is delivered, especially if there is little field experience with it. It is often well worth the effort to step back and visualize how a new or modified process will work and how its detailed implementation will go forward. Otherwise, the team may be making a stab in the dark about the resources required.
The feedback arrow in Figure 11-1 indicates that sometimes the process analysis produces a revised estimate of the expected effectiveness as the team learns details about possible barriers to adoption and implementation. Many more feedback loops could be added because at any point in time the team can uncover a need to revise its earlier estimates.
11.2 AGREEING ON ITS EFFECTIVENESS
Clear evidence on the effectiveness of a proposed policy is a rarity. Much evidence is contestable even as to its science. Clinical trials may be limited, or in a few cases may not even be feasible. The populations involved in clinical trials and demonstrations may have been small or somewhat different from the one that will be affected by the policy. Often professional groups, having differing interests, cite studies that support their viewpoint and ignore those that do not. For example, in an analysis of folic acid supplementation, those who favored it cited its effects on neural tube defects (NTDs), whereas those opposed cited its ability to mask other metabolic deficiencies.
Where the evidence is unclear or disputed, one way to proceed is through sensitivity analysis. For example, after an analysis is done using the efficacy estimate that the analysis team thinks most valid, the calculations are repeated with alternative efficacy values to determine the range of values over which the conclusion holds. Many times the conclusion is not affected despite the heat generated by the differing estimates, but where the solution is sensitive to the choice of high- or low-end parameter values, it is necessary to make the decision makers aware of the applicable range. Perhaps they will authorize a study to narrow that range further. Sensitivity analysis is relevant to other variables besides efficacy, including costs, the population affected, and inflation rates.
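To make the mechanics concrete, the brief sketch below (in Python) repeats a benefit–cost calculation across a range of efficacy values; the population size, program cost, and dollar benefit per case avoided are hypothetical numbers chosen only for illustration, not values from any study cited here.

```python
# One-way sensitivity analysis on efficacy (all numbers hypothetical).
def benefit_cost_ratio(efficacy, population, benefit_per_case_avoided, program_cost):
    """Benefit-cost ratio for a prevention program at a given efficacy."""
    cases_avoided = efficacy * population
    total_benefit = cases_avoided * benefit_per_case_avoided
    return total_benefit / program_cost

# Base-case parameters (assumed for illustration only).
population = 10_000            # people reached by the program
benefit_per_case = 25_000.0    # dollars saved per case avoided
program_cost = 4_000_000.0     # total program cost in dollars

# Repeat the calculation over the disputed efficacy range.
for efficacy in [0.005, 0.01, 0.02, 0.03, 0.05]:
    ratio = benefit_cost_ratio(efficacy, population, benefit_per_case, program_cost)
    verdict = "favorable" if ratio > 1.0 else "unfavorable"
    print(f"efficacy={efficacy:.3f}  B/C={ratio:.2f}  -> {verdict}")
```

The output makes clear at which efficacy value the conclusion flips, which is exactly the range decision makers need to see.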
The relevant change in effectiveness associated with an intervention is the marginal change. For example, in 2005, the Washington State Legislature directed the Washington State Institute for Public Policy to report on the benefits and costs of “evidence-based” approaches to the treatment of alcoholism, drug addiction, and mental illness. The institute performed a meta-analysis based on a review of 206 studies in the literature that met a specific set of criteria for quality of experimental design and measurement, such as use of a control group. This was an unusually complex analysis, because the legislature specifically mandated the study of the effects of treating individuals with substance abuse disorders and/or mental illness disorders in terms of their fiscal impact and “the long-run effects on statewide education, crime, child abuse and neglect, substance abuse, and economic outcomes.” Systems were already in place for dealing with these disorders; thus, the researchers calculated the benefits based on marginal changes in costs and outcomes from expanding the existing services to provide evidence-based and consensus-based services to those not yet served. Few mental illnesses are currently cured, and many who stop abusing drugs and alcohol relapse over time. The study team had to estimate the reduced incidence or severity from implementing best practices. It estimated the number of people in the state with each disorder and subtracted out the number already receiving services. It then assumed that about 50% of the untreated populations would accept services if they were made available. The analysts did not try to estimate the impact of having existing services move from their current modes of operation to the evidence-based approaches. Because the available studies were all short-term studies, the institute’s staff estimated a “decay rate” for each disorder to represent the loss of participants and program impact over time, and it also included a factor in the modeling for those individuals who would recover on their own without treatment.
The institute’s meta-analysis of the suitable studies concluded that the expansion of services would achieve a 15–22% reduction in incidence or severity of these disorders, resulting in a savings of $3.77 for each additional $1 invested. Taxpayers would see direct savings of $2.05 per additional $1 invested, or $416 million per year in net payer benefits, if fully implemented (Aos, Mayfield, & Yen, 2006).
11.3 AGREEING IN DETAIL ON THE DELIVERY SYSTEM INVOLVED
Analysis team members may have differing assumptions about how the intervention is to be delivered in the field. Reaching a common description of that process is an important early task. Discussing the process and drawing up a detailed process map are ways to get at that reality. Doing so leads directly to a description of the resources required. Team members may choose to revise their estimated effectiveness after the process is better defined and they better understand the problems of implementing the process in the field.
11.4 SELECTING THE ANALYTICAL APPROACH
A number of types of economic analysis can be performed. One key question will be, “What impact will a proposal have on the supply and demand for services?” Payment, access, and quality issues affect the perceived price and demand for services as well as their costs. Given an analysis of changing demand and supply, the team must decide how to analyze the most promising approaches. Rychlik (2002) suggested a hierarchy of analytical approaches for comparison and decision making:
• Establish the cost (burden) of the illness, usually including quality-of-life impacts of the problem.
• If the assessment shows little difference in impact from the relevant interventions or between the intervention and the status quo ante, conduct a cost-minimization study.
• If the comparison is among similar types of outcomes but there are significant differences in benefits and costs, conduct a cost-effectiveness study.
• If there are significant differences among the programs being considered, such that a common metric is necessary for benefits and costs, do a cost–benefit study.
• If quality of life after survival is an important parameter, then consider a cost-utility study, in which differing quality measures are compared using market research techniques such as conjoint analysis.
Sometimes public agencies focus on whether a proposed outlay is cost neutral or whether it is cost-effective. To be cost neutral, the proposal must not increase the overall costs to the agency. To be cost-effective, the proposal must be the least costly method for reaching a predetermined
level of total benefits to the public. In the medical literature, however, the term cost-effectiveness analysis (CEA) has acquired its own specialized meaning as an analysis in which the benefits are measured in nonmonetary units, such as lives saved or quality-adjusted life-years (QALYs). Such analyses, however, do not allow for comparisons of proposals that express outcomes in different units.
At higher policy levels, health care investments must be compared with other public investments, including those outside the health sector. When a common metric must be used to value both costs and benefits of the full set of proposals being analyzed, it almost always turns out to be dollars. This is known as cost–benefit analysis (CBA). Private-sector organizations use the same techniques but often with a different terminology, employing terms such as return on investment (ROI) and internal rate of return (IRR) when they evaluate and compare investment opportunities.
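The following sketch shows how these private- and public-sector measures flow from the same discounted streams of benefits and costs once everything is expressed in dollars; the cash flows and the 3% discount rate are assumptions made up for the example.

```python
# Discounted cost-benefit comparison (all figures hypothetical).
def present_value(flows, rate):
    """Discount a list of annual amounts (year 0 first) to present value."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

annual_costs    = [500_000, 100_000, 100_000, 100_000, 100_000]   # dollars per year
annual_benefits = [0, 220_000, 240_000, 260_000, 280_000]         # dollars per year
discount_rate = 0.03                                               # assumed real discount rate

pv_costs = present_value(annual_costs, discount_rate)
pv_benefits = present_value(annual_benefits, discount_rate)

npv = pv_benefits - pv_costs          # net present value
bc_ratio = pv_benefits / pv_costs     # benefit-cost ratio
roi = npv / pv_costs                  # one simple return-on-investment convention

print(f"PV costs    = {pv_costs:,.0f}")
print(f"PV benefits = {pv_benefits:,.0f}")
print(f"NPV         = {npv:,.0f}")
print(f"B/C ratio   = {bc_ratio:.2f}")
print(f"ROI         = {roi:.1%}")
```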
Hacker (1997) suggested that concerns about health care as an economic marketplace emerged forcefully in the 1970s following the introduction of Medicare and Medicaid and the resulting cost inflation. With the nation’s access problems significantly addressed, government and industry turned toward issues of efficiency and effectiveness. This concern for efficiency and effectiveness in the federal government extended well beyond the health sector. The Office of Management and Budget issued Circular A-94, currently titled Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs, in 1972 (OMB, 1972). It was and still is intended for use across most federal government agencies and programs.
Since then there has been an explosion of studies and methodologies coming out of the subfield of pharmacoeconomics. These have been developed to meet the expectations of managed care organizations and government regulators for expanded justification for authorizing use of yet another new drug or device. Safety alone is less and less sufficient to satisfy those expectations.
11.5 BASIC TOOLS
The basic tools of economic analysis, including supply and demand analysis and benefit and cost analysis, are frequently bypassed by health care professionals because of measurement difficulties. These measurement problems pertain primarily to demand and benefit estimation, but also to costs.
Supply and Demand Concepts
Much of the health policy literature is concerned with aligning incentives properly through payment mechanisms, such as copayments, withholds,
discounts, and reimbursement rates. All of these really refer to changes in perceived prices and the effects these perceptions will have on the supply and demand for services. What really complicates health care is that some demand is generated by the consumer and some by the consumer’s agents, the health professionals.
The policy analysis team will have to estimate the impact of those perceived price changes on the activity levels that they can expect to see in the service system. These estimates are not easy, even where a program is budget constrained. Take, for example, the situation in which a program is budget constrained and a budget increase is proposed. Figure 11-2 illustrates a simple demand analysis relating to a budget constraint:
Given: Initial budget = B0 = C0 × Q0,
where initial unit cost = C0 and clients served = Q0.
Increased budget = B1, where Q1 > Q0 and B1 > B0.
In panel A, the new, larger budget is fully consumed, and no cost change is assumed (B1 = C0 × Q1), because demand D0 still exceeds Q1.
In panel B, the budget is increased to B2, but the enhanced availability of services exceeds the demand (D2) at the current perceived cost (Q1 > D2 > Q0), and the budget is underexpended (C0 × D2 < B2).
In panel C, the program management proposes responding by making the services more accessible (available more conveniently at more sites). This reduces the perceived cost of the service to the clients, and demand increases (D3 > D2), but at an added cost, which increases the average cost (C2 > C0). Given this situation, the program management and the policy analysts must make new estimates (C2, D3) and see whether the demand will be greater than, less than, or approximately equal to the new budgeted level of activity, B3/C2. This will again determine the programmatic resources required.
Yes, this is complicated, but it is the way life goes.
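As a rough illustration of the arithmetic behind the three panels (with every budget, unit cost, and demand figure invented for the example), the check in each case is whether expected demand lies above, below, or near the activity level the budget can fund at the prevailing average cost:

```python
# Budget-versus-demand check for the three panels (hypothetical numbers).
def budgeted_activity(budget, unit_cost):
    """Clients that a budget can fund at a given average cost per client."""
    return budget / unit_cost

def compare(label, budget, unit_cost, expected_demand):
    capacity = budgeted_activity(budget, unit_cost)
    if expected_demand > capacity:
        note = "demand exceeds funded capacity (budget fully consumed)"
    elif expected_demand < capacity:
        note = "budget underexpended (capacity exceeds demand)"
    else:
        note = "demand approximately equals funded capacity"
    print(f"{label}: capacity={capacity:,.0f} clients, demand={expected_demand:,.0f} -> {note}")

# Panel A: bigger budget, same unit cost, demand still above the funded level.
compare("Panel A", budget=1_100_000, unit_cost=100, expected_demand=12_000)
# Panel B: even bigger budget, but demand falls short at the current perceived cost.
compare("Panel B", budget=1_400_000, unit_cost=100, expected_demand=12_500)
# Panel C: easier access raises demand but also raises the average cost.
compare("Panel C", budget=1_400_000, unit_cost=115, expected_demand=13_000)
```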
If the prices charged are modified by a proposal, then the analyst must investigate supply and demand relationships further, including the following:
• The rate of change in demand with a given change in price (price elasticity)
• The rate of change in supply with a given change in price
Where data are available, these relationships can be estimated through regression analysis.
Figure 11-2 Supply and demand over time in a constrained budget setting.
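Where utilization and price data exist, one conventional way to estimate price elasticity is a log–log regression, as in the sketch below; the observations and the simple two-variable specification are assumptions for illustration only.

```python
import numpy as np

# Hypothetical observations: out-of-pocket price and visits per 1,000 enrollees.
price  = np.array([10, 15, 20, 25, 30, 35, 40], dtype=float)
visits = np.array([520, 470, 430, 405, 380, 360, 345], dtype=float)

# In a log-log specification, the fitted slope is the price elasticity of demand.
slope, intercept = np.polyfit(np.log(price), np.log(visits), 1)
print(f"Estimated price elasticity of demand: {slope:.2f}")
# A value of, say, -0.3 would mean a 10% price increase cuts utilization about 3%.
```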
Utilities and Preferences
Health policy analysts use the terms utility and preference somewhat interchangeably. The latter sounds admittedly subjective; however, public policy has no objective tool for measuring utility or for comparing utility across individuals. This creates an enormous challenge. Without the capacity to measure welfare, how do we maximize it? (Wheelan, 2011).
We also know that in health care people’s utilities differ based on their health status. One has to be very careful to articulate whether the utilities asserted come from the general population or from the population affected by the relevant diagnosis. It is also important to know what stage of the disease progression they are in, if they have the disease.
A number of methods are used to try to get at preferences and utilities, including:
• Direct measures:
• Standard gamble: What probability of success would be necessary to get you to choose a proposed outcome over the status quo or some other alternative(s)?
• Time trade-off: How many months of life would you be willing to give up to achieve the more desirable outcome?
• Rating scale (visual analogue scale): Pick a spot on a line from 0 (worst possible outcome) to 1 (best possible outcome) that represents each alternative being considered.
• Indirect measures:
• Generic utility instruments: Use value weightings set by the general public with off-the-shelf questionnaires. For example, in the U.K., NICE uses the EQ-5D instrument; in the United States, the FDA seems to favor the SF-6D. The Health Utilities Index (HUI), the Quality of Well-Being (QWB) scale, and the 15 dimension (15D) instrument also are used.
• Disease-specific instruments: The attributes and weights are tailored to the diagnosis and observed attributes.
• Mapping the attributes from a validated disease-specific instrument onto a generic instrument.
Each approach has its strengths and weaknesses. For example, the standard gamble approach is sensitive to a person’s risk tolerance. In addition, each presents its own problems in terms of comprehension and representation (Tolley, 2009).
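The scoring arithmetic behind the direct measures is simple enough to sketch. In the example below, the respondent answers are hypothetical, and the formulas are the standard textbook scorings: the time trade-off utility is the full-health years accepted divided by the years in the health state, and the standard gamble utility is the indifference probability itself.

```python
# Direct utility elicitation scorings (respondent answers are hypothetical).

def time_trade_off(years_in_state, years_in_full_health_accepted):
    """TTO: indifferent between t years in the state and x years in full health -> u = x / t."""
    return years_in_full_health_accepted / years_in_state

def standard_gamble(indifference_probability):
    """SG: the success probability at which the gamble equals the sure state -> u = p."""
    return indifference_probability

def rating_scale(mark, worst=0.0, best=1.0):
    """VAS: position of the mark rescaled onto the 0-1 interval."""
    return (mark - worst) / (best - worst)

# Example: a respondent considering life with a chronic condition.
print(f"TTO utility: {time_trade_off(10, 8):.2f}")    # would give up 2 of 10 years
print(f"SG utility:  {standard_gamble(0.85):.2f}")    # accepts a 15% risk of failure
print(f"VAS utility: {rating_scale(0.7):.2f}")        # marks 0.7 on the 0-1 line
```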
Valuing Costs, Benefits, and Outcomes
Analysts crossing over from other sectors will find some very specific problems in applying their usual evaluation methods in health care. Effective cost and benefit analysis in health care often requires understanding the nomenclature and diagnostic coding systems used in this sector. Furthermore, market failure in this industry often makes it necessary to measure separately the consumer satisfaction and benefit/cost impacts of specific technological alternatives. Analysts can also expect to encounter a lack of cooperation because of fear of loss of autonomy, accounting systems biased toward revenue rather than cost finding, high levels of inherent variability, compartmentalization of information systems, and poorly aligned reward systems.
The role of benefit–cost analysis in health care was investigated thoroughly in the 1960s and 1970s (Baker, Sheldon, & McLaughlin, 1970; Bunker, Barnes, & Mosteller, 1979; Office of Technology Assessment, 1980; Weinstein & Stason, 1977). During that period, the problems of benefit measurement seemed so insurmountable that most health care professionals doing analysis preferred to rely on CEA. Weinstein and Stason (1977) described the difference as follows:
The key distinction is that a benefit–cost analysis must value all outcomes in economic (e.g., dollar) terms, including lives or years of life and morbidity, whereas a cost-effectiveness analysis serves to place priorities on alternative expenditures without requiring that the dollar value of life and health be presented. (p. 717)
The basic problem is not one of using dollars, however, but one of expressing all of the relevant factors in any single metric. The alternative approach is to express the outcomes as a vector, but because one alternative vector seldom dominates the other, one must still deal with trade-offs among variables. The vector representation gets one into all the complexities of multidimensional scaling.
Further problems arise from the following:
• Determining the relevant costs, especially supply and demand estimation and resulting price levels.
• Incorporating values of nonmedical outcomes, including the way benefits and costs are distributed.
Pauly (1995) suggested that CBA and CEA are used because the better normative measure, willingness to pay, is hard to assess in the real world. He defined a personal benefit as an informed individual’s willingness to pay for a program, whereas a programmatic benefit is the sum of the willingness to pay of all informed persons affected by the program, including those making altruistic contributions but not directly affected by the service process. In a few situations, willingness to pay can be imputed from what individuals are paying for insurance against an event or to mitigate the risk of an event, such as installing seat belts or highway
crash barriers. As Pauly (1995) noted, “Such concepts as addition to measured gross national product associated with a health program, the additional wages to beneficiaries and providers, or addition from investment now and in the future have validity only to the extent that they proxy willingness to pay” (p. 103).
WHOSE WILLINGNESS TO PAY?
A study to determine the need for a third London airport, as well as its location, found that a preferred location would displace a 12th-century Norman church that was still in use. One group contended that the willingness-to-pay valuation of the church should be based on the value the current parishioners were insuring the building for against fire. A second group, however, argued that the Normans had incurred an opportunity cost for the last 8 centuries for the £100 that they invested to build it. They had foregone the opportunity to loan the money to the local usurers at a reasonable rate of interest; therefore, the church should be valued at the willingness to pay of the original parishioners, which would lead to a valuation of £100 plus compound interest for more than 800 years. Using this calculation, it would be worth more than the construction cost of the entire new airport.
A proxy is something that stands in for the real thing. In one analysis, for example, the costs of the early loss of a mother because of breast cancer were estimated by the value of replacement family care services plus a proxy for the emotional losses. The proxy chosen was the estimated cost of the amount of psychotherapy used by those who lost a mother early in life (Bunker et al., 1979).
Indicators also are used to substitute for direct measures. “Good indicators are easily measurable and highly correlated with the underlying variable of interest, which is usually impossible to measure” (Wheelan, 2011,
p. 145). We cannot agree whether a population is healthy or not, but we often use measures to indicate success or failure, such as visits to the emergency room or hospitalizations.
Pauly (1995) opposed two other approaches often cited in the literature: (1) the human capital approach, which emphasizes the economic cost to society, such as a worker’s daily wage multiplied by the number of work days lost, or some other measure of lost productivity (assuming full employment), or (2) the friction cost approach, which measures the loss in productivity until the system resumes full productivity with a trained and experienced new worker or with the ill worker restored to full capacity.
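A quick worked comparison of these two approaches, using an invented wage, absence, and friction period, shows why they can produce very different totals: the human capital method counts every lost work day, whereas the friction cost method counts only the period until productivity is restored.

```python
# Human capital vs. friction cost valuations of lost productivity (hypothetical case).
daily_wage = 240.0           # dollars per work day
work_days_lost = 120         # total absence attributed to the illness
friction_period_days = 45    # assumed days until output is fully restored
replacement_costs = 3_000.0  # assumed hiring and training outlays during the friction period

human_capital_loss = daily_wage * work_days_lost
friction_cost_loss = daily_wage * min(work_days_lost, friction_period_days) + replacement_costs

print(f"Human capital approach: ${human_capital_loss:,.0f}")
print(f"Friction cost approach: ${friction_cost_loss:,.0f}")
```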
Benefit-Cost Concepts
Where there are multiple choices, the rational person will select one or more alternatives that maximize his or her satisfaction, or what an economist would call the person’s utility; however, our utilities are specific, if not unique, to each of us. Although individual utilities are cumbersome to capture, aggregating the utilities of a population presents far greater problems. Thus,
decisions that involve more than one person usually require a common measure. Most analyses are based on aggregating all of the costs and benefits to individuals regardless of whether their utilities are typical and to whom or from whom they accrue. That is why so many studies end up choosing dollars; however, not all agree on that. Whatever the metric chosen, one ends up with a ratio of benefits to costs, and the higher that ratio the better an alternative. In health care, however, we must also consider to whom these benefits and costs accrue.
Circular No. A-94 defines cost-effectiveness as “a systematic quantitative method for comparing costs of alternative means of achieving the same stream of benefits or a given objective.” In other words, the economist would say no to the request to “get me the most for the least money,” because it is a mathematical impossibility. The two feasible formulations are:
• Get me the most benefit for a given sum of money (i.e., maximize my benefit–cost ratio).
• Get me a given benefit package at the lowest cost (i.e., minimize my cost–effectiveness ratio).
Analysts often retreat to their previously prepared position of trying to produce a set of benefits defined by the politicians at the least possible cost and then labeling the results cost-effectiveness, even though it is really cost-minimization. That leaves the hardest part, the valuation of benefits, up to the political process. At higher levels of government, where the trade-offs are between noncomparable benefits such as health care, highways, police protection, and recreational services, the only common metric for comparison usually turns out to be money. The analyst has to be clear which is called for and be consistent in reporting the results. Pauly (1995) suggested that where money is used to measure benefits and (1) there is a fixed budget and (2) there is little variation in the preferences for outcomes, then cost-effectiveness analysis should be used, but where there is a variable budget and varying utilities of outcomes, cost-effectiveness is “much less suitable, in theory than cost–benefit analysis” (p. 111).
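A toy example may help fix the two feasible formulations. In the sketch below, the candidate programs and their costs and benefits are invented: with a fixed budget the analyst maximizes the benefit obtainable, and with a fixed benefit target the analyst minimizes the cost of reaching it; asking for both at once is the “most for the least” request the economist must decline.

```python
from itertools import combinations

# Hypothetical candidate programs: (name, cost in $M, benefit in $M or QALY-equivalents).
programs = [("A", 4, 9), ("B", 3, 5), ("C", 5, 11), ("D", 2, 3)]

def all_portfolios(items):
    for r in range(1, len(items) + 1):
        yield from combinations(items, r)

def best_for_budget(budget):
    """Formulation 1: maximize total benefit without exceeding the budget."""
    feasible = [p for p in all_portfolios(programs) if sum(c for _, c, _ in p) <= budget]
    return max(feasible, key=lambda p: sum(b for _, _, b in p))

def cheapest_for_target(target):
    """Formulation 2: minimize total cost while reaching the benefit target."""
    feasible = [p for p in all_portfolios(programs) if sum(b for _, _, b in p) >= target]
    return min(feasible, key=lambda p: sum(c for _, c, _ in p))

print("Max benefit for a budget of 7: ", [n for n, _, _ in best_for_budget(7)])
print("Least cost for a benefit of 14:", [n for n, _, _ in cheapest_for_target(14)])
```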
At this point, the analysis splits into two streams. One stream estimates the costs, whereas the other values the outcome. This chapter looks next at the cost side, which is the easier path to consider.
11.6 AGREEING ON THE RESOURCES REQUIRED
All too often the analysis team begins by talking about monetary costs. This is the wrong place to start in a cost analysis. When trying to compute the costs of a wedding reception, few people would start with a dollar figure per guest; instead, most would consider estimates of the number of guests, the menu for food and drink, the portions offered, the number of helpings per person, and the staffing needed. After these are defined, it is a simple matter to determine the costs by multiplying these resources by their market prices and totaling them up. That gives us an estimate of the total variable cost of the reception. Then there are the fixed costs of the reception, such as the chef and hiring the hall. However, the kitchen staff only has a certain capacity, and if the guest list exceeds a certain number, the staff would have to be augmented;
therefore, many fixed costs apply only over a specific volume range. These are sometimes called step-variable costs or semi-fixed costs.
After we have the cost of our ideal menu and level of hospitality, it is time to figure out whether it falls within the acceptable budget range. Chances are it does not, and we would have to agree to spend more on the wedding than we had planned, cut some costs out of the reception, or cut down on some other aspect of the wedding.
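A short sketch of the reception arithmetic, with every price, portion, and staffing rule invented for the example, shows how variable, step-variable, and fixed costs combine:

```python
import math

# Wedding reception cost build-up (all quantities and prices hypothetical).
guests = 140
food_per_guest = 45.0         # dollars
drinks_per_guest = 20.0       # dollars
servers_per_50_guests = 2     # step-variable staffing rule (assumed)
server_fee = 150.0            # dollars per server for the evening

variable_cost = guests * (food_per_guest + drinks_per_guest)
servers_needed = servers_per_50_guests * math.ceil(guests / 50)
step_variable_cost = servers_needed * server_fee
fixed_cost = 2_500.0 + 1_200.0   # hall rental + chef, independent of guest count

total = variable_cost + step_variable_cost + fixed_cost
print(f"Variable: ${variable_cost:,.0f}  Step-variable: ${step_variable_cost:,.0f}  "
      f"Fixed: ${fixed_cost:,.0f}  Total: ${total:,.0f}")
```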
11.7 DETERMINING RELEVANT COSTS
Relevant costs are those affected by the decision being considered. There are two methods of estimating costs: aggregate costs and marginal costs. One arrives at aggregate costs by taking the total costs of a division, department, or other organizational unit and then dividing them by the number of service units or products produced. This figure is usually relatively easy to produce from existing departmental cost data, but using this method is not recommended. It does not take into account how processes, and hence costs, change with volume, nor does it include relevant costs that occur outside of the given organizational unit. Relevant costing ignores those costs that are not affected by a decision, including those that are real but fixed. For example, the comparison of two treatments for pneumonia would not include the costs of diagnostic tests unless different test protocols were associated with the new treatment regimen.
Relevant costs for the two treatments are likely to include the following:
• Changes in costs of medicines, consumable supplies, and tests caused by the introduction of the alternative method
• Changed labor costs, including physicians, nurses, and pharmacists, and ancillary services
• Costs altered by the changes in length of stay or location of treatment
• Changes in costs incurred by the patient and the patient’s family, including access costs and lost income, if any
• Changes in overhead costs associated with the new alternative including amortization of new specialized equipment and altered space requirements
Hospital costs represent an especially difficult problem because so many costs are lumped into the overhead cost categories and then allocated to the various operating departments.
A process-based cost study is usually a must in the hospital setting. The usual hospital cost reports are so loaded with fixed costs that it is necessary to map out the process, directly identify the resource inputs required, and then price them.
Marginal/Incremental Cost Concepts
Where demand is changing, the appropriate cost is not the average cost of the service, but the cost of adding the next additional service unit (the marginal or incremental unit) or subtracting it. Even though pricing, and hence revenue, may be related to the average cost of a unit of service, the cost of changing output is not. If I am the dean of a medical school and am asked to add 10 more students, I may find that my costs of the preclinical lectures are nil, because there are extra seats in the lecture hall. However, during the clinical years I may find that I have to divert clinical faculty from clinic hours to rounding with students at considerable opportunity cost of revenue. I will also have to add desks in the anatomy lab and secure more cadavers. These incremental costs may be very different from what one gets by dividing total existing costs by the current number of students.
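A back-of-the-envelope sketch of the dean’s problem, with every figure invented, shows how far the incremental cost per added student can diverge from the average cost per current student:

```python
# Average vs. incremental cost of adding 10 medical students (hypothetical figures).
current_students = 160
current_total_cost = 16_000_000.0                  # dollars per year
average_cost = current_total_cost / current_students

added_students = 10
incremental_costs = {
    "preclinical lectures": 0.0,                              # empty seats: no added cost
    "clinical preceptor time": 35_000.0 * added_students,     # opportunity cost of diverted clinic hours
    "anatomy lab desks and cadavers": 8_000.0 * added_students,
}
incremental_total = sum(incremental_costs.values())
incremental_per_student = incremental_total / added_students

print(f"Average cost per current student:   ${average_cost:,.0f}")
print(f"Incremental cost per added student: ${incremental_per_student:,.0f}")
```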
Handling Inherent Process Uncertainty
Earlier chapters dealt with the technological and political uncertainties of producing a desired outcome. The analysis team must decide how to deal with those uncertainties in its economic and financial analysis. Sometimes the uncertainty is handled with multiple analyses up front, a sort of branching in the analysis; however, the usual way of handling uncertainties is through sensitivity analysis in which the inputs of uncertain parameters are allowed to take on a realistic range of values to define the range over which the analysis is sensitive to that parameter.
For example, in the Washington State study of evidence-based treatment of substance abuse and mental illness disorders, the Washington State Institute for Public Policy conducted a sensitivity analysis with many different parameters using a Monte Carlo simulation technique. Researchers assigned a probability distribution to each range of values and ran the simulation model 10,000 times with the values of each variable sampled randomly from its distribution. This process indicated that there was only a 1% probability that the investment would provide a negative return to the taxpayers. This was very important to the credibility of the analysis, because so many of the measures and variables were so difficult to define and to measure (Aos et al., 2006).
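The mechanics of such a simulation can be sketched in a few lines. In the example below, the distributions, parameter ranges, and the per-dollar benefit model are all invented stand-ins; the point is only to show how repeated random draws yield an estimated probability that net benefits fall below zero.

```python
import random

random.seed(1)
RUNS = 10_000

def one_run():
    """One draw of net benefit per program dollar (all distributions assumed)."""
    effectiveness = random.triangular(0.10, 0.25, 0.18)   # reduction in incidence/severity
    benefit_per_point = random.gauss(22.0, 5.0)           # dollars of benefit per point of effect
    cost_per_dollar = 1.0                                  # normalized program cost
    decay = random.uniform(0.8, 1.0)                       # loss of effect over time
    return effectiveness * benefit_per_point * decay - cost_per_dollar

results = [one_run() for _ in range(RUNS)]
prob_negative = sum(r < 0 for r in results) / RUNS
mean_net = sum(results) / RUNS

print(f"Mean net benefit per $1 invested: ${mean_net:.2f}")
print(f"Probability of a negative return: {prob_negative:.1%}")
```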
Much of the value of a simulation would be to identify the interaction of various factors as a policy is implemented. For example, we have seen a number of governors prepared to ensure access to health care for virtually all their state’s population; however, very little has been said about what will happen to health care demand and supply and the resulting prices. Given the rapid rise in prices following the starts of Medicare and Medicaid in 1965, this has to be a matter for concern given the access-expanding provisions of the Affordable Care Act (ACA).
Figure 11-3 illustrates one model that might be used to simulate the effects of increased access. As demand increases, so do the direct costs of services and the capital costs of providing the necessary delivery infrastructure, especially supplying sufficient primary care providers (PCPs), nurses, and community-based services; however, these changes will not take place in a vacuum. Policy variables can be manipulated to affect those costs, including the new covered service definitions, the management and organization of the new efforts, the amount of waste and medical error experienced, the financing and incentives of the program, and whether middlemen are used and what their margins will be. This again would seem to call for a simulation model to assess the overall impact of these policy variables.