Discussion Post 10-Evaluating Human Service Programs
If you were a program director or administrator, please discuss the steps you would take to build evaluation into ongoing processes.
Please make sure your post includes references and citations in APA Style.
(Book) Management of Human Service Programs
Judith A. Lewis
CHAPTER 10 EVALUATING HUMAN SERVICE PROGRAMS
In the context of current public policy debates about the value of human service programs and resource challenges in the human services field, program evaluation can be expected to become increasingly relevant (Carman & Fredericks, 2008). Ultimately, evaluation is needed to let us know whether services have taken place as expected and whether they have accomplished what they were meant to accomplish. This kind of information can provide the basis for making sensible decisions concerning current or projected programs. Program evaluation can be defined as: The systematic collection of information about the activities, characteristics, and results of programs to make judgments about the program, improve or further develop program effectiveness, inform decisions about future programming, and/or increase understanding. (Patton, 2008, p. 39) Patton, who is particularly interested in the utilization of evaluation findings, adds that “utilization-focused program evaluation (italics in original) is evaluation done for and with specific intended primary users for specific, intended uses” (2008, p. 39). Human service evaluation, if it is to be of value, must be seen as an integral part of the management cycle and must be closely connected to ongoing management processes and daily activities. Its results must be disseminated to and understood by the people most concerned with program functioning, including community members, funding sources, and service providers, as well as administrators. It can be practical only if individuals who influence service planning and delivery see it as useful.
PURPOSES OF EVALUATION Evaluators use research techniques, applying them to the needs and questions of specific agencies and stakeholders. Evaluation can be used to aid in administrative decision making, improve currently operating programs, provide for accountability, build increased support for effective programs, and add to the knowledge base of the human services.
ADMINISTRATIVE DECISION MAKING Evaluation can provide information about activities being carried out by the agency as well as data describing the effects of these activities on clients. Information about current activities can help decision makers deal with immediate issues concerning resource allocations, staffing patterns, and provision of services to individual clients or target populations. At the same time, data concerning the outcomes of services can lead the way toward more rational decisions about continuation or expansion of effective programs and modification or elimination of less effective ones. Decisions concerning the development of new programs or the selection of alternate forms of service can also be made, not necessarily on the basis of evaluation alone but with evaluative data making a significant contribution.
IMPROVEMENT OF CURRENT PROGRAMS An evaluation can be used to compare programs with the standards and criteria developed during the planning and program design stages. Evaluation can serve as a tool to improve program quality if it provides data that help contrast current operations or conditions with objectives and evidence-based standards. Activities performed as part of an agency’s operations can be compared or contrasted with standardized norms, such as professional or legal mandates, or with the agency’s own plans, policies, and guidelines. Evaluation of service outcomes means that results can be compared with identified community needs, leading to an assessment of the program’s adequacy. Data collection technologies such as employee attitude surveys, management audits, quality audits, cultural competency assessments, and ethics audits can provide information that is very useful in improving program or agency operations. With systematically collected data on hand, agency personnel can make improvements either in the nature of the services or in the ways they are delivered. Although evaluation does not necessarily identify the direction an agency should take, its systematic application does point out discrepancies between current and planned situations. Without it, quality cannot be improved.
ACCOUNTABILITY Most human service programs are required to submit yearly evaluation or monitoring reports for funding sources or public agencies, and many specially funded projects are required to spend set percentages of their budgets on evaluation. Agencies are accountable not only to funding organizations but also to the clients and communities they serve and even society as a whole. Since the 1990s, an “accountability movement” in government and not-for-profit organizations has focused increased attention on adherence to laws and regulations and responsible stewardship of finances as well as effective and ethical implementation of programs. The latter concern, especially regarding the accomplishment of desired program outcomes, is addressed through evaluation. According to Schorr (1997), in the past “outcomes accountability” and evaluation were separate activities, the former the province of administrators and auditors and the latter of social scientists. Now, however, “the accountability world is moving from monitoring processes to monitoring results. The evaluation world is being demystified, its techniques becoming more collaborative, its applicability broadened, and its data no longer closely held as if by a hostile foreign power” (p. 138). Dissemination of evaluation reports describing the agency’s activities and their effects can help reinforce program accountability. People concerned with agency performance can gain knowledge about the results of services, and this information undoubtedly increases community members’ influence on policies and programs.
BUILDING INCREASED SUPPORT Evaluation can also enhance an agency’s position by providing the means for demonstrating—even publicizing—an agency’s effectiveness. One realistically fears that evaluation results may show that a program has no effects or negative effects, an issue that is addressed later in this chapter. This is a legitimate concern, but a responsible agency administrator will welcome evaluation results that could help improve programs, as well as positive results that could showcase the accomplishments of programs. Evaluation provides information that helps the agency gain political support and community involvement. Evaluative data and analyses can enhance the agency’s well-being if they are disseminated to potential supporters and funding sources as well as to agency staff.
ACQUIRING KNOWLEDGE REGARDING SERVICE METHODS Much of what is termed “program evaluation” has historically consisted of routine monitoring of agency activities. This approach is still common, but fortunately increasing attention is being paid to the assessment of program outcomes. Additionally, program evaluation methods can be used to develop knowledge about the relationships between interventions and desired outcomes. New knowledge regarding program effectiveness has historically been associated with experimental research—typically randomized controlled trials, which will be briefly mentioned later—testing first efficacy (in a controlled setting such as a lab) and then effectiveness (in a program setting). Controlled experiments help determine whether clearly defined program technologies can lead to measurable client changes. Although such research-oriented studies rarely take place in small agencies with limited resources, they do play a major part in establishing the effectiveness of innovative approaches. Program designers need to be able to make judgments concerning the effects of specific services. Knowledge concerning such cause-and-effect relationships can be gained through reviewing research completed in other settings, carrying out ongoing internal evaluations, and utilizing the services of researchers to implement special studies of program innovations. More feasible than controlled experiments, given limited resources for large-scale experimental designs, are recent developments in evidence-based practice and best practices benchmarking. Evidence-based practice and best practices benchmarking were discussed in Chapter 3 as important aspects of program design. They are mentioned here to emphasize the importance of explicit documentation of program operations to enable organizational learning and knowledge development.
PRODUCERS AND CONSUMERS OF EVALUATIONS An elegantly designed evaluative research study is of little use if the people who have a stake in an agency’s efforts do not recognize it as important. Evaluation efforts involve a number of groups, referred to here as stakeholders, including not only professional evaluators but also funding sources, administrators and policy makers, service providers, clients or consumers, and community members. These groups serve as both producers and consumers of evaluations, and they can have tremendous influence on the process if they see themselves as owning it. Patton’s (2008) utilization-focused evaluation principles and methods have been shown to be very useful in designing and implementing evaluations so that the findings are actually used. Historically, the various actors in the evaluation process—professional evaluators, funding sources, policy makers, administrators, and service providers—had separate roles and did not often collaborate, or even communicate, with each other. These role distinctions are blurring, however, as practitioners realize they need knowledge about their programs to make improvements and as policy makers and others realize that the complexity of the evaluation process requires broad-based involvement of all key stakeholders. Two evaluation approaches to be discussed later have addressed this issue: empowerment evaluation and participatory evaluation.
PROFESSIONAL EVALUATORS A sizable proportion of the evaluation that takes place in human service organizations is performed by professional evaluators, researchers who use their skills either within the evaluation and research departments of large agencies or as external consultants offering specialized assistance. Whether evaluators are employed by the organization or contracted as consultants, they are expected to bring objectivity to the evaluation process. Their presence brings to the evaluation task a degree of rigor and technical excellence that could not be achieved by less research-oriented human service providers. At the same time, evaluators need to fully engage in dialogue with the client agency and any other stakeholders to ensure that their approach is appropriate for the particular setting. At its worst, an evaluation can focus on the wrong variables, use inappropriate methods or measures, draw incorrect conclusions, or simply be irrelevant to ongoing agency work. Evaluators may produce reports that, although accurate, are too esoteric to be readily understood or used by the people who decide among programs or allocate resources. Evaluators who are overly detached from agency decision making often fail to affect services. Another negative aspect of the use of external consultants as evaluators is agency workers’ tendency to place evaluative responsibility totally in the consultants’ hands. Evaluation can work effectively only if attention is paid to ongoing collaboration with and involvement of agency staff. If no one but the expert evaluator takes responsibility for assessment of progress toward goals, workers will see evaluation as unfamiliar, threatening, and potentially unpleasant. Effective evaluators use their technical expertise not to impose evaluation on unwilling audiences but to work closely with others in developing feasible designs. Thomas (2010) suggests that the best approach is collaboration between outside evaluators and agency staff. The agency and the evaluator should have a clear agreement regarding the goals and methods of the evaluation and the specific roles to be played by the consultants and staff. If consultants work with internal evaluation committees, they can help administrators, service providers, and consumers clarify their goals, expectations, and questions so that the evaluation will meet identified needs. The external evaluator’s objectivity and internal agency workers’ active involvement bring the best of both worlds to the evaluation process.
FUNDING SOURCES Funding sources, particularly organizations providing grants or contracts to human service agencies, can have a positive effect on evaluation. Human service agencies are often required to evaluate projects as part of their accountability to funding sources. Grant applications are expected to include discussions of evaluation designs, and these sections are carefully scrutinized before funding decisions are made. Funding sources could have even more positive effects if attention were focused more on evaluation content rather than simply on form. Funders should not expect that the dollar amount spent on evaluation consultants necessarily coincides with the quality of the research, nor should they accept simple process monitoring as sufficient. Rather, funding sources should press for more effective evaluation of program effectiveness, for both direct consumers and communities.

POLICY MAKERS AND ADMINISTRATORS Policy makers and administrators are among the primary users of evaluation because they make decisions concerning the fates of human service programs. Decision makers need evaluation data to inform them of alternatives, just as evaluators need decision makers to make their work meaningful. Agency managers, as well as board members, can make evaluation work more effectively for them if they try to identify the real information needs of their agencies. Evaluations do not have to be fishing expeditions. They can be clear-cut attempts to answer the real questions decision makers pose. If administrators and objective evaluators work together to formulate research questions, the resulting answers can prove both readable and helpful.
HUMAN SERVICE PROVIDERS A football coach in the 1960s (probably Darrell Royal of the University of Texas) said that his teams rarely passed the ball because “when you pass, three things can happen, and two of them are bad.” In a similar way, there can be three outcomes of an evaluation, and two of them would be seen by staff as “bad”: the evaluation could show that the program made no difference, made things worse for clients, or made desired improvements for clients. It is understandable that staff may feel threatened at the prospect of an evaluation. This is even more likely because providers of services have often been left out of the evaluation process. Involving staff and other stakeholders in the design and implementation of the evaluation can mitigate such concerns and will probably also result in a better evaluation process through the use of the program knowledge of these stakeholders. Staff members may also feel victimized by evaluation. They are typically asked to keep accurate records and fill out numerous forms, but they are not involved in deciding what kinds of data are really needed. They are asked to cooperate with consultants making one-time visits to their agencies, but they are not told exactly what these consultants are evaluating. They are asked to make sudden, special efforts to pull together information for evaluators to use, but they are not encouraged to assess their progress toward goal attainment on a regular basis. Many human service workers feel that evaluation is a negative aspect of agency operations, serving only to point out shortcomings in their work, and they tend to provide information in such a way that their own programs are protected. Human service providers could play a much more active and useful role in evaluation if they were involved in the design and implementation of the evaluation, using consultants primarily as technical assistants. Service providers are familiar with changing consumer needs, the relative effectiveness of varying approaches, and the agency itself. Through their involvement with an evaluation committee, they can ensure that the real goals of their programs, the objectives being evaluated, and the work actually being done are all properly addressed. As agencies move increasingly toward becoming learning organizations, as discussed in Chapter 9, staff are more likely to appreciate the value of evaluation in improving their operations and showing the outside world the value of their programs.
CONSUMERS AND COMMUNITY MEMBERS Consumers and other community members need to be involved in planning and evaluating, from initial goal setting through developing evaluation designs and assessments of program effectiveness. Consumers are in a good position to be aware of the strengths and weaknesses of service delivery systems and the degree to which observed community needs are being met. Current principles of empowerment of staff, clients, and community members in the human services (Hardina et al., 2006) support involvement of these stakeholders in the process. Regardless of the form their participation takes, citizens have a major role to play in deciding how, why, and for whom human services should be provided. Human service agencies are accountable to the communities they serve. Agency managers have a responsibility to ensure that their programs work to accomplish goals that both staff and consumers understand and value.
THE SCOPE OF HUMAN SERVICE EVALUATION Human service evaluation can take many forms. The approach used in any one setting is likely to be a function of several variables, including (a) the resources and expertise available for use in evaluations, (b) the purposes for which evaluation results will be used, and (c) the orientations and philosophies guiding agency decision makers. Program evaluations may be categorized in two ways. An evaluation may be categorized based on its purpose: a summative evaluation looks at a program’s accomplishments or results, typically at or near program completion; a formative evaluation occurs during program operation and is intended to provide feedback and information that staff can use immediately to make program changes and improvements. Evaluations may also be categorized as process evaluations or outcome evaluations. Using the systems approach to program design from Chapter 3, process evaluations focus on activities or outputs: the types and numbers of services that are provided. Outcome evaluations look at intermediate or final outcomes: how client conditions, skills, or knowledge have changed as a result of the program. Human service programs vary tremendously in their approaches to evaluation, running the gamut from simple program monitoring to controlled experiments studying client outcomes. Regardless of their use of resources, depth, or concern for objectivity, however, evaluation efforts need to be reasonably comprehensive if they are to serve any of their stated purposes. Evaluation should provide, at a minimum, basic information concerning program processes and outcomes. Multiple data collection methods and measures (discussed later) will be needed in nearly any substantive evaluation.

TYPES OF EVALUATIONS Essentially, program evaluation has four basic objectives:
1. To provide information about the achievement of the program goals and objectives (outcome evaluation)
2. To provide descriptive information about the type and quantity of program activities or inputs (process evaluation)
3. To provide information that will be useful in improving a program while it is in operation (formative evaluation)
4. To provide information about program outcomes relative to program costs (cost effectiveness), costs per output (unit costs, or efficiency), or financial benefits (cost benefit)
We will first review these four types of evaluation and will then address evaluation methods, followed by a discussion of a process for conducting an evaluation.

There are three general types of outcomes: individual, or client-focused, outcomes; program and system-level outcomes; and broader family or community outcomes (W. K. Kellogg Foundation, 2004). Individual client outcomes are the most common focus of a program and its evaluation. An individual client outcome such as having a former foster youth obtain independent living and a job adequate for self-support could also be part of a program with a system outcome of improving the quality of life for former foster youth. More broadly, family outcomes might include increased parent-child-school interactions or keeping children safe from abuse. A community outcome might be increased civic engagement in a low-income community.
Ultimately, then, at the program level the basic question underlying outcome evaluation must be, “To what degree have clients or the community changed as a result of the program’s interventions?” Client change can be evaluated in terms of level of functioning before and after receipt of services. Whether services are designed to affect clients’ adjustment, skills, knowledge, or behaviors, some type of assessment tool must be used to determine whether change in the desired direction has taken place. Outcome evaluation requires the routine use of measures such as gauges of behavior change and standardized or specially designed instruments. If a program has been well designed (Chapter 3) and has a complete information system (Chapter 9), it will address all of the elements listed except a plan for use of results. This final point will be covered later, when a program evaluation process is presented.
PROCESS EVALUATION Process evaluations can assess the extent to which a program is implemented as designed and provide a means for determining whether members of target populations were reached in the numbers projected and whether specified services were provided in the amounts required at the quality level expected. As in the case of an outcome evaluation, the program’s logic model and objectives provide a valuable foundation for the process evaluation. A specific type of process evaluation is a formative evaluation. As noted, formative evaluations occur during program implementation, whereas summative evaluations are done at the end of a program or a program cycle. A formative evaluation is intended to “adjust and enhance interventions … [and] serve more to guide and direct programs—particularly new programs” (Royse, Thyer, & Padgett, 2010, p. 112). Using qualitative methods such as interviews, a formative evaluation can also assess how the program implementation process is proceeding, suggesting possible changes in implementation. This type of process evaluation provides funders, the agency board, and any other stakeholders information on how the program is doing with reference to previously identified objectives and standards and also helps agency administrators make adjustments in either the means or the targets of service delivery. Feedback mechanisms must be built into the service delivery system to keep managers informed regarding whether the program is on course, both fiscally and quantitatively.
Process evaluations are usually ongoing; that is, they require the continual retrieval of program data. A process that funding organizations use to receive regular reports on program implementation is known as monitoring. Program monitoring, according to Rossi et al. (2004), is:
the systematic documentation of aspects of program performance that are indicative of whether the program is functioning as intended or according to some appropriate standard. Monitoring generally involves program performance in the domain of program process, program outcomes, or both. (p. 64)
Program goals and objectives are used as the standards against which the evaluation is conducted. If, for example, Meals on Wheels states in its annual plan of operations that it will deliver 1 meal daily to each of 100 clients per program year, or an annual total of 36,500 meals, then it would be expected that approximately 3,042 meals will be provided per month. A process evaluation would entail the assessment of monthly efforts to provide the prorated number of meals, including whether they were provided to eligible clients (for example, the target population). This type of evaluation would also examine how the agency’s human resources were used to provide the services.
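To make the proration arithmetic above concrete, here is a minimal Python sketch of a monthly monitoring check. It uses the plan figures from the Meals on Wheels example; the sample monthly counts and the 10 percent variance threshold are illustrative assumptions rather than anything specified in the text.

```python
# Monthly monitoring check for the Meals on Wheels example (illustrative sketch).
# The sample counts and the 10% variance threshold are assumptions for illustration.

ANNUAL_MEAL_TARGET = 100 * 365            # 100 clients, 1 meal per day = 36,500 meals
MONTHLY_TARGET = ANNUAL_MEAL_TARGET / 12  # prorated target: roughly 3,042 meals per month

def monitor_month(meals_delivered: int, eligible_meals: int, threshold: float = 0.10) -> dict:
    """Compare one month's outputs against the prorated plan."""
    shortfall = (MONTHLY_TARGET - meals_delivered) / MONTHLY_TARGET
    return {
        "monthly_target": round(MONTHLY_TARGET),
        "percent_of_target": round(100 * meals_delivered / MONTHLY_TARGET, 1),
        "eligible_share": round(100 * eligible_meals / meals_delivered, 1),
        "flag_for_review": shortfall > threshold,
    }

# Example month: 2,800 meals delivered, 2,750 of them to eligible (target-population) clients
print(monitor_month(meals_delivered=2800, eligible_meals=2750))
# {'monthly_target': 3042, 'percent_of_target': 92.1, 'eligible_share': 98.2, 'flag_for_review': False}
```

A simple report like this, produced each month from the program's information system, is the kind of feedback mechanism the text describes for keeping managers informed about whether the program is on course.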
A monitoring process typically includes a representative of the funding organization who is assigned to track implementation of the funded program as well as involvement from designated program staff, usually the program manager and a fiscal officer.
A final type of process evaluation is known as quality assurance (Royse et al., 2010, pp. 132–134). This answers the question “Are minimum and accepted standards of care being routinely and systematically provided to patients and clients?” (Patton, 2008, p. 304). This technique is most commonly associated with the assessment of medical or clinical records and other aspects of the operation of a program or facility that needs or desires accreditation. Governmental organizations such as Medicare and accrediting organizations such as the Joint Commission on Accreditation of Healthcare Organizations and the Council on Accreditation of Family and Children Services issue standards.
EFFICIENCY AND EFFECTIVENESS The data gathered through outcome and process evaluations are sometimes used to measure efficiency and effectiveness. Efficiency is a measure of costs per output, often framed as unit cost. For example, a program that can deliver more hot meals to home-bound seniors for the same cost is seen as more efficient than a program with higher costs for providing the same number of meals. Effectiveness, on the other hand, measures cost per outcome, often described as cost effectiveness. Here, the measure is the cost per successful service outcome, such as gaining employment for an at-risk teenager. A program that arranges jobs of a defined quality for a certain number of youth for a certain cost per job is more cost effective than a similar program that gets jobs for fewer youth at the same cost or has higher costs to acquire jobs for the same number of youth. The simplest efficiency evaluation involves the determination of unit cost. This figure is obtained by dividing the number of service outputs into the amount of dollars allocated (input) for that service. For example, an agency receiving $150,000 per project period to provide counseling services to 150 delinquent children per year could project a unit cost of $1,000 if the unit of service were defined as each unduplicated client (child) served. Of itself, the cost per unit of $1,000 is meaningless without accompanying process and outcome evaluations and without a comparison to at least one other, similar program whose services have also undergone process, outcome, and efficiency program evaluations. In this example, if the outcome is preventing recidivism for at least one year after the completion of the program, the cost of the program can be divided by the number of successful outcomes to determine cost effectiveness on that measure. This becomes more complicated if a program has more than one service and more than one outcome. Ideally, a program will have one overriding outcome and only one major service component, making this analysis manageable.
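The unit cost and cost-effectiveness arithmetic described above can be expressed in a few lines of Python. This is only a sketch: the budget and client count mirror the chapter's counseling example, while the number of successful outcomes (youth with no recidivism for one year) is a hypothetical figure added for illustration.

```python
# Unit cost (efficiency) and cost per successful outcome (cost effectiveness).
# Budget and client count follow the chapter's counseling example; the count of
# successful outcomes is a hypothetical illustration, not a figure from the text.

def unit_cost(total_cost: float, units_of_service: int) -> float:
    """Efficiency: cost per output (here, one unduplicated client served)."""
    return total_cost / units_of_service

def cost_effectiveness(total_cost: float, successful_outcomes: int) -> float:
    """Effectiveness: cost per successful outcome (e.g., no recidivism for one year)."""
    return total_cost / successful_outcomes

budget = 150_000
clients_served = 150
no_recidivism = 90  # assumed: 90 of the 150 youth had no recidivism within a year

print(unit_cost(budget, clients_served))                    # 1000.0 -> $1,000 per client served
print(round(cost_effectiveness(budget, no_recidivism), 2))  # 1666.67 -> about $1,667 per success
```

As the text emphasizes, these ratios become meaningful only alongside process and outcome evaluations and in comparison with at least one similar program.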
Royse et al. (2010, pp. 258–260) have listed the steps of a cost-effectiveness study. The first three steps should already have been done as part of good program design and implementation. Defining the program model and outcome indicators is the first step. The second step involves developing hypotheses or study questions. For example, a simple question would be, what were the program costs compared to the program results? The third step is computing costs, mostly accomplished through the development of the program budget. This step can be complicated if one program has multiple groups of clients and service packages, but eventually it should be possible to allocate all program costs (staff salaries and benefits, facilities, other non-personnel costs) so that they may be related to program outcomes. The fourth step, collecting outcome data, should already be occurring through the program’s information system. Step five involves computing program outcomes, which would generally be the number of clients for whom there were successful outcomes (for example, no recidivism or rehospitalization, acquisition of self-sustaining employment or independent living status). Next, computing the cost-effectiveness ratio is done by dividing program cost by the number of successful outcomes. The final step, conducting a sensitivity analysis, involves looking at the assumptions about the relationships among program interventions, costs, and effects. For example, if some clients do not attend all assigned sessions, outcomes would not be expected to be as favorable as for clients who attend all sessions. Less common and beyond the scope of the discussion here is cost-benefit analysis (Levin, 2005; Royse et al., 2010, pp. 262–265). This goes beyond cost effectiveness by attributing a financial value to the outcome, thus seen as a benefit to society. A final aspect of effectiveness takes a much broader perspective. Although it is beyond the scope of this book, which focuses on programs, it should be noted that many human service programs are funded and implemented to have a broader effect on social conditions such as ending chronic homelessness or improving community well-being. National evaluations in areas such as welfare reform to increase self-sufficiency of poor families sometimes focus on this level. Such evaluations look at outcomes such as rates of homelessness, but another way to examine results at this level is to assess adequacy of services. For example, if a metropolitan area has 1,000 foster youth who emancipate each year by turning 18 and there are only programs to fund services for 250 youth, this becomes a social policy issue in terms of the adequacy of support to fully address an identified problem.

EVALUABILITY ASSESSMENT Before reviewing the actual design and implementation of an evaluation, evaluability assessment will be presented here as a unique type of evaluation. If a program has been thoroughly and thoughtfully designed and implemented, including the use of evidence-based practices, logic models, well-written goals and objectives, and a complete management information system, a program evaluation can be relatively easy. Although the human services field has made tremendous progress in recent decades regarding the design and implementation of programs, there are still many cases in which a program that has been in operation for some time is not configured in a way that makes evaluation easy.
For this reason, a preliminary step in the program evaluation process may be to do an evaluability assessment: “a systematic process for describing the structure of a program and for analyzing the plausibility and feasibility of achieving objectives; their suitability for in-depth evaluation; and their acceptance to program managers, policy-makers, and program operators” (Smith, 2005, p. 136). When evaluability assessment emerged in the 1970s, the purpose was “to assess the extent to which measurable objectives exist, whether these objectives are shared by key stakeholders, whether there is a reasonable program structure and sufficient resources to obtain the objectives, and whether program managers will use findings from evaluations of the program” (Trevisan, 2007, p. 290). Trevisan found common recommendations that pointed to weaknesses in program de
