To complete your assignment, compose a cohesive document that addresses the following (see the attachment for detailed instructions):
- APA citing
- No plagiarism
Week 2 Assignment
Evaluation
To prepare for this assignment, pay particular attention to the following Learning Resources:
· Review this week’s Learning Resources, especially:
· Read Changing Role of Evaluation – see the Word document.
· Basic Principles of Management – see the PDF.
Final Assignment:
In completing this assignment, please be sure that your work follows essay format. Your work should include significant responses that are supported by outside research. Each response should be a minimum of 75 words and should include a reference list. Your responses should include examples and should be entirely in your own words:
· Using your knowledge from what you have learned this term, analyze and provide an example of how using evaluation results can improve human performance technology projects.
· You have learned about non-profit organizations this term and the importance of their business practices. Using what you have learned, analyze the importance of the shift from administrative process data to outcome measurement, impact evaluation, and sustainability.
· You have learned this term that stakeholders play an important role in the success of an organization.
· Summarize how stakeholder analysis and the development of logic models are related to an evaluation plan.
· Using what you have learned this term, define and analyze the implications of the five measurement and evaluation plans for the workplace.
· Using your workplace as an example, select the appropriate measures of reaction, learning, and confidence that might be used in the evaluation process.
· Submit your work as a Microsoft Word document in APA style.
· Use font size 12 and 1” margins.
· Use at least three references from outside the course material.
CHAPTER SIXTEEN
Understanding Context: Evaluation and Measurement in Not-for-Profit Sectors
Dale C. Brandenburg
Many individuals associated with community agencies, health care, public workforce development, and similar not-for-profit organizations view program evaluation as akin to a visit to the dentist’s office: it is painful, but at some point it cannot be avoided. A major reason for this perspective is that evaluation is seen as taking money away from program activities that do good for others, that is, intruding on valuable resources intended for delivering the “real” services of the organization (Kopczynski & Pritchard, 2004). The underlying logic is that, since there are limited funds available to serve the public good, why should any portion of program delivery be allocated to something other than serving people in need? This is not an unreasonable point, and one that program managers in not-for-profits face on a continuing basis.
The focus of evaluation in not-for-profit organizations has shifted in recent years from administrative data to outcome measurement, impact evaluation, and sustainability (Aspen Institute, 2000), that is, a shift from the short-term to the long-term effects of interventions. Evaluators in the not-for-profit sector view their world as the combination of technical knowledge, communication skills, and political savvy that can make or break the utility and value of the program under consideration. Evaluation in not-for-profit settings tends to value teamwork, collaboration, and generally working together. This chapter is meant to provide a glimpse at a small portion of the evaluation efforts that take place in the not-for-profit sector. It excludes, for example, efforts in public education, but it does provide some context for workforce development efforts.
CONTRAST OF CONTEXTS
Evaluation in not-for-profit settings tends to have different criteria for the judgment of its worth than is typically found in corporate and similar settings. Such criteria are likely to include the following:
How useful is the evaluation?
Is the evaluation feasible and practical?
Does the evaluation hold high ethical principles?
Does the evaluation measure the right things, and is it accurate?
Using criteria such as these seems a far cry from the concepts of return on investment that are of vital importance in the for-profit sector. Even the question of transfer of training can sometimes be of secondary importance to assuring that the program is described accurately. Another difference is the pressure of time. Programs offered by not-for-profit organizations, such as an alcohol recovery program, take a long time to show their effects, and by the time results are available, the organization has moved on to the next program. Instead, evaluation is often relegated to measuring the countable, the number of people who have completed the program, rather than the life-changing impact that decreased alcohol abuse has on participants. While the latter is certainly important, the typical community-based organization (CBO) is limited in its resources to perform the long-term follow-through needed to answer the ultimate utility question. Thus, the choice of what is measured tends to be the result of negotiation with stakeholders. The broad goals of evaluation tend to be grouped among the following:
Understanding and sharing what works with other organizations and communities;
Building sustainability of programs and ensuring funding;
Strengthening the accountability of the programs with various public constituencies;
Influencing the decisions of relevant policy makers and program funders;
Building community capacity so that future engagements have greater community participation; and
Understanding where the program is going and reflecting on its progress so that future programs can be improved.
These goals reflect some of the typical objectives for applying evaluation in not-for-profit settings. The goals embody specific activities that can be designed to collect evidence on a program’s effectiveness, be accountable to stakeholders, identify projects for improvement, clarify program plans, and improve communication among all groups of stakeholders. The types of programs or actions that are designed to improve outcomes for particular individuals, groups, or communities considered in this chapter include the following:
Direct service interventions: improve the nutrition of pre-school children;
Research endeavors: determine whether race disparities in emergency room care can be reduced;
Advocacy efforts: campaign to influence legislation on proper use of infant car seats; and
Workforce training programs: job training program to reduce unemployment among economically disadvantaged urban residents.
The results of evaluation in not-for-profit settings are typically designed to provide information for future decision making. Most efforts can be grouped into three categories: process evaluation, short-term evaluation, or long-term (outcome) evaluation. In fact, it is rare to see a program evaluation effort that does not include some process evaluation. Process evaluation is considered important because it typically yields an external view of how the program was conducted; in other words, it provides a detailed description of the objectives, activities, resources used, management of the program, and involvement of stakeholders. It would be difficult to judge any outcomes of a program without understanding its components in detail. Short-term evaluation deals with the accomplishment of program objectives or the intermediate links between the program activities and the long-term outcomes. Outcome evaluation is associated with long-term effects, such as changes in health status or systems, that are often beyond the range of a typical evaluation effort.
STAKEHOLDER ANALYSIS
One of the key tools used to structure the evaluation process in not-for-profit settings is the stakeholder analysis. It is a key initial stage for developing the primary issues and evaluation questions from which the other stages of the evaluation process can be built. The stakeholder analysis is designed to identify the needs and values of the separate stakeholders and combine the results so that an adequate plan can be developed. It is rare to find an evaluation effort in a not-for-profit setting in which fewer than three stakeholder groups have a major interest in the effort. The stakeholder analysis is a means to organize the political process and create an evaluation plan, as well as to satisfy the different perspectives and needs for information. A first step in the process is to identify all groups or individuals with a stake in the process, followed by a second step of dividing the groups into primary and secondary members. Primary members are those stakeholders who are likely to be direct users of the evaluation results, whereas secondary stakeholders may have an interest in the findings but are not likely to be directly affected by the results of the evaluation.
For example, in the evaluation of a workforce development project for urban adults, primary stakeholders would include the sponsoring agency or funder, the manager of program delivery, program developers, instructors, a third-party program organizer, and partner organizations that might include community-based organizations, a local business association, and a representative from the city government. Secondary stakeholders might include parents or relatives of program participants, local welfare officials, advocacy groups, and the participants themselves. A list of stakeholders can be determined by answering the following questions:
Who pays for program staff time?
Who selects the participants?
Who champions the desired change?
Who is responsible for after-program behavior/performance?
Who determines success?
Who provides advice and counsel on program conditions?
Who provides critical guidance?
Who provides critical facts?
Who provides the necessary assistance?
Note that these questions do not identify any of the purposes of the evaluation; they are only a means to distinguish who should be involved in evaluation planning. The amount of stakeholder involvement in the evaluation can be limited, or it can be substantial enough that stakeholders assist in the design of the overall process, including the development of data collection protocols. The involvement of stakeholders should also increase the value and usefulness of the evaluation (Greene, 1988), as well as the use of the findings. Such a process increases ownership of the report and possibly leads to a common vision of collective goals (Kopczynski & Pritchard, 2004).
The second step of performing a stakeholder analysis involves the identification of the requirements, or primary evaluation purposes, and aligning those against the list of primary stakeholders. The matrix shown in Table 16.1 is an example of this analysis for a publicly funded workforce development program for inner city adults.
The “requirements” or purposes of the evaluation can be developed in a number of ways, but the major sources of the list usually come from the published description of the program, supplemented by interviews with the stakeholders. It would be ideal if the matrix could be completed in a group setting, but it is more likely that the evaluator develops the chart independently and then shares the results in a program advisory meeting. An “X” in a given box indicates that a requirement matches the possible use of that information by the selected stakeholder. One can note, for example, that information on program strengths and weaknesses is needed by staff internal to the development and execution of the program but is not of particular interest to those outside that environment. Another outcome of this process is at least a general expectation of the reporting relationships that would occur during the communication of the interim and final results of the evaluation.
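A matrix of this kind requires no special tooling. The sketch below, in Python, shows one way to record and print such a cross-reference; the stakeholder and requirement names are hypothetical placeholders, not the actual entries of Table 16.1.

```python
# Minimal sketch of a stakeholder-by-requirement matrix (in the style of Table 16.1).
# All stakeholder and requirement names here are hypothetical placeholders.

requirements = [
    "Program strengths and weaknesses",
    "Participant employment outcomes",
    "Cost-effectiveness by program area",
    "Lessons learned for replication",
]

stakeholders = [
    "Funding agency",
    "Delivery manager",
    "Instructors",
    "Partner CBOs",
]

# An "X" means the stakeholder is a likely user of that information.
matrix = {
    ("Program strengths and weaknesses", "Delivery manager"): "X",
    ("Program strengths and weaknesses", "Instructors"): "X",
    ("Participant employment outcomes", "Funding agency"): "X",
    ("Participant employment outcomes", "Partner CBOs"): "X",
    ("Cost-effectiveness by program area", "Funding agency"): "X",
    ("Lessons learned for replication", "Funding agency"): "X",
    ("Lessons learned for replication", "Partner CBOs"): "X",
}

# Print one row per requirement with a column per stakeholder.
header = "Requirement".ljust(38) + "".join(s.ljust(18) for s in stakeholders)
print(header)
print("-" * len(header))
for req in requirements:
    row = "".join(matrix.get((req, s), "").ljust(18) for s in stakeholders)
    print(req.ljust(38) + row)
```

Read this way, any row with an entry for a given stakeholder also implies that the stakeholder belongs on the distribution list for interim and final reports, which is the general expectation of reporting relationships mentioned above.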
While it might seem that the stakeholder analysis focuses on the program itself, other outputs could be entered into the analysis. Results of evaluation data are often used to tell compelling stories that can increase the visibility and marketing of the organization and increase accountability with board members and the community, as well as attract additional sources of revenue. These findings were supported by a national survey of local United Way organizations, as reported by Kopczynski & Pritchard (2004).
EVALUATION PURPOSES
While the stakeholder analysis establishes the initial phase of understanding evaluation purposes, further definition is supplied by a list of evaluation issues. These issues are typically generated during the stakeholder analysis, after further understanding of the program description. Evaluation issues can range from general questions about program impact to detailed questions on the selection of service providers. A sample set of questions or primary issues (Gorman, 2001) from a low-income workforce development project that leverages resources from a local housing authority is listed below:
Describe the major partners leading, managing, and staffing each major activity area.
Describe the major program areas and the target population service goals by area. What are the major strategies to be employed to reach these service goals?
To what extent does the program assist in decreasing or alleviating educational barriers to sustained employment? What were the major instructional and training strategies employed to attain the stated goals and objectives?
To what extent does the program assist in decreasing or alleviating personal and social barriers to employment: poor work histories, cultural barriers, substance abuse and developmental disabilities, and limitations of transportation and adequate childcare?
What were the participant impacts: wages, self-efficacy, and employability?
Was there differential effectiveness of the program relative to categories of the target population: public housing residents, non-custodial parents, learning-disabled individuals, persons with limited English proficiency, and other economically disadvantaged groups?
Was there differential effectiveness relative to the categories of high-demand sectors targeted by the program: building/construction trades, health care, and hospitality?
What evidence indicates that the program management and its partners can actively disseminate/replicate this program in other regions via its current programs?
What are some key success stories that serve to illustrate overall program value to participants and community?
How cost-effective was the project in general terms? By program area? How did costs or efficiency of service change over the course of the project?
What were the major lessons learned in the project? How do these relate to self-sufficiency for the target population? Community economic development? Leadership effectiveness?
These issues were generated from the stakeholder analysis and are not yet grouped into evaluation components. Combining them with the stakeholder matrix assists in defining the overall evaluation plan; it provides the framework for developing the data requirements as well as the type of measurement needed for instrumentation.
LOGIC MODELS
The final phase of planning the evaluation is the development of a logic model, a description or map of how the major components of an evaluation are aligned; that is, the connection between how the program is designed and its intended results. Logic models can be used in any phase of the program development cycle (McLaughlin & Jordan, 2004), from initial design to examining long-term impact. A logic model is a means to make the underlying theory behind the intervention explicit and to discover its underlying assumptions. Even the development of the model itself, that is, mapping out all of its components, can be instructive for program staff and other stakeholders. A logic model is particularly useful for evaluators, both as an advance organizer and as a planning tool for assessment development (McLaughlin & Jordan, 2004). In many situations, such as community or public health, the specification of a logic model is a proposal requirement. Whether an evaluator builds on an existing logic model or develops a preliminary version for the evaluation plan, the map created is a visualization of how the human and financial investments are intended to satisfy program goals and lead to program improvements. Logic models present the theoretical and practical program concepts in a sequence from the input of resources to the ultimate impact.
Most logic models follow a standard nomenclature (see the Kellogg Foundation guidelines [W.K. Kellogg Foundation, 2007] and McLaughlin & Jordan, 2004) that contains the following elements:
Resources: program inputs like needs assessment data and capabilities (financial, human, organizational partnerships, and community relationships) that can be allocated to the project.
Activities: the tasks or actions the program implements with its resources to include events, uses of tools, processes, or technology to perform actions to bring about intended changes or results.
Outputs: the direct products of program activities or services delivered by the program, even reports of findings that may be useful to other researchers.
Outcomes: both short-term (one to three years) and longer-term (four to six years) specific changes in the targeted individuals or organizations associated with behavior, functioning, knowledge, skills, or status within the community. Short-term outcomes are those that are assumed to be “caused” by the outputs; long-term outcomes are benefits derived from intermediate outcomes.
Impact: the ultimate consequences or results of change that are both intended and unintended for the individuals, organizations, or communities that are part of the system, generally occurring after program conclusion.
Whatever process is used to create a logic model, such as a focus group, another useful outcome is a listing of the key contextual factors not under the control of the program that might have positive or negative influences on it. These context factors can be divided into two components, antecedent conditions and mediating variables (McLaughlin & Jordan, 2004). Geography, economic conditions, and characteristics of the target group to be served are examples of the former, whereas staff turnover, new government incentives, and layoffs at a major local employer are examples of the latter.
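For planning purposes, the standard elements above, together with the contextual factors, can be recorded in a simple structured form so that each component can be traced from resources through to impact. The sketch below is a minimal illustration in Python with hypothetical entries for a workforce training program; it is an aid for thinking about the mapping, not part of the Kellogg or McLaughlin and Jordan guidance.

```python
# Sketch of a logic model as a structured record (hypothetical entries).
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    resources: List[str] = field(default_factory=list)        # inputs allocated to the project
    activities: List[str] = field(default_factory=list)       # actions taken with those resources
    outputs: List[str] = field(default_factory=list)          # direct products of the activities
    outcomes: List[str] = field(default_factory=list)         # short- and long-term changes (1 to 6 years)
    impact: List[str] = field(default_factory=list)           # ultimate intended and unintended consequences
    context_factors: List[str] = field(default_factory=list)  # conditions outside program control

# Hypothetical fragment for a workforce training program.
model = LogicModel(
    resources=["Sponsor funding", "Partner CBO staff", "Needs assessment data"],
    activities=["Deliver a 12-week construction trades course", "Provide case management"],
    outputs=["80 participants complete training", "Completion and attendance reports"],
    outcomes=["Participants placed in jobs within 6 months", "Wage gains sustained at 2 years"],
    impact=["Reduced unemployment among the targeted urban residents"],
    context_factors=["Local labor market conditions", "Staff turnover", "New government incentives"],
)

# Walk the model from inputs to impact, mirroring the left-to-right flow of a logic model map.
for element, entries in vars(model).items():
    print(f"{element}:")
    for entry in entries:
        print(f"  - {entry}")
```

Laying the elements out this way also makes it easy to spot an activity with no corresponding output, or an outcome with no activity assumed to cause it, before the evaluation plan is finalized.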
The example shown in Figure 16.1 is a portion of a logic model from public health and concerns a neighborhood violence prevention program (Freddolino, 2005). The needs were derived from data that showed that poverty and violence were significant threats to the health of residents in Central County. The general goals of the program were to increase the capacity of local neighborhood groups to plan and implement local prevention programs and work to establish links between existing programs and agencies. Many logic models are likely more complex than the example shown in that specific actions are linked to given outputs and outcomes in order to differentiate the various conceptual links in a program.
Logic models can be constructed in various ways to illustrate different perspectives on a program. Figure 16.1 represents an activities-based approach that concentrates on the implementation of the program; it is most useful for program management and monitoring. A second approach, based on outcomes, seeks to connect the resources to the desired results and is specifically geared to subdividing the short-term, long-term, and ultimate impact of the program. Such models are most useful for designing evaluation and reporting strategies. The logic model developed for a program should suggest the type of measurement required in order to prove or improve the model specified. Since the model is related to performance objectives, it can also assist in judging the merit or worth of the outcomes observed.
A third type, the theory approach, emphasizes the theoretical constructs behind the idea for the program. Such models concentrate on solution strategies and the prior empirical evidence that connects the selected strategies to potential activities and assumptions. These are most useful for program planning and design. Regardless of the type of model selected, each helps the evaluator provide more comprehensive descriptive information that can lead to effective evaluation design and planning.
Collaborative projects can also suffer from a lack of comparability in implementation due to inconsistent allocation or availability of resources across partner organizations. This lack of comparability in program resources often thwarts attempts to gather data for comparing results across sites and leads to misinterpretation of findings or no consistent findings. An example evaluation plan (Gorman & Brandenburg, 2002) from a multiple-site consortium project managed by a systems integrator and funded by the U.S. Department of Labor is provided in Table 16.3. In this case, the systems integrator, American Community Partnerships, is an affiliated organization of a large U.S. labor union whose goals, in part, are to promote high-wage union jobs for economically disadvantaged urban residents. This particular program operated in some of the very poorest neighborhoods of the cities involved.
One can note that in complex arrangements such as the one represented in Exhibit 16.1, the management of internal and external partner organizations is crucial to obtaining the needed services for the target population, as well as to meeting the long-range goal of program sustainability. Initial sponsor-allocated funds cover the start-up of the partnership, but the leveraging of existing funds and other resources is required for the overall effort to be successful. Other evaluations can be considerably more complex when funded by two or more federal agencies. The report provided by Hamilton and others (2001) is such an example; it describes an evaluation funded over five years by the U.S. Departments of Labor and Education.
Foundation Guidelines
Numerous resources are available on outcome measurements that are designed to assist not-for-profit organizations. A current major source of information can be found through the website of the Utica (New York) Public Library, associated with the Foundation Center, a foundation collaborative. The site is annotated and contains links to major foundations, professional associations, government sponsors, technology uses, data sources, statistical information, best practices, benchmarking, and assessment tools.
Another set of evaluation models can be derived from an examination of community change efforts. Two organizations that have funded a considerable number of these efforts are the W. K. Kellogg Foundation and the Annie E. Casey Foundation. Both organizations offer assistance in designing evaluation efforts. These foundations, among others, want their grantees to be successful, so they have developed evaluation guidelines that help potential grantees build evaluation into their proposals. More extensive guidelines exist for larger-scale efforts. Especially instructive in this regard is Kellogg’s perspective on evaluating social change, for which it has constructed a framework specifically for external evaluators. Based on the dynamics of social change and the context of the program, the framework details four types of evaluation designs: exploratory, predictive, self-organizing, and initiative renewal. Given the complexity of the type of program under review, evaluators are charged to pay attention primarily to the major aspects of the system change and to disregard features that are of minor consequence.
Local and regional support organizations can also assist agencies and program staff in understanding the major issues to be considered. These organizations can be quite effective because they understand the local and regional context.
DATA COLLECTION AND MEASUREMENT
As can be concluded from the previous discussion, data collection and measurement for evaluation in not-for-profit settings can range from the deceptively simple to the complex. Most data collection schemes tend to be customized to the environment of the program being evaluated, for a number of reasons. First, the measurement of a program is most often linked to stakeholder considerations and the needs of the sponsoring organization, as opposed to creating an elegant experimental design. Both quantitative and qualitative data are used to answer the evaluation questions posed at the outset of the investigation.
Second, there is a strong bias for producing data that are useful for all stakeholders, and many users, such as directors of community-based organizations, are not used to interpreting sophisticated statistical analyses. Data on the effectiveness of programs often can be derived from solid descriptions of program activities and survey data. This is not to say that measurement is based on the lowest common denominator of sophistication, but the end result is to use the findings in a way that can improve program activities. The limited availability of resources within not-for-profits generally means that program staff collect data for monitoring purposes, that is, what is countable from an administrative perspective (Kopczynski & Pritchard, 2004). Other organizational shortcomings may also affect the development of a comprehensive data collection scheme, namely “the lack of appreciation of data, lack of training, poorly developed information systems, and high turnover rates” (Kopczynski & Pritchard, 2004, p. 652), in addition to the issue of limited budgets.
A third reason to customize data collection is that it may be limited by budget considerations. Elegant and rigorous designs cost more to implement and can have an indirect effect on program activities if certain target participants cannot take part in the primary intervention. This is not to say that rigorous designs are not applied in not-for-profit settings, especially in the public health domain. One such effort (Huffman & others, 2002), funded by the Packard Foundation, presents a wealth of information on outcome measurement, valid evaluation constructs, and a set of principles to apply in such settings. Participants in workforce development programs, for example, are often difficult to recruit, so allocating a portion of them to a control condition is not cost-effective in many cases. Even more challenging would be follow-up data collection on a homeless population (Kopczynski & Pritchard, 2004).
It is probably instructive at this point to introduce an example of the type and range of data needed to satisfy requirements in a large-scale workforce development effort. Using the same program (Gorman & Brandenburg, 2002) from the example in Table 16.3, Exhibit 16.1 represents a listing of the data elements selected to provide a comprehensive evaluation.
REPORTING RESULTS
If evaluation reports are to be useful for all stakeholders, it is important to consider the communication of evaluation results early in the design of the evaluation process. Such consideration is needed because the demand for clear and credible information from not-for-profits is high (Kopczynski & Pritchard, 2004). In general, it is prudent to negotiate what results are to be presented to whom at the time the stakeholder analysis is conducted. Final evaluation results are typically reported in two stages: (1) a formal written report that contains all details of the design, implementation, data collection, data analysis, findings, and conclusions, with possible recommendations, and (2) a summary version, such as a PowerPoint presentation, that lists the highlights of the investigation. Other reports, such as interim findings, data from a single site in a multiple-site investigation, results of a set of interviews or a survey, or case descriptions, may also be added, depending on funding and stakeholder needs.
Regardless of the type of report, there is still a need to organize reports so that stakeholders may be guided to make appropriate decisions and plan future actions based on the findings. An evaluation that contains as many data elements as depicted in Exhibit 16.1 would need to be organized efficiently if stakeholders were to be able to comprehend and sift through the volume of data likely to be generated. Smock & Brandenburg (1982) suggest a tool to aid in the process of organizing data and presenting findings as portrayed in Table 16.4.
While the classification shown may be an oversimplification of the available data, it is nonetheless quite useful in presenting data to unsophisticated users. The concept underlying its structure is that the entirety of the information can be ranked hierarchically, from its continuous form into three levels. These levels represent an artificial trichotomy, ranging from the very general (overall success), to the success or failure of specific program components, to very specific findings that may have meaning for only a single stakeholder. Level I information is the most general: it must be inferred from a variety of findings (rolled up across data elements), permits a general summary of findings, can be applied to most settings in which the question asked can be answered in a few statements, and is useful for general management “go or no go” decisions, the kind of final judgment that a sponsoring program officer would be interested in. This type of data would be the primary target for a summary presentation of the findings.
Level II information represents data useful for identifying the strengths and weaknesses of a program by its representative components, for example: the materials worked well, instructional delivery could be improved, stakeholders were satisfied, recruitment should be strengthened, or case work referrals worked well. Comparisons across sites can be made only in cases in which program elements are identical or very similar, and this comparison information, when legitimate, could be included in the Level I report. Level II information is most useful to program staff and managers responsible for improving specific program components.
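One way to read this classification is as a roll-up: many specific findings (Level III) are aggregated into component-level judgments (Level II), which in turn support a small number of overall statements (Level I). The sketch below illustrates that aggregation in Python with hypothetical data elements and a deliberately simple scoring rule; it is an illustration of the idea only, not the Smock and Brandenburg procedure.

```python
# Illustrative roll-up of findings into three reporting levels
# (hypothetical data elements and a deliberately simple aggregation rule).

# Level III: specific findings, each tied to a program component and rated 1 (poor) to 5 (strong).
level_3_findings = [
    {"component": "Instructional materials", "finding": "Workbooks rated clear by participants", "rating": 4},
    {"component": "Instructional materials", "finding": "Reading level too high for some cohorts", "rating": 3},
    {"component": "Recruitment", "finding": "Referral pipeline from partner agencies underused", "rating": 2},
    {"component": "Case management", "finding": "Childcare referrals completed on time", "rating": 5},
]

# Level II: strengths and weaknesses by component (here, the mean rating per component).
by_component = {}
for f in level_3_findings:
    by_component.setdefault(f["component"], []).append(f["rating"])
level_2 = {component: sum(r) / len(r) for component, r in by_component.items()}

# Level I: a single overall judgment inferred across components ("go or no go" style).
overall = sum(level_2.values()) / len(level_2)
level_1 = "Program on track overall" if overall >= 3 else "Program needs corrective action"

print("Level II (by component):", level_2)
print("Level I (overall):", level_1)
```

In practice the aggregation rule would be negotiated with stakeholders rather than reduced to a mean, but the direction of the roll-up, from single data elements to component judgments to an overall statement, is the point of the hierarchy.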