Project 2 Program Evaluation Proposal: Project 2 involves designing a program evaluation for students' respective programs (i.e., Advocacy and Organizational Development; Educational Psychology, Applied Research and Evaluation). The proposal should address the impacts of the program, its implementation, or both. The proposal should clearly delineate a feasible evaluation plan that draws on course readings, lectures, exercises and presentations. The proposal is worth 25% of your total grade and should address all elements described below.
Deliverable: Proposals should address the elements detailed below. Proposals should be typewritten, double-spaced, in 12pt font. Submit proposals by end of day 30 July 2022 via CANVAS.
ELEMENTS OF THE PROPOSAL
The proposal shall include: (a) an introduction, (b) method, (c) proposed analysis, (d) discussion, and (e) a reference page. Consult Mertens and Wilson (2019) and the APA Manual (2010) to address all elements of the proposal. Brief descriptions of each section are provided below.
a. Introduction – Provide relevant background and focus for the evaluation proposal. What does the program seek to accomplish? Why is it important? What are the goals, objectives, and purposes of the program? What is the program's "theory of cause and effect" (i.e., why and how will the program accomplish its goals, objectives and purposes)? What question(s) does the evaluation seek to answer? Why are these questions selected? Is there any literature that can inform the evaluation (reference it appropriately)? See Mertens and Wilson (2019) Chapters 7-8.
b. Method. Design—Identify and describe the evaluation design. Explicate how the evaluation design relates to the goals, objectives, and purposes of the evaluation and comment on the appropriateness of the design to answer the evaluation objectives (e.g., Is the design appropriate to the evaluation goals, objectives and purposes? Why did you select this design over other alternatives? What sources of potential invalidity are ruled out with this design and what sources might remain?). Data and Measures—Describe the data you propose to collect for the evaluation (e.g., what types and quantity of data will be collected?) and comment on the appropriateness of these data (e.g., What constructs will you focus on and how will you measure them? Are these appropriate given the evaluation goals, objectives and purposes? What evidence of reliability and validity can be provided for your measures?). Be sure to include example measures if available. Procedures—Describe the context and logistics of the proposed data collection (e.g., What are the characteristics of the setting in which data will be collected? What are the requirements in terms of personnel, space, equipment, and means of financing? What is the timeline for completing the evaluation? Do you anticipate any potential problems/obstacles and how do you propose to deal with them?). See Mertens and Wilson (2019) Chapters 9-11.
c. Analyses and Findings. Analyses—Describe the analytical approach taken to evaluate the data and comment on the appropriateness of the analyses in relation to the evaluation goals, objectives and purposes (e.g., How do you plan to analyze the data and how will these analyses address the evaluation goals, objectives and purposes? What specific analytical techniques will you use to address the evaluation goals, objectives and purposes [e.g., quantitative—ANOVA, correlation, regression; qualitative—content analyses]?). Discuss strengths/weaknesses of your proposed analytical approach. Findings—Describe the anticipated findings of the evaluation and comment on how the findings relate to the evaluation goals, objectives and purposes (e.g., How will the findings specifically address the evaluation goals, objectives and purposes or provide evidence to support, refute or inform the program? What alternative interpretations may be ruled out by your proposed approach and what plausible alternative or rival explanations can be made? Be sure to describe limitations or qualifiers of your proposed approach.). See Mertens and Wilson (2019) Chapter 12.
d. References—List all references cited in the proposal in APA editorial style.
Project 2 Evaluation Form
Individual Project 2 Program Evaluation Proposal: Project 2 involves designing a program evaluation for students' respective programs (i.e., Advocacy and Organizational Development; Educational Psychology, Applied Research and Evaluation). The proposal should address the impacts of the program, its implementation, or both. The proposal should clearly delineate a feasible evaluation plan that draws on course readings, lectures, exercises and presentations. The proposal is worth 25% of your grade.
Background & Focus (6 Points) | Not at all 0 | Partially 1.5 | Completely 3 | Total Points
a. Does the narrative describe the background and context for the evaluation? | | | |
b. Does the narrative describe relevant aspects of the program (e.g., its components, logic model or program theory) and its purpose (e.g., goals, objectives, and evaluation questions)? | | | |
Evaluation Method (12 Points) | Not at all 0 | Partially 1.5 | Completely 3 | Total Points
c. Does the narrative describe the data used to assess the evaluation objectives (e.g., types and amount of data collected and why)? | | | |
d. Does the narrative comment on the appropriateness of the data (e.g., relevance of data to goals, objectives and purposes; operationalization of constructs; measurement qualities—reliability and validity)? | | | |
e. Does the narrative describe the evaluation design and comment on its appropriateness for the evaluation (e.g., appropriateness of the design given the goals, objectives and purposes; advantages and disadvantages of the design—threats to validity, alternative designs, pros and cons)? | | | |
f. Does the narrative describe the context and logistics of the proposed data collection (e.g., What is the setting for data collection and what are the requirements in terms of personnel, space, and equipment? What is the timeline for data collection and for completing the evaluation? Are there any anticipated obstacles/problems and how do you propose to deal with them?)? | | | |
Analysis and Findings (6 Points) | Not at all 0 | Partially 1.5 | Completely 3 | Total Points
g. Does the narrative describe the analytical approach taken to evaluate the data and comment on the appropriateness of the analyses (e.g., How will you analyze the data and how will these analyses address the evaluation goals, objectives and purposes? What specific analytical techniques will you use to address the evaluation goals, objectives and purposes [e.g., quantitative—ANOVA, correlation, regression; qualitative—content analyses]?)? | | | |
h. Does the narrative describe anticipated findings of the evaluation and comment on how findings relate to the evaluation goals, objectives and purposes (e.g., How will the findings specifically address the evaluation goals, objectives and purposes or provide evidence to support, refute or inform the program? What alternative interpretations may be ruled out by your proposed approach and what plausible alternative or rival explanations can be made? Be sure to describe limitations or qualifiers of your proposed approach.)? | | | |
References (1 Point) | Not at all 0 | Partially 0.5 | Completely 1 | Total Points
i. Are references cited in accordance with APA guidelines (e.g., author, year, title, source and pages)? | | | |
Article Review
The purpose of this evaluation was to examine the efficacy of a Rural Infant Care
Program (RICP) designed to reduce infant mortality rates in rural communities from nine states
(Gortmaker, Clark, Graven, Sobol, & Geronimus, 1987). The RICP is based on the notion that
high infant mortality rates are due to deficits in the perinatal system (e.g., system deficits–See
Table 1). Accordingly, the RICP proposes that by bringing together key personnel (e.g., local
providers, medical school personnel, state health department) improvements in the perinatal
system (e.g., training providers, increasing referral rates, regionalizing tertiary centers) can be
made which would lead to lower levels of infant mortality in rural communities (See Table 1).
The RICP provided funding to 10 medical schools with programs designed to improve
the delivery of health services to mothers and infants in rural areas. These programs aimed to
improve access to perinatal care, improve the transportation of sick neonates, upgrade
professional skills in rural hospitals, and increase referrals of high-risk pregnancies to tertiary
centers (See Inputs/Obj. in Table 1). These ten sites were selected because they had infant
mortality rates above their state’s level for 1977, had a minimum of 1000 births per year and
were located in states with IPO projects.
A time series design was employed to examine whether the RICP reduced infant
mortality rates above those expected in the absence of the program (See figure 1). This design
has the advantage of controlling for both maturation and history effects. It allows researchers to
determine if the changes in infant mortality rates can be attributed to the RICP intervention.
However, a time series design cannot control for instrumentation effects–the use of the
same/different instrument over various time periods. In the present study infant mortality rates,
natality data and vital statistics from various sources (e.g., National Center for Health Statistics,
State Health Departments, and published/unpublished State vital statistics) were utilized to
examine changes in mortality rates both pre and post-intervention. The reliance on multi-source
data can increase the potential for instrumentation effects (e.g., reliability/validity of the DV
measures) and thus limit the strength of the results of the study. Infant mortality rates from
various sources (e.g., National Center for Health Statistics, State Health Departments, and
published/unpublished State vital statistics) were compared to rule out instrumentation effects.
The results of these analyses showed that there were no significant differences in infant mortality
rates. Thus, the design employed in the present study (partially) controlled for instrumentation
effects as well as maturation and history effects. In addition, this design has the advantage of
controlling for regression and (partially controlling) selection effects as well. These design
characteristics are noteworthy since they allow the evaluators to rule out a number of alternative
explanations for the results, including (1) that the program effects were due to spillover effects from
the IPO projects, (2) that the program effects were due to some special event co-occurring with
the intervention (e.g., history), (3) that the program effects were due to some change in the
participants (e.g., maturation), and (4) that the program effects were due to the composition of
the participants (e.g., selection; regression).
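The cross-source comparison described above can be sketched in code. This is an illustrative simulation only: the rates, the two sources, and all variable names are assumptions for demonstration, not the actual vital-statistics data used by Gortmaker et al.

```python
# Hedged sketch: compare infant mortality rates recorded by two
# hypothetical data sources for the same years, to look for
# instrumentation differences. All numbers are simulated.
import numpy as np

rng = np.random.default_rng(1)
true_rate = np.linspace(25, 15, 14)            # per 1,000 births, 1965-1978 (assumed)
source_a = true_rate + rng.normal(0, 0.3, 14)  # e.g., national statistics (assumed)
source_b = true_rate + rng.normal(0, 0.3, 14)  # e.g., state health dept. (assumed)

diff = source_a - source_b
# A mean difference near zero relative to its standard error suggests
# the sources measure mortality consistently (no instrumentation effect).
se = diff.std(ddof=1) / np.sqrt(len(diff))
t_stat = diff.mean() / se
print(f"mean difference: {diff.mean():.2f}, t-statistic: {t_stat:.2f}")
```

In practice the authors' check would use the real multi-source series; the point of the sketch is only the logic of testing the between-source difference against zero.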
The use of a time series design is particularly appropriate for evaluating the RICP
intervention. This design takes advantage of the fact that multiple data points can be retrieved for
consecutive time periods both before and after the initiation of the RICP (See figure 1). In fact,
three separate comparison tests were conducted to determine whether the RICP was effective.
These included a comparison between RICP and non-RICP areas (e.g., non-RICP areas
included counties not targeted to receive RICP funding with lower infant mortality rates), a
comparison between RICP areas and eligible RICP states not funded, and a comparison between
RICP areas to matched rural areas with IPO funding. Time-series regression models were fit to
examine whether the RICP was effective. Success of the RICP was determined by a change in
infant mortality rates beginning in 1979.
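The time-series regression the authors describe can be illustrated with a minimal segmented (interrupted time series) model: a secular trend plus a level-change term at the intervention year. This is a sketch on synthetic data; the simulated rates, effect sizes, and variable names are assumptions, not the RICP data.

```python
# Minimal interrupted time-series (segmented) regression sketch.
# Synthetic data only: a declining mortality trend with an extra
# drop after the (assumed) 1979 intervention.
import numpy as np

rng = np.random.default_rng(0)

years = np.arange(1965, 1986)          # observation window as in Figure 1
post = (years >= 1979).astype(float)   # indicator: after RICP start (X in 1979)
time = (years - years[0]).astype(float)  # secular trend (controls maturation)

# Simulated rates: baseline 30, trend -0.5/year, level drop of 3.0 post-1979.
rate = 30 - 0.5 * time - 3.0 * post + rng.normal(0, 0.5, len(years))

# Design matrix: intercept, secular trend, and the level-change term.
X = np.column_stack([np.ones_like(time), time, post])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)

print(f"secular trend per year: {beta[1]:.2f}")
print(f"estimated level change at intervention: {beta[2]:.2f}")
```

Separating the trend term from the post-intervention indicator is what lets the design attribute a change in level to the program rather than to the preexisting decline, which is exactly the maturation/history argument made above.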
In general, the RICP was successful in reducing infant mortality rates in nine out of ten
sites. Time series regression models revealed that declines in neonatal mortality were
attributable to the RICP. Furthermore, there was a sharp drop in neonatal mortality beginning in
1979 in the RICP areas. By 1982-1984 the neonatal rates in the RICP areas were similar to those
found in non-RICP areas. In addition, there were no significant differences in postneonatal
activity associated with the RICP and there were no significant changes observed in the non-
RICP comparison areas. Thus, the authors conclude that “The RICP demonstrated the value of
local initiative in addressing these problems and showed that effective cooperation can be
achieved among local physicians and nurses, hospital administrators, local health departments,
state health departments, and tertiary hospitals” (Gortmaker et al., 1987, p. 114).
Though encouraging, the results of the present study have several limitations that pertain
to the external validity of the results. First, as the authors note, the ten sites included in the study
were chosen because they were well organized. That is, the sites already had an existing network
which facilitated the implementation of the RICP intervention. This is particularly important
since it raises questions about the extent to which the RICP may be equally successful in rural
communities without such established networks (e.g., generalizability). The authors note that
despite this limitation infant mortality rates were still reduced. However, the magnitude of this
effect remains an open question; a selection factor may account for the program effects.
A second limitation, also noted by the authors, concerns the lack of random assignment
of geographic areas to treatment and control groups. While there are ethical issues involved with
this decision (e.g., withholding of treatment to control group participants), it is important to
remember that a selection factor may be (partly) responsible for the results of the study. The
lack of random assignment compounds the potential for a selection bias already noted. For
example, patients who are the recipients of RICP benefits (e.g., pregnant women) may differ
from those attending other rural hospitals. Though the authors tried to deal
with this issue by comparing Non-RICP areas with lower mortality rates (e.g., non-RICP
comparison areas), similar mortality rates (e.g., IPO-76 programs) and a matched rural area (e.g.,
IPO-78 programs), it remains unsolved. This issue becomes even more problematic when we
consider the fact that though there were overall reductions in mortality rates in the nine sites,
only “three of the reductions were statistically significant” (Gortmaker et al., 1987, p. 106). This
may suggest that the reductions in infant mortality may be due to some selection X treatment
effect. That is, the combination of a specific site and the treatment used at that site.
Thirdly, because the nature of the intervention required prior planning and coordination
on the part of network participants it is possible that this activity alone may account for the
obtained results. This is clearly an alternative hypothesis which cannot be ruled out by the study.
In fact, the authors note that program meetings in the target areas "were mostly informational [at
first]. . . Important contacts, however, were made at this stage. . .local physicians. . . often
[met] for the first time [with] doctors from tertiary centers” (Gortmaker et al., 1987, p. 97).
Thus, the extent to which these early meetings may have influenced the results of the study is
unknown.
Finally, a larger question remains unanswered–are the expenses associated with the
implementation of the RICP justified by the results? A cost-benefit analysis (Shortell &
Richardson, 1978) would shed some light on this issue. If the cost to benefit ratio was such that
the benefits outweighed the cost then clearly the program could be deemed effective. On the
other hand, if the cost outweighed the benefits, then it would certainly call into question the
desirability of replicating such efforts.
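The benefit-cost logic described here can be made concrete with a toy calculation. All figures below are invented placeholders for illustration; no cost or benefit data were reported for the RICP.

```python
# Hypothetical benefit-cost ratio of the kind the review calls for.
# Every number here is an assumed placeholder, not RICP data.
program_cost = 2_500_000.0            # assumed total program cost (USD)
deaths_averted = 40                   # assumed reduction in infant deaths
benefit_per_death_averted = 100_000.0 # assumed monetized benefit (USD)

total_benefit = deaths_averted * benefit_per_death_averted
bc_ratio = total_benefit / program_cost

# A ratio above 1.0 would favor replication; below 1.0 would argue against it.
print(f"benefit-cost ratio: {bc_ratio:.2f}")  # prints "benefit-cost ratio: 1.60"
```

The real analytical difficulty, which Shortell and Richardson discuss, lies in monetizing the benefits, not in the arithmetic itself.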
Clearly, these issues raise some concerns about the implementation of this program in
other hospitals. In the best case scenario, this program could be implemented in hospitals with
existing networks of providers that are willing to participate in the RICP intervention. In the
worst case, replication of this project in a random fashion would not be desirable. Given that
only three sites reported statistically significant reductions in infant mortality rates, careful
considerations must be given to the fit between the RICP intervention and the characteristics of
the setting in which it will be implemented.
References
Gortmaker, S.L., Clark, C.J.G., Graven, S.N., Sobol, A.M., & Geronimus, A. (1987). Reducing
infant mortality in rural America: Evaluation of the rural infant care program. Health
Services Research, 22(1), 91-116.
Shortell, S. M., & Richardson, W. C. (1978). Health program evaluation. Saint Louis, MO: The
C. V. Mosby Company.
Table 1. Process Model of Evaluation.
Preexisting Conditions:
- High infant mortality
- Individual Differences: low SES; underinsurance; isolation; low education levels; inadequate housing
- System Deficits: poor communication among providers; low referral rates to tertiary centers; limited Knowledge, Skills, Abilities (KSAs) among providers; no regionalization of tertiary centers; limited success with births of LBW infants

Program Components:
- Improve delivery services to mothers & infants in target areas
- Inputs/Obj.: increase access to perinatal centers; improve transportation of sick neonates; upgrade KSAs of providers; increase referral of high-risk pregnancies
- Resources: special funding; medical school; local providers; state health dept.; administrative support; travel costs
- Activities: conduct program meetings; identify problems; conduct needs assessment; provide training; upgrade facilities; transport sick neonates; expand well-child clinics; develop high-risk OBGYN clinics

Intervening Events:
- External: IPO projects; announcement of RICP
- Internal: organization of provider network; regionalization of services; lack of interest in RICP

Impact/Consequences:
- Reduce infant mortality; increase referrals; greater cooperation/communication among providers; increase KSAs of providers
Figure 1. Evaluation Design (O = annual observation of infant mortality rates; X = RICP intervention beginning in 1979).
O1965 O1966 O1967 O1968 O1969 O1970 O1971 O1972 O1973 O1974 O1975 O1976 O1977 O1978 X1979 O1979 O1980 O1981 O1982 O1983 O1984 O1985
Chapter 8 Notes: Evaluation Purpose, Types and Questions

Most evaluations have multiple purposes. To focus the evaluation it is important to identify people with whom to work in the evaluation process—beginning to end—in order to discern why the evaluation is needed, who may need to be involved, the type of evaluation that may be appropriate, and the types of questions that may be pursued. This is an important and delicate task, as mishandling it can undermine or compromise the evaluation process.

Evaluations can be characterized as legitimate or illegitimate. If evaluations are conducted to support foregone conclusions or as public relations pieces, then these are illegitimate purposes for evaluation and should be avoided. Legitimate evaluations are undertaken with an honest desire to gather information that is well balanced and adheres to the standards and guidelines for program evaluation.

The majority of evaluations are conducted for multiple purposes. Therefore, the principal task for the evaluator is to discern the purposes and identify the most appropriate approach to achieve them. Evaluations can serve one of four general purposes:
To gain insights or to determine necessary inputs
To find areas in need of improvement or to change practices
To assess program effectiveness
To address human rights and social justice

We will consider these general purposes, identify the types of evaluation subsumed under each, and present questions that illustrate them.
TO GAIN INSIGHTS OR TO DETERMINE NECESSARY INPUTS

Capacity Building—Evaluators can benefit from knowing about an organization's past experiences with evaluation, as well as its expertise in and willingness to use evaluation as a way of ongoing improvement. If the expertise is absent or insufficient, then evaluators need to undertake capacity building with the organization by instituting training programs, workshops, or community meetings to this end.

Need/Asset—Needs and asset assessments can provide a picture of the community (context); identify demographic groups and geographic areas in need; provide guidance in the prioritization and use of resources, such as funding, to address important needs; and convince policy makers that they should support initiatives.
Example Questions

Context/Organizational—to identify needed inputs, barriers, and facilitators to program development or implementation: What are the values that underlie this project and how do those map onto the values of the parent organization? What is the nature of the relationship between the project and the parent organization in terms of finances, physical space and administrative structures? How does the leadership and organizational structure support or impede the success of the project? What are the characteristics of the staff and the leadership? What is the organizational culture with regard to the project and the evaluation? What resources are available in terms of funding, staffing, organizational support, expertise and educational opportunities?

Capacity Building—to assess and build capacity in the community: What is the organization's past experience with evaluation? What is its expertise or willingness to use evaluations? What training, workshops, or other activities have been done or are needed?

Need/Asset—to assess needs, desires, and assets of community members: What issue or problem is concerning to you? What knowledge do you have about it right now? What resources are available to you to understand this issue? Which groups of people are most affected by the discrepancy between what is and what should be? Are there differences in opinion on this? What has the organization done in the past to address this discrepancy? What are the challenges that still remain? How should we adapt the program we are considering? Should we begin the program? Is there sufficient need to warrant the program?

Relevance—to determine the feasibility of methods to describe program evaluation activities: To what extent are the objectives of the program still valid? To what extent are the activities and outputs of the current program consistent with the overall aims of the program and the intended outcomes? How do these activities contribute to the attainment of objectives?
TO FIND AREAS IN NEED OF IMPROVEMENT OR TO CHANGE PRACTICES

Implementation—is useful when a new program is being implemented or if data indicate that the goals of an existing program are not being satisfactorily achieved. It can be focused on identifying strengths and challenges in the implementation of a program; reassessing the appropriateness of the program under changing conditions; assessing the extent to which the appropriate resources are available; measuring perceptions of the program by the community, staff and participants; determining the quality of the services provided; and monitoring stakeholders' experiences.
Example Questions

Implementation—to characterize the extent to which the intervention or plans were implemented: What are the critical components/activities of this project (implicit and explicit)? How do these components connect to the goals and intended outcomes for this project? What aspects of the implementation process are facilitating success or acting as stumbling blocks for the project? How is the program being implemented and how does that compare to the initial plan for implementation? What changes might be necessary in organizational structure, recruitment materials, support for participants, resources, facilities, scheduling, location, transportation, strategies, or activities? Were the required resources available? To what extent was the program implemented according to the core components described in the plan? How competent were the service providers with specific reference to the program's core components?

Responsive/Participatory—to enhance the program's cultural competence, to verify that participants' rights are protected, and to mobilize community support for the program: What is the match between what was planned and what was delivered? Is feedback being provided to stakeholders? And what impact is the feedback having on the program design and delivery?

Process/Monitoring—to refine plans for introducing a new service, to set priorities for staff training, and to determine whether customer satisfaction can be improved: Is the program achieving its objectives? Is the program measuring up against performance standards? Which aspects of operations have had an impact on the intended beneficiaries? Which factors in the environment have impeded or contributed to the program's success? How is the relationship between the program's inputs, activities, outputs and outcomes most accurately explained? What impacts has the program had beyond its intended objectives? What would have occurred if the program had not been implemented? How has the program performed in comparison to similar programs? Are sufficient numbers of the target audience participating in the program? Is more training of staff needed to deliver the program appropriately?

Formative/Developmental—to improve the content of educational materials and to make midcourse adjustments to improve participant logistics: What is working? What needs to be improved? How can it be improved?
TO ASSESS PROGRAM EFFECTIVENESS
Example Questions

Outcome/Impact—to document the level of success in accomplishing objectives, and to assess skill development, knowledge gain, and/or attitude and behavior changes by program participants: What are the critical outcomes you are trying to achieve? What impact is the project having on its clients, its staff, its umbrella organization, and its community? What unexpected impact has the project had? To what extent were the objectives achieved? What factors influenced the achievement of objectives? What has happened as a result of the program? What real difference has the activity made to the beneficiaries? How many people were affected?

Summative—to decide where to allocate new resources: What results occur? With whom? Under what conditions? With what training? At what costs? Is this program achieving its goals?

Policy—to demonstrate that accountability requirements are fulfilled: What types and levels of policy need to be changed? Which persons, agencies, and so on do we need to contact and influence? What do they want to hear?

Replicability/Transferability—to aggregate information from several evaluations to estimate outcome effects for similar kinds of programs, and to find out which participants do well in the program: What is unique about this project? Can the project be effectively replicated? What are the critical implementation elements? How might contextual factors affect replication? Do impacts vary for different groups of people and programs?

Sustainability—to compare changes in providers' behavior over time: To what extent did the benefits of a program continue beyond the program? What factors influenced the continuance of benefits? What social, political, cultural, economic conditions influence the growth and sustainability of the program? What happens when funding runs out? What conditions need to be in place fo