Respond to at least two of your peers' postings in one or more of the following ways: "See attachment for details"
- APA citing
- No plagiarism
- 48 hours
Week 7 Discussion 1
Evaluating Results and Benefits
Relating Training to Business Performance:
The Case for a Business Evaluation Strategy – William J. Tarnacki II
We will continue to analyze appropriate measures of reaction, learning and confidence. You will also design progress reports that include measures and reports that describe acceptance of an evaluation system or scorecard. In week seven, you will also analyze the importance of using the evaluation process for decision-making and determine the future challenges that the organization might face as they relate to evaluation. I look forward to all your comments!
To prepare for this Discussion, pay particular attention to the following Learning Resources:
· Review this week’s Learning Resources, especially:
· The Role for an Evaluator – See pdf
· Read Week 7 Lecture – See Word doc
· 5 Steps – See doc and pdf
· Unconscious bias – See pdf
Assignment:
Respond to two or more colleagues (see the posts listed below) in the following way:
· Propose two suggestions regarding your colleague's discussion
· Provide a rationale for your suggestions based on your experience and the Learning Resources for the week.
· 3 – 4 paragraphs
· No plagiarism
· APA citing
1st Colleague – Susan Christmas
Week 7 Discussion
The discussion thread for Week 7 is about the role of an evaluator and the necessary steps for becoming an evaluator. Based on our learnings from the week, we are tasked with deciding if we would make a strong evaluator. We are to list the qualifications that we would look for in choosing an evaluator and demonstrate the need for an evaluator for our organization. Finally, we are asked to justify how the addition of an evaluator might improve productivity within the workplace.
Would I make a strong evaluator?
After reading through Chapter 20 of our eBook, I cannot say whether I would make a strong evaluator, but I can say I would have no desire to be a full-time evaluator. Moseley and Dessinger (2010) explain that a full-time evaluator typically does not start their work until after implementation of a project. The evaluator then reconstructs the project and measures the impact of the project initiatives. However, a part-time evaluator starts at the beginning of the project and takes measurements of current levels and desired levels (Moseley & Dessinger, 2010). I think I would be stronger as a part-time evaluator since I would be able to start at the beginning instead of at the end and reconstruct.
Qualifications for Choosing an Evaluator
When choosing an evaluator for my organization, I would look for someone with experience in the evaluation process, someone who has primarily focused on evaluations (a full-time evaluator). Ideally, the evaluator would be part of a national or international evaluation association because, in theory, they would have a good handle on the standards for conducting evaluations (Moseley & Dessinger, 2010).
Need for an Evaluator in our Organization
Returning to a past employer, Echo Bluff State Park, an evaluator was needed to come in and analyze the situation, perhaps even complete some audits, and then report the findings so that performance could be improved.
Justification for an Evaluator Improving Productivity
An evaluator might improve productivity because evaluation provides the information needed to guide the decision-making process. An evaluation cannot be completed unless information is gathered first, and productivity should eventually improve once decisions are made about the changes that need to take place.
References
Moseley, J. L., & Dessinger, J. C. (2010). Handbook of improving performance in the workplace: Measurement and evaluation (Vol. 3). Hoboken, NJ: Wiley.
2nd Colleague – Piper Stewart
Piper Stewart
Week 7
Evaluator Qualifications
Evaluator qualifications are critical to a successful evaluation that will strengthen the program's level of evidence. A single evaluator or a team of evaluators is fine, so long as all the necessary skills are covered. When selecting an evaluator, it helps if the evaluator has worked with similar programs and has demonstrated experience in conducting the specific type of evaluation described in your evaluation plan.

When selecting an external evaluator, focus on the evaluator's background and qualifications. What is the extent of the evaluator's experience with both the content area and the type of evaluation you are planning? Identify the evaluator's experience with similar interventions and with the type of randomized controlled trial (RCT) or quasi-experimental design (QED) the evaluation is using (e.g., an RCT in which schools, rather than students, are randomly assigned to treatment or control). List the key people designing and overseeing the evaluation and ensuring its quality, along with their education/training and the type and years of their experience. Verify that the evaluator can handle the scale and size of the proposed evaluation: provide at least one example of an evaluation that is similar in size, complexity, and number of sites, and discuss the evaluator's experience in managing similar evaluation protocols (e.g., this type of sampling, data collection, and analysis). If relevant, does the evaluator have the capacity to conduct an evaluation with multiple sites across a broad geographic area?

You should also discuss whether there are conflicts of interest related to the evaluation. Conflicts of interest could relate to a part of the program, the evaluator, or the relationship between the two. For example, has the evaluator played a role in designing the program, or is the person supervising the evaluator also responsible for program implementation and success? If there are conflicts of interest, they should be disclosed and measures to mitigate them discussed.
Evaluator Qualifications and Independence Overview. (n.d.). Retrieved from https://americorps.gov/sites/default/files/document/2013_0~1.PDF
International Education Studies, Vol. 3, No. 2 (May 2010), www.ccsenet.org/ies
The Role for an Evaluator: A Fundamental Issue for Evaluation of Education and Social Programs

Heng Luo
Department of Instructional Design, Development and Evaluation, Syracuse University, 330 Huntington Hall, Syracuse, NY 13244, USA
E-mail: [email protected]

Abstract

This paper discusses one of the fundamental issues in education and social program evaluation: the proper role for an evaluator. Based on respective and comparative analysis of five theorists' positions on this fundamental issue, the paper reveals how different perspectives on other fundamental issues in evaluation, such as value, methods, use, and purposes, can result in different roles for evaluators, and how such differences can affect evaluators' responsibilities in different stages of an evaluation. The paper then proposes its own resolution of the issue of the evaluator's role and discusses its implications and limitations.

Keywords: Role for an evaluator, Program evaluation, Fundamental issue

1. Introduction

Fundamental issues in evaluation include the purpose of evaluation, the nature of evaluation, the best methods, strategies, and tools for conducting evaluation, practical concerns such as politics, clients, and resources and their influence on evaluation, as well as the roles, ethics, and responsibilities of evaluators. Fundamental issues are defined as "those underlying concerns, problems, or choices that continually resurface in different guises throughout our evaluation work" (Smith, 2008, p.2). Although these fundamental issues will resurface periodically in the field of evaluation in new forms and cannot really be solved once and for all, awareness of their recurring nature can help one view current problems in evaluation from a better historical perspective. By identifying and examining such fundamental issues, evaluators can gain a deeper understanding of their importance, constraints, and alternative solutions, and thus propose a more effective, yet still impermanent, resolution for existing problems.

1.1 Evaluator's Role as a Fundamental Issue

The fundamental issue discussed in this paper is the role for an evaluator.
Over the years, many evaluation theorists have proposed different roles for evaluators. For example, Scriven sees the evaluator as a "judge" who justifies the value of an evaluand and offers a summative judgment in the final report, while Stake believes an evaluator should be a "program facilitator" who works with different stakeholders and assists them to "discover ideas, answers, and solutions within their own mind". Campbell prefers a "methodologist" role for the evaluator, advocating rigorous experimental designs that yield strong causal inferences, but Wholey believes an evaluator should be an "educator" whose role is to infuse useful information to the potential users of the evaluation, with emphasis not only on the immediate outcome of a program but also on its inputs, implementation, and long-term outcomes. However, terms such as "judge", "methodologist", and "educator" are just metaphors to facilitate understanding and cannot always accurately describe the role an evaluator plays during an evaluation. In fact, evaluators often play different roles in different phases of an evaluation: an evaluator can be a judge during the phase of selecting criteria of merit, a methodologist when collecting data, a program facilitator during program implementation, and an educator during results dissemination. The roles an evaluator takes during an evaluation reflect his or her beliefs about other fundamental issues such as theories, values, methods, practice, and use. Other factors, such as educational background, previous working experience, and the nature and setting of social programs, might also contribute to shaping the proper roles for an evaluator.

1.2 Importance of Evaluator's Role as a Fundamental Issue

Just like any other fundamental issue in evaluation, there is no final resolution to defining the proper role for an evaluator.
However, studying the different roles an evaluator can play, as proposed by different theorists, is still quite important for evaluators, clients, and evaluation as a profession. Turning first to evaluators: studying the different roles for an evaluator is actually studying the different approaches to conducting evaluation. An evaluator's role is not self-claimed; rather, it is defined by the things an
evaluator does during the evaluation. For instance, we would not use the metaphor "judge" to describe the evaluator's role in Scriven's theory if the evaluator's job did not include determining criteria of merit, setting comparative standards, and giving a final summative judgment; nor would we compare the role of the evaluator in Weiss's theory to an educator if providing "enlightenment" (Note 1) were not a primary task for her evaluators. Familiarity with the different roles an evaluator can play allows one to take a more flexible approach to conducting evaluation according to the specific context, the nature of the social program, available resources, and different client expectations.

As for the clients of an evaluation, awareness of the different roles an evaluator can play will help them select the right candidate according to their specific needs, and reach an agreement with the selected evaluator about his or her job responsibilities as well as the obligations clients must meet in order to facilitate the evaluation process. For example, for a program that does not welcome intrusion, an evaluator who prefers doing an experiment might not be the best candidate. For an evaluator who prefers the role of "program facilitator", program administrators should anticipate frequent meetings with the evaluator and make incremental changes according to his or her feedback. Basic knowledge about evaluators' roles is especially important in today's world of globalization, where evaluation in a different nation or culture has become more common. Clarifying the role an evaluator should play beforehand is a good way to avoid surprise, misunderstanding, and conflict later on.

Finally, evaluation as a profession will benefit from a deeper understanding of the roles of evaluators. How is an evaluator different from a social scientist? Can a methodologist be hired to do the job of an evaluator? What is the difference between evaluation and research?
What are the competencies that are unique to an evaluator? These questions arise from the lack of distinction between evaluation and other social science professions. Explicating the proper roles for a professional evaluator will be a good way to address this distinction and solidify the status of evaluation as a profession.

2. Theorists' Positions on the Fundamental Issue of Evaluator's Role

Many theorists in the field of evaluation hold different opinions regarding the proper roles for an evaluator. Their opinions on this issue reflect their overall philosophy of doing evaluation as well as their stances on other fundamental issues in evaluation. This section first discusses the resolutions proposed by different theorists regarding the role of the evaluator, analyzing the strengths and weaknesses of each. A comparative analysis is then conducted to study the positions across those theorists.

2.1 Scriven

Scriven believes that an evaluator's role is to investigate and justify the value of an evaluand. Such investigation and justification shall be supported by joining empirical facts and probative reasoning. "Bad is bad and good is good and it is the job of evaluators to decide which is which" (Scriven, 1986, p.19). He rejects the notion that an evaluator's role is simply to provide information to decision-makers and claims that "the arguments for keeping science value free are in general extremely bad" (Scriven, 1969, p.36). According to Scriven, an evaluator's responsibilities during an evaluation include:
• Determining criteria of merit from needs assessment. Criteria of merit of an evaluand should be its capacity to meet needs. Although an evaluator can use the results of needs assessment conducted by a program developer, sometimes he/she should do an independent needs-analysis. To avoid bias, Scriven advises evaluators to conduct “goal-free” evaluation and formulate questions by ignoring the program goals and looking for all possible effects an evaluand could have.
• Setting comparative evaluation standards. A set of standards should be created by evaluators to assess the program performance. Such standards are used for comparison, either comparison with a set level of performance, or with alternative programs. The latter comparison is preferred by Scriven since he believes that an evaluator will usually make decisions about choosing among alternatives.
• Assessing program performance. An evaluator needs to answer both evaluative and non-evaluative questions. Evaluative questions focus on the effects of the program and should be given top priority. The evaluator should have the skills to collect and analyze both experimental and non-experimental data.
• Offering a final evaluative judgment. An evaluator should synthesize his or her findings into a final report and offer a summative judgment.

Strength and Weakness of Scriven's Position: Scriven differentiates evaluators from researchers and social scientists by emphasizing that value judgment is an integral part of the evaluator's role, and grounds such role
in the logic of evaluation. His "goal-free" evaluation allows evaluators to identify possible side effects of the evaluand and address the concerns of underrepresented stakeholders. However, besides giving evaluators higher authority over different stakeholders in value judgment, Scriven fails to provide a solution for eliminating evaluators' personal biases. The metaevaluation he proposes is a good attempt, but it is still highly subjective and requires years of experience and expertise for an evaluator to make an unbiased judgment. For the novice evaluator, the decision of whose needs should be considered and which merit should take higher priority can still be very arbitrary. Moreover, a completely goal-free evaluation is highly unfeasible when an evaluator is hired by clients and has an obligation to answer their specific inquiries.

2.2 Campbell

Campbell believes that evaluators should play the role of methodologist during program evaluation (Shadish, 1991, p.141). Evaluators should use scientific methodologies to design evaluative research that eliminates biases and establishes a causal inference about a program and its hypothesized effects. The methodologist role advocated by Campbell requires evaluators to employ a strong research design, such as a randomized experiment or a good quasi-experiment, to determine the causal effectiveness of the program (Shadish, 1991, p.129). An evaluator should also distance him- or herself from the program stakeholders and work independently to find out the facts about the program. As for the dissemination of evaluation findings, an evaluator should "write honest reports for peers even if they cannot do so for funders or the public" (Shadish, 1991, p.162). Last but not least, it is also the obligation of evaluators to play an active role in scrutinizing, replicating, and debating the evaluation results.
Campbell’s emphasis on methods of measuring the program outcome makes him less concerned about assigning values to the program or facilitating the use of evaluation. As a result, he believes an evaluator is not responsible for doing the following:
• An evaluator is not obligated to assign value to the program being evaluated. Valuing of evaluation results should be left to the political process, not researchers. (Shadish, 1991, p.160).
• An evaluator shouldn’t promote use of her evaluation results actively “since this detracts from the credibility of the more factlike findings.” (Shadish, 1991, p.162).
• It is up to policy makers and stakeholders to decide how to interpret, disseminate, and use the evaluation results.
• An evaluator is not obligated to generate a different or modified program worth testing. Her job is simply testing the efficacy of existing programs.
• An evaluator should avoid evaluating institutions, social organizations, or persons due to the almost inevitable corruption pressure (Campbell, 1984, p.41).

Strength and Weakness of Campbell's Position: The methodologist role Campbell assigns to evaluators is echoed in the proposal for "scientifically based evaluation" advocated by the Department of Education. The role of methodologist as defined by Campbell focuses on the internal validity of the causal inference while being less concerned with prescribing values and the utility of the evaluation findings, and is therefore quite suitable for an external evaluation of program outcomes. Such a role will also greatly enhance the scientific nature of evaluation as a profession. Nevertheless, the weaknesses of this role are also quite obvious. First of all, it is hard to distinguish evaluation from other social science research if one sees an evaluator merely as a research methodologist; apparently, not every social scientist can do a good evaluation. Secondly, a rigorous experimental design is preferable but not always feasible: the cost and time of a randomized controlled experiment, as well as its intrusion into the program, might result in fewer evaluations being done due to reluctance from program administrators. Last but not least, the methodologist role restricts evaluators to studying only the outcome of the program while missing other key information, such as how the program is implemented or which elements of the program work and which do not. As a result, such an evaluator cannot give advice about how to improve the program or adapt it to fit other contexts.

2.3 Stake

Stake believes an evaluator should play a facilitator role during the evaluation. The evaluator should assist different stakeholders to "discover ideas, answers, and solutions within their own mind" by conducting
responsive evaluation (Stake & Trumbull, 1982, p.1). According to Stake, the responsibilities of an evaluator include:
• Identifying the stakeholders for whom the evaluation will be used: The evaluator should have a good sense of whom he is working for and their concerns. (Stake, 1975, as cited in Shadish 1991, p.273). Minority stakeholder groups should also be included to ensure justice and fairness.
• Spending more time observing the program and providing accurate portrayals of the program using case studies. Because case studies reflect the complexities of the reality, they help readers to form their own opinions and judgments about the case and they can be “useful in theory building”. (Stake, 1978, as cited in Shadish 1991, p.289)
• Conducting responsive evaluation, which allows evaluation questions and methods to emerge from observing the program. In this approach, evaluators orient the evaluation directly to program activities rather than to the program goals and respond promptly to audience information requests.
• Presenting his or her evaluation findings in the "natural ways in which people assimilate information and arrive at understandings" so that the writing can reach maximal comprehensibility (Stake, 1980, p.83).

Stake does not believe an evaluator should make a summative value judgment, since there is "no single true value" for all the stakeholders of a program (Stake, 1975, as cited in Shadish, 1991, p.274). As a result, evaluators should not blindly accept state and federal standards and impose treatments on local programs, since such standards are not pluralistic and might not be in the best interest of local people (Shadish, 1991, p.279). Stake also believes the responsibility for synthesizing and interpreting case studies lies with the readers rather than the evaluators, and it is up to the readers to resolve any conflicting arguments (Shadish, 1991, p.293).

Strength and Weakness of Stake's Position: The facilitator role for an evaluator, as suggested by Stake, has two major strengths. First, it indicates the shift of interest among evaluators from giving a summative judgment, whether a value judgment or an effect judgment, to generating useful information that can be used to improve the program. Secondly, it justifies new ways to conduct an evaluation (e.g., responsive evaluation, case study) and to report its findings (e.g., narrative portrayal). However, Stake fails to take into account clients' expectations about the proper role for an evaluator. Will clients accept the case study as the only approach to investigation? Will clients allow evaluators to start evaluations without preordinate questions? Is it appropriate for an evaluator to completely ignore state or federal standards when evaluating local programs? All these doubts regarding the feasibility and validity of case studies and responsive evaluation will also undermine the social acceptance of the evaluator role proposed by Stake.
2.4 Weiss

Weiss emphasizes the evaluator's special role in promoting the use of his or her evaluation results, especially in the policy-making process. She is frustrated by the fact that "evaluation results have generally not exerted significant influence on program decisions", and she argues that evaluation should start out with use in mind; evaluators should not leave the use of evaluation to the natural processes of dissemination and application (Weiss, 1972, as cited in Shadish, 1991, p.182-183). Weiss claims that evaluation "should be continuing education for program managers, planners and policy makers" (Weiss, 1988, p.18). As a result, she sees the role of the evaluator more as an educator, who conducts evaluation not to give an explicit solution to a social problem but to provide useful information to its potential users, policy-makers in particular. She urges evaluators to look beyond the instrumental use of evaluation results and conduct "enlightenment" research that "provides evidence that can be used by men and women of judgment in their efforts to research solutions" (Weiss, 1978, p.76) so as to maximize the utility of evaluation results. By doing evaluation this way, an evaluator should:
• Assess the likelihood that evaluation results might be used. (Shadish, 1991, p.198)
• Ask questions that can “provide an intellectual setting of concepts, propositions, orientations, and empirical generalizations” for policy making. (Weiss, 1978, as cited in Shadish 1991, p.202)
• Use well designed qualitative and quantitative methods to conduct evaluation study with emphasis not only on the immediate outcome of a program, but also on the inputs, implementation and long-term outcome of the program. (Shadish, 1991, p.205)
• Draw policy implications from evaluation research by compiling separate summaries for multiple stakeholders, containing the knowledge and information of most interest to them, and make recommendations for future programs from the evaluation results (Shadish, 1991, p.205-206).

Strength and Weakness of Weiss' Position: Weiss further differentiates the role of the evaluator from that of a researcher by addressing the complex political context that besets social programs. She warns evaluators against political naivety and urges them to do evaluation that can be used in policy-making, in the form of "enlightenment" rather than "instrumental use". The educator role she assigns to evaluators reflects her pragmatic view of evaluation and suggests a new mode in which evaluation can be used. However, the role of evaluator proposed by Weiss has some intrinsic flaws. First, such a role, ironically, fails to take into account the variety of contexts. For instance, the decision of a state or federal government to hire an evaluator is often made not for the purpose of "being educated" but to get concrete data regarding the program's effects; the proposal to conduct "scientifically based evaluation" made by the Department of Education is a good example. As a result, an evaluator who uses case studies to describe program inputs, implementation, and long-term effects might not be appreciated by policy-makers in this context. Secondly, her emphasis on providing information to policy-makers poses the danger of evaluators becoming the servants of that particular stakeholder group. What should be the role of an evaluator when the interests of different stakeholder groups conflict with each other, and speaking for the underrepresented group might limit the use of evaluation results in the policy-making process?

2.5 Rossi

Rossi did not give an explicit definition of the role of evaluators.
Rather, the roles an evaluator plays may vary according to the stage of the evaluation. For example, in the program conceptualization stage, an evaluator sometimes takes the role of a social scientist, incorporating social science theories into the development of an intervention model (Shadish, 1991, p.389-391). In the program implementation stage, an evaluator works as a program administrator, making sure the program is implemented as expected so as to "rule out faulty implementation as a culprit in poor program outcome" (Shadish, 1991, p.381); the operational data collected this way can also be useful for future dissemination of the program. When determining program utility, an evaluator takes the roles of a methodologist and a project manager, selecting and applying appropriate research methods to assess the impact of the program intervention and conducting efficiency analyses of the program, such as cost-benefit and cost-effectiveness analysis (Rossi & Freeman, 1985, p.327-328).

The type of social program will also affect the roles an evaluator plays during the evaluation. Rossi categorized social programs into three types: innovative, established, and fine-tuning programs. When evaluating innovative programs, much emphasis is given to the conceptualization of the program (Shadish, 1991, p.404). An evaluator's responsibilities will include setting program objectives and constructing an impact model linking program objectives and activities, based not only on the stakeholders' views but also on the results of needs assessment and social science theories. However, conceptualization is rarely the focus of evaluation for established programs, since their conceptual frameworks already exist and are less likely to change (Shadish, 1991, p.404). Instead, an evaluator takes a more summative approach, and much of his or her responsibility falls into judging the program's accountability.
The evaluator's role is less summative in fine-tuning programs, where the emphasis is on identifying needs for change and on formative modifications. Rossi's attempt to integrate the works of various theorists into one theoretical framework also helps shape his position on the issue of the proper role for an evaluator. Rossi appreciates the strengths of different roles