Policy Analysis and Advocacy Skill Building: Bringing It All Together - Evaluative Criteria & Evidence Gathering
Objectives:
Evaluate sources that inform policy change regarding a social issue. [EPAS 5]
Describe the underlying aspects that reveal the essence of a social issue. [EPAS 3]
Question 1
What are criteria of merit that could be used to evaluate social welfare policies and programs?
This discussion question is informed by the following EPAS Standard:
5: Engage in Policy Practice
Question 2
Describe the difference between formative and summative evaluation and discuss why both are important when developing and implementing social welfare policies and programs.
Question 3
Literature Review for Policy Brief (Obj. 7.1 and 7.2)
Description
In Topic 8, you will complete a policy brief related to one of the content areas covered throughout this course. You will use this literature review to complete your policy brief in Topic 8. Select a social issue/problem, with a clearly delineated topic, in one of the 10 social welfare content areas that is suitable for proposing a policy change. You will use this same social welfare problem in the Topic 7 and 8 assignments.
This is a two-part assignment.
Using the GCU library, conduct a literature review in which you identify, review, and list five current (published within the last 5 years) peer-reviewed scholarly sources of relevance to your topic in APA reference format.
Prepare a “statement of the issue” (300-500 words), supported with information drawn from your selected scholarly sources.
This assignment uses a rubric. Please review the rubric prior to beginning the assignment to become familiar with the expectations for successful completion.
You are not required to submit this assignment to LopesWrite.
This assignment is informed by the following EPAS Standard:
5: Engage in Policy Practice
Social Welfare Policy and Advocacy: Advancing Social Justice Through Eight Policy Sectors
Read Chapters 4 and 5 from Social Welfare Policy and Advocacy: Advancing Social Justice Through Eight Policy Sectors.
Requirements: 1,000+ words
Participatory and Inclusive Approaches to Disability Program Evaluation Sally Robinsona *, Karen R. Fisherb & Robert Strikec a Centre for Children and Young People, Southern Cross University, Lismore, New South Wales, Australia; b Social Policy Research Centre, University of New South Wales, Kensington, New South Wales, Australia; c personal Abstract Some evaluations of disability programs now apply participatory methods to include people with cognitive disability in the collection of data. However, more inclusive approaches that engage people with cognitive disability more fully in the decisions about the evaluation remain rare. We examined why this may be the case, using Weaver and Cousin’s criteria for inclusive evaluation to measure the depth of inclusion of our methods in an evaluation that we did that included people with cognitive disability. We found that the participatory methods in the design supported some of the dimensions of inclusive evaluation–diversity, depth of participation, power relations, and manageability. Relying on other people to represent the interests of people with cognitive disability in the governance, data collection, and dissemination compromised the control dimension of inclusion. Resources and commitment to build the capacity of people with cognitive disability as team members, mentors, advisers, and direct participants is required to make inclusion feasible and an expectation in disability program evaluations. Keywords: Disability Policy; Inclusive Research; Transformative Evaluation; Participatory Methods; Reflective Practice Few people who use social support programs are meaningfully engaged in the evaluation of programs relevant to their lives, including people with cognitive disability (Beresford, 2002; Mertens, 2009). Inclusive approaches to evaluation aim to engage the people who are intended to benefit from social support programs as active agents in evaluation processes with the transformative goals of improving the programs in their interests. The approaches can offer opportunities for increased breadth and quality of data, an ethical schema, a clear conceptual and methodological framework for practice, and the potential for addressing the human rights and social justice of marginalised groups (Weaver & Cousins, 2004). Accepted 1 August 2013 *Correspondence to: Sally Robinson, Centre for Children and Young People, Southern Cross University, Lismore, NSW, Australia. Email: [email protected] Australian Social Work, 2014 Vol. 67, No. 4, 495–508, http://dx.doi.org/10.1080/0312407X.2014.902979 © 2014 Australian Association of Social Workers Evaluations of disability programs in Australia make limited use of these approaches and of the substantial body of knowledge about the benefits of inclusive research, which promote the inclusion of people with disability in both research practices and processes (Bigby & Frawley, 2010; Goodley, 2004; Walmsley & Johnson, 2003). Instead, general evaluation practice is that if people are included, it is usually as program participants in consultation and data collection. There is little evidence of disability program evaluation that attempts to implement inclusive principles or to translate inclusive evaluation theory into practice. Barriers to participation are heightened for people with cognitive disability—the focus of this article. 
References to people with disability in this article are about people with cognitive disability, including people with intellectual disability, acquired brain injury, psychiatric disability, specific learning disabilities, and people on the autism spectrum (Jacobson, Azzam, & Baez, 2012). In this paper, we look at possibilities for moving from participation in data collection by people with disability to more inclusive approaches in evaluation (Mertens, 1999, 2010; Weaver & Cousins, 2004). Through analysis of work we did that used participatory evaluation methods, and our reflections on our experience, we identify opportunities and challenges in moving beyond participatory mechanisms towards more inclusive evaluation processes, and discuss some tensions and considerations of such a shift within the constraints of commissioned program evaluation. Disability Program Evaluation and the Influence of Inclusive Disability Research There is little academic literature about the experience of people with cognitive disability in evaluation, either as program participants or evaluators (Fisher & Robinson, 2010; Jacobson et al., 2012; Minkler et al., 2008). Two separate, but related sets of literature contextualise inclusive disability evaluation. The first of these is inclusive disability research, largely from the UK (Boxall, 2002; Goodley, 2004; Walmsley & Johnson, 2003). The second is the USA theory on inclusive and transformative evaluation, informing the ethics and practice of evaluators (Cousins & Whitmore, 1998; Mertens, 2009, 2010; Ryan, Greene, Lincoln, Mathison, & Mertens, 1998). These two approaches share the values of inclusive practice but come from different disciplinary starting points. Evaluation theorists distinguish evaluation practice as a specific type of research (e.g., Clarke & Dawson, 1999). As a result, disability research is broadly positioned within a continuum of participatory to emancipatory research, and evaluation within a continuum of participatory to transformative evaluation. We think there are often differences in design and practice between evaluation and research that set them apart from one another in a number of ways. For example our evaluation work is often more tightly constrained by terms of reference, goals set by government, and short timeframes. The aims and methods of our research projects have been more within our control, and the timelines more negotiable. Below we 496 Robinson et al. briefly explore both inclusive research and evaluation. We turn first to a discussion of inclusive disability research in setting expectations about the inclusion of people with cognitive disability, before introducing concepts from inclusive evaluation. Inclusive Disability Research Inclusive disability research has grown from multiple strands of the disability movement—from the social model of disability, normalisation and social role valorisation theory, and the self-advocacy movement (Barnes & Mercer, 2004; Walmsley & Johnson, 2003; Williams & Simons, 2005). People with cognitive disability have not always been well served by some of the broader inclusive research debates, which have not adequately problematised the impact of cognitive impairment to inclusion (Boxall, 2002; Stalker, 1998). Early inclusive research projects were predominantly from the UK, and many were based in life stories and the experience of self-advocacy (Atkinson, 2004; Williams, 1999). 
A second generation of research and critical reflection on inclusive research analyses some of the inherent challenges in doing research inclusively with people with cognitive disability, such as the role of theory generation, inclusive analysis of data, who benefits from the outcomes of research, and the importance of “unpacking” collegial support (Goodley, 2004; McClimens, 2008; Nierse & Abma, 2011). Bigby and Frawley (2010) provided valuable reflection and analysis of inclusive practice in their research relationship with their colleague Alan Robertson. The roles are heavily relational and rely on ongoing reflection and willingness by academic researchers to cede control and to commit resources to making the research relationship function well, as well as a long-term focus on capacity development at a range of levels. Of the authors of this paper, Sally Robinson and Karen Fisher were academic researchers who conduct evaluations and Robert Strike, who has an intellectual disability, was a coresearcher. We have worked together as researchers and evaluators since the 1990s, including commissioned evaluations about programs that include people with cognitive disability. We have applied a participatory approach to the design of these projects, within the constraints of commissioned evaluation. Sally has also worked with Robert on disability research projects over the same period, including inclusive research. During this time, Sally has used her reflections with Robert through their ongoing research relationship to inform our evaluation practice, including projects Robert does not work on. The crossover of advice and reflection from our relationships and experience in participatory and inclusive research has changed our evaluation practice, as explored in this article. More recently, we have talked about our experiences of working together on both inclusive evaluation and research in facilitated public workshops for people with and without disability (Fisher et al., 2011; O’Brien, Fisher, Robinson, Kayess, & Strike, 2011; Robinson et al., 2011). The role of people with cognitive disability in evaluation teams as coevaluators was discussed. Robert says that involvement as a coevaluator is important because from his perspective: Australian Social Work 497 . People have the right to be involved in finding out about their lives. . It changes the way that people think about people with cognitive disability. . It proves to people that you can do it, and you get the chance to do it. . People with cognitive disability have a different way of doing things—they understand the way evaluation should be put together differently. They come at it from a different angle. . People’s experience is valuable—it’s important to have lived experience. . People with disability in the program may feel more comfortable talking to someone who has the same kind of experiences in their life. . You get different information from people when someone with cognitive disability asks them. . People can understand what they are being asked, because you don’t use too many big words—it makes it easy. . More people find out about evaluation and research, and get more involved (i.e. those with and without cognitive disability). Robert’s comments are about the positioning of people with cognitive disability in key decision making, such as the way the evaluation is organised, and who does what part of the work to get the best quality information to evaluate the program. 
These points are also consistent with the findings of other coresearchers in inclusive disability research (Kramer, Kramer, Garcia-Iriarte, & Hammel, 2011; Williams, 1999; Williams & Simons, 2005). Inclusive Evaluation and the Place of Transformative Evaluation Theory In addition to these developments in inclusive disability research, parallel theory and practice changes have also emerged in evaluation. Inclusive evaluation principles are most strongly articulated in transformative evaluation concepts. Mertens (1999, 2009, 2010) described the evaluator working in the transformative paradigm as one who “consciously analyses asymmetric power relationships, seeks ways to link the results of social inquiry to action, and links the results of the inquiry to wider questions of social inequity and social justice” (1999, p. 4). The foundations of the transformative paradigm are a useful conceptual base for thinking about ethics, how evaluators understand reality, how knowledge is created, and how knowledge is obtained. At a broad level, taking an inclusive approach to evaluation involves systematic investigation of the merit, or otherwise, of programs or systems, with the aim of enabling decision making and at the same time facilitating positive social change for less advantaged groups. It includes specific mechanisms to recognise and understand cultural norms and contextual complexity, to redress power imbalances, and to support and sustain the meaningful involvement of people who traditionally have been under-represented or excluded as stakeholders in evaluation (Baur, Abma, & Widdershoven, 2009; Mertens, 2010; Weaver & Cousins, 2004). 498 Robinson et al. Participatory and Inclusive Evaluation Participatory evaluation encourages the participation of stakeholders in order to increase the quality of support in decision making, relevance, ownership, and utility (Cousins & Whitmore, 1998). Inclusive approaches engage more deeply with people (in this case with cognitive disability) in meaningful roles in evaluation, including as an evaluator, a member of a management or advisory group, adviser, or consultant. In these roles, their experience and expertise can inform the way the evaluation is designed and managed; data collection strategies and instruments are developed and implemented (e.g., plain English surveys, analysis or outputs); and data are analysed, presented and disseminated (e.g., accessible outputs and training) (Fisher & Robinson, 2010; Kramer et al., 2011). In Australia, there appears to be little guidance for evaluators seeking to conduct “participatory” or “inclusive” evaluation in disability policy, and it is not uncommon to hear of evaluations referred to as participatory because people with cognitive disability are included in the cohort of people consulted or included as respondents. The terms themselves are contested as used by evaluators, scholars, and practitioners in different contexts (Mertens, 2010; Walmsley & Johnson, 2003). In a database and policy scan of academic and grey literature about recent Australian evaluations of social support programs directed towards people with cognitive disability, we found that most frequently no methodological approach was described (rather, the range of data collection methods used in the evaluation were detailed). Where methodology was described, action research and participatory action research predominated. 
However, little evidence of inclusive methods beyond the inclusion of people with cognitive disability as respondents in data collection was found. A small number of evaluations involved people with disability on steering committees or advisory groups, but only one we could locate in this (admittedly small-scale) scan included people with cognitive disability in an advisory capacity (Milne et al., 2009), in addition to Bigby and Frawley (2010) above. This is consistent with Jacobson et al.’s (2012) analysis of the USA evaluation, which found that people with cognitive disability were more likely to be included in the latter stages of the process, and people with low support needs, rather than high needs, were more likely to be involved. Framework for Measuring Inclusivity in Evaluation Weaver and Cousins (2004) analysed a range of collaborative or inclusive approaches to evaluation and research, from which they developed a schema to measure the depth and quality of participation and engagement of stakeholders within the evaluation. They identified three primary goals for inclusive approaches to evaluation—the knowledge gained through inclusive inquiry should be useful; it should be concerned with ameliorating social inequalities; and it should be aiming to produce reliable or robust knowledge. Australian Social Work 499 To achieve these three goals for inclusive inquiry, Weaver and Cousins (2004) provided a framework for measuring the depth and quality of inclusive evaluation. The dimensions of the framework are: 1. Control of technical decision making 2. Diversity among stakeholders selected for participation 3. Power relations among participating stakeholders 4. Manageability of evaluation implementation 5. Depth of participation. This framework does not appear to have been applied to evaluation that includes people with cognitive disability. Daigneault and Jacob (2009) used three of these dimensions to assess the authority and influence of participants, diversity of participants, and extent of involvement as measures of inclusivity. Their work has been applied in participatory evaluation in other domains, such as education (Pietilainen, 2012). In this article, we used Weaver and Cousins’ framework to highlight opportunities and challenges to inclusive evaluation for people with cognitive disability. We applied the framework to a case study of a participatory evaluation Karen and Sally completed of an Australian disability program to provide an exemplar case. Resident Support Program The example we used is the evaluation of the Resident Support Program (RSP) in Queensland, Australia. We used it as an example here because it was complex, yet reliant on the perspectives of people with cognitive disability for evaluation results. RSP provides external support services to residents with disability living in private residential services (unfunded boarding houses and hostels). The support is community access, personal care, and health and wellbeing. The Government commissioned a mixed-method evaluation of the pilot program to develop cost and outcomes data to inform future development (Fisher, Abelló, Robinson, Siminski, & Chenoweth, 2005). The RSP was run by two state government departments, with eight nongovernment organisations providing the support in five locations. Stakeholders included the people using the support (residents), premises owners, advocates, service providers, and policy makers. 
Many of the residents had complex needs—of the 32 people in repeat interviews, 42% had been assessed as having intellectual or cognitive disability, 73% with psychiatric disability, and 64% had multiple disabilities. Other residents not included in these figures had not received formal diagnoses of disability. While the evaluation included people with a range of disability types, our focus in this paper included people with cognitive disability, as this is where we found the barriers to participation highest. The evaluation was conducted by a six-member university consortium over 18 months, using a mixed method participatory approach with the aim of 500 Robinson et al. understanding the implementation of the program; the services provided to residents; how residents perceived services and the impact on their quality of life, health, and wellbeing; and the impact on residential facility operators and staff, as well as other human services providers and departments. One member of the evaluation was a person with psycho-social disability, but the evaluation team did not have a person with cognitive disability due to budgetary, travel, and time constraints. We addressed this constraint by drawing on our learning from our other more inclusive projects, but this omission limited our inclusivity in important ways. The evaluation design was formative, including process, outcomes, and cost effectiveness measures. The mixed methods included interviews with stakeholders; observation of the program, meetings, and places of residence; and analysis of administrative and financial data. In addition to collecting quantitative data about service use, service users, and cost, 36 people who most recently entered the program participated in three semistructured interviews at three month intervals. This sample was selected to be large enough for a range of characteristics and to support the costeffectiveness analysis. The methods are described in full in Abelló, Fisher, Robinson, and Chenoweth (2004), although the evaluation method details are only indirectly relevant to the analysis in this article. During the design phase we incorporated inclusive goals by using participatory approaches in the plan, management, and conduct of the evaluation (Abelló et al., 2004). Our method of analysis for this article was to apply Weaver & Cousins’ inclusive evaluation dimensions of form (above) to the public outputs from the evaluation (evaluation plan; data collection instruments; baseline, interim and final reports; and journal article) and reflective discussion between the evaluators. Experience of the Dimensions of Inclusive Evaluation Control of Technical Decision Making On the spectrum from participatory to inclusive evaluation, involvement of people with cognitive disability ranges from participation in processes determined by people without disability through to engagement with the design, implementation, and sharing of outcomes. In commissioned disability policy evaluation, constraints on inclusive practice included the relationship between funder and evaluators, the terms of reference of the evaluation, and the scope for choice in evaluation methodologies. In addition, for people with cognitive disability, involvement in evaluation can risk their own interests if the evaluation framework inadequately considers social justice and human rights imperatives (Goodlad & Riddell, 2005). In the RSP evaluation, control remained largely with the funding agencies and academic evaluators. 
To increase inclusivity, input from stakeholders representing the interests of people with disability were sought at multiple points, including design, reporting, and oversight. However, most control remained with people acting on behalf of people with cognitive disability. Strategies that went part of the way to Australian Social Work 501 redressing the lack of control included acknowledging the limitation and arguing for the formation of an advisory group with membership of advocates for people with disability; discussing the evaluation process with residents whenever we visited the boarding houses; and designing the evaluation process, instruments, and data collection with the same principles that we had learned from working with coresearchers with cognitive disability on more inclusive disability research projects (Robinson, Hickson, & Strike, 2001; Robinson & Walker, 1997). Diversity among Stakeholders Selected for Participation It is well recognised that inclusive approaches are often limited to people who are skilled, resourced, and supported (Boxall, 2010; Redley & Weinberg, 2007). At the same time, some people will not contribute to evaluation—either through preference, social circumstance, or through capacity. For people with high and complex support needs due to disability, the limitations of inclusive approaches need to be acknowledged, alongside creative strategies for maximising the participation of these people in evaluation. The RSP evaluation goal on the dimension of diversity among stakeholders was to reach people using the program who would not normally have an opportunity to contribute to the evaluation. Residents with a range of disability support needs participated in three interviews at three monthly intervals about their experience of the program. The data collection with residents was more detailed than from other stakeholders. We used easy-read information and consent forms, informed by our inclusive research experience, to make understanding easier. The evaluation was reasonably inclusive on this dimension, particularly by including opportunities for observation, and strategies for diverse representation in governance. A benefit was that the methods revealed unexpected information that could be acted on through the formative evaluation, particularly about ill-treatment or poor practice. A limitation was that the contacts with residents were brief and occasional. Interviewing the residents three times helped to build rapport, as did holding interviews with diverse stakeholders in the same location to contrast multiple perspectives in the data. Power Relations among Participating Stakeholders Power relations for people with cognitive disability are often difficult to navigate. Supportive relationships that create conditions of alliance and respect for the contributions of coevaluators and program participants with cognitive disability are prerequisites (Bigby & Frawley, 2010; Walmsley & Johnson, 2003). Managing competing stakeholder interests and voices, addressing risks, and responsiveness to input are key issues for evaluators. Space precludes a more complete discussion of this important issue (addressed in more detail in Fisher & Robinson, 2010). The RSP evaluation goal on this dimension was to design and implement the evaluation in a way that acknowledged and addressed the power imbalances in the evaluation and program. The program was part of the reform of the private residential sector, 502 Robinson et al. a fraught policy context. 
In this climate, the voices of people with disability were not primary. Using formative evaluation was helpful in addressing unequal power relations. On several occasions program managers responded in a way that enabled us to demonstrate to residents that sharing their experiences could improve their services. For example, in the first round of interviews, some people said they did not like having service providers shower them at 2 p.m., and preferred them to come at the beginning of the day to be ready to go out. This type of information from interviews informed policy makers’ decisions to change the program during the evaluation in response to the feedback from people with disability (Fisher & Robinson, 2010). However, little opportunity, and some risk, remained for residents (the stakeholders with the least power) to speak for themselves within decision making structures, both locally at the premises and in the evaluation committees. We attempted to address these limitations by thinking about the possible implications of our actions on residents—for example, briefing the Government and advocates about sensitive findings about conflict before the formal meetings that included multiple stakeholders, such as apparent unethical practices that needed further investigation, and implications of the evaluation findings for broader sector reform. Manageability of Evaluation Implementation Commissioned evaluation often involves tight timelines, negotiated ahead of time, with little room to manoeuvre in response to delays or unexpected findings. These are often not conducive to inclusive evaluation processes. In the RSP evaluation, manageability also included ethical practices, such as a disclosure protocol to respond to apparent abuse, neglect, and harm; responsibility to manage privacy and confidentiality; assistance to people with complaints not related to the evaluation topic; and reimbursement for their participation in the evaluation. On reflection, in several areas we focused too much on manageability from a bureaucratic or organisational point of view, at the expense of more inclusive practice. For example, we arranged follow up visits for only some participants to explain the impact of their contributions; and the summary report for participants was not available in accessible formats. Depth of Participation The RSP goal for depth of participation was that it be sufficiently deep to reveal unexpected information useful for the evaluation questions. The year-long evaluation gave us the opportunity to talk with people in real time about their experiences, keeping the discussions concrete and developing rapport through repeat visits. Observing and meeting other people living in the same facilities while interviewing residents also gave us the opportunity to understand the environmental pressures and realities more clearly. However, the design did remain unreflective about the depth of participation. Engagement in the design, higher level implementation of the project, Australian Social Work 503 and dissemination of results by coevaluators with cognitive disability would undoubtedly have enriched the depth and quality of inclusive practice. Constraints on the Goals for Inclusive Evaluation Returning to Weaver and Cousins’ (2004) framework for inclusive inquiry, we asked whether the RSP evaluation met their three related goals for inclusive evaluation practice—utility, social justice, and inclusive practice. 
First, was the knowledge gained through the evaluation useful in solving problems, making decisions, or making policy? The program changed in response to the evaluation. The program operation is now more streamlined, and is more responsive to the expressed needs and aspirations of residents. In evaluations we have conducted subsequent to this one, government agencies have been increasingly receptive to participatory approaches in evaluation, indicating a growing confidence in the approach on the part of policy makers. The second goal was that the evaluation contributed to the amelioration of social inequalities. Focusing on the voice of people with cognitive disability served to highlight problems that illustrated significant marginalisation, which motivated policy action about the program based on the evaluation findings. For participants, some personal level problems were addressed through being interviewed, such as access to services and advocacy. Howerever, most of the benefits came from their participation in the program itself, rather than the evaluation. The social impact of the evaluation was limited to changes to the program, while the policy and social context continued to disadvantage the residents. Weaver and Cousins’ (2004) final goal of inclusive evaluation was to contribute to the production of robust knowledge or to revealing underlying social phenomena from the perspective of the participants. On measurement of participation, the evaluation measures reasonably well. However, as an inclusive approach to evaluation with a goal of producing new knowledge from the perspective of lived experience, it is clear that partnership with people with cognitive disability was missing. The main shortcomings were the limited extent to which the evaluation design, governance structures, and dissemination directly involved people with cognitive disabilities. As a result, opportunities for generating different types of knowledge by and for people with cognitive disability living in these circumstances were likely to have been missed. Ethically including people with cognitive disability in evaluation projects also raises questions about the development of capacity, if people are engaged in multilayered decision-making in evaluations. Without a strong political and social movement of people with cognitive disability, it is difficult for evaluators and for people with disability (and their allies) who are interested in programs and evaluation to influence the way in which social support evaluation is conducted (Bigby & Frawley, 2010; Walmsley & Johnson, 2003). Given the constraints of competitive tender processes in commissioned evaluations, terms of reference, government policy sensitivities, and tight timelines, tokenism is a clear risk. When academic researchers and 504 Robinson et al. coresearchers with disability conduct inclusive commissioned evaluation, practice and outcomes may change, but there is also a risk that the focus will narrow to government prioritised questions. In contrast, inclusive research can be an opportunity to address broader problems of social policy, social justice, or human rights (Finkelstein, 1999, in Walmsley & Johnson, 2003). Considering the depth of participation of people with cognitive disability in evaluation raises other capacity questions. How do people get the information they need to make informed decisions about becoming involved in evaluations, either as coevaluators or as participants? 
This is particularly important for people who are engaged with the service system, who may need to consider potential risks from involvement in evaluations that are critical of services on which they rely for support. How can they contribute to the wider discussions on centrally important issues such as power relations between evaluators with and without disability; decision making about which knowledge is valued from their perspective, and important in collecting and analysing evaluation data; and benefit and reward for coevaluators (including career paths and tenuous employment contracts)? Implications for Policy and Practice The application of the framework to this case study was a posthoc reflection on a completed evaluation. Applying the framework to develop an inclusive evaluation or during implementation would likely find different outcomes—particularly where policy makers are amenable to inclusive approaches. We anticipate the framework would also be useful for analysing the inclusivity of research as well as evaluation, and suggest that further research that continued to refine the framework would contribute to a critical understanding of inclusive evaluation and research. The analysis found that incorporating participatory methods in a formative evaluation design contributed to dimensions of inclusive evaluation, such as diversity and depth of participation, power relations, and manageability. However, relying on other people to represent the interests of people with cognitive disability in the governance, data collection, and dissemination compromised the control dimension of inclusion. The implications are that resources and commitment to build the capacity of people with cognitive disability as advisers and direct participants in evaluation processes are required to make inclusive evaluation of disability programs feasible and an expectation of programs, evaluation, and disability communities. At one level the capacity of people with cognitive disability to contribute as coevaluators needs to be supported. The transformative evaluation paradigm calls for control by stakeholders—in this case, people with cognitive disability. Capacity building requires investment to support people with cognitive disability to develop expertise in evaluative skills around design, advice on language, policy issues, and human rights. At another level the receptivity of professionals, evaluators, and commissioners of evaluations—usually government—needs to be developed so they are persuaded of the utility of inclusive evaluation for program and program Australian Social Work 505 improvement, and addressing social inequalities. This requires acknowledging the implications of a shift in control and power relationships, and commitments to resources, time, and transparency to wider audiences before, during, and after the formal evaluation process. It also has implications for evaluators and social work professionals to be more creative in our approach—to find methods, strategies, and tools that are inviting and creative, and that engage people who do not participate in standard ways. Locating people with cognitive disability who are interested in evaluation and social support programs is of course the beginning point so that developing individual skills in inclusive team-based evaluation can continue, consistent with philosophical, ethical, and methodological guidelines and developing skills as analysts and advisors. 
At the same time, social work professionals need to demonstrate to policy makers and decision makers that inclusive approaches can offer a fruitful avenue to evaluate disability programs. This article contributes to further developing robust evaluation frameworks that support and sustain the meaningful engagement of people with cognitive disability in evaluation of programs that affect their lives.
A Practitioner-Friendly Empirical Way to Evaluate Practice
Allen Rubin and Kirk von Sternberg

Social work practitioners and the agencies that employ them have long been concerned with how best to evaluate whether the interventions that they adopt are being provided appropriately or with desired outcomes. The realities of practice in everyday service provision settings, however, make it difficult to use well-controlled research designs for evaluation purposes in such settings—especially designs involving the use of control groups. The purpose of this article is to provide practitioners in those settings with a new, feasible way to evaluate practice and yield approximate empirical findings that can inform practice decisions despite the absence of a control group. The key feature of this new approach involves the use of within-group effect size benchmarks.

KEY WORDS: benchmarks; evidence-based practice; practice evaluation; within-group effect sizes

Various obstacles limit the feasibility of conducting adequately rigorous experimental or quasi-experimental outcome studies in practice settings. In those settings the priority is on service, not research. The requirement of well-controlled, methodologically rigorous outcome studies can be perceived as taking time and resources away from service provision, and the notion of withholding or delaying services to clients in need for the purpose of having a control group can be perceived as unethical. Also, social work agencies, which commonly struggle with insufficient funding, might not have the research expertise to conduct such studies or be able to afford securing that expertise. Moreover, the potential rewards to the agency for approving of or implementing such studies might not be readily apparent.
BACKGROUND
The advent of the evidence-based practice (EBP) movement has brought pressure on service providers to deliver interventions that have strong research support—with managed care organizations and other funders insisting that the interventions they support be what they call “evidence based.” Concurrent with the EBP movement, systematic reviews and meta-analyses that synthesize the many randomized controlled trials (RCTs) emerged providing strong research support for the effectiveness of various interventions. This new line of research, coupled with the growth of the Internet, increased the ease with which practitioners could access the literature providing strong research support.
As various practice settings began to implement these “evidence-based” interventions, however, some studies showed that when interventions found to be effective when delivered to the treatment groups in the RCTs were delivered by practitioners in everyday, non-research practice settings, the outcomes were disappointing (Embry & Biglan, 2008; Weisz, Ugueto, Cheron, & Herren, 2013). Reasons postulated for those disappointing findings involved the “disparity between the relatively ideal service
provision conditions that prevail in RCTs and the more problematic service provision characteristics that typically prevail in everyday practice settings”
(Rubin, Parrish, & Washburn, 2016, p. 1). Such postulated differences include the following: (a) the exclusion of clients with comorbid diagnoses in many RCTs, (b) better training and supervision in RCTs for providing the tested intervention, (c) larger and more diverse caseloads in everyday practice settings, (d) more client attendance issues in everyday practice settings, (e) more adherence to treatment manuals and less practitioner turnover in RCTs, and (f) better funding for service provision
in RCTs (Briere & Scott, 2013). In light of these issues, service providers—as well as their funding sources—cannot be certain that they are being effective just because they are providing an intervention that has strong RCT support for its effectiveness. Such support is no guarantee that practitioners are implementing the intervention
with sufficient fidelity or that it will be as effective with their clients as it was with the clients who participated in the RCTs. Also, in light of the disparities mentioned earlier, some have recommended the need to modify research-supported interventions to make them more compatible with the service provision conditions and client characteristics in particular practice settings (Galinsky, Fraser, Day, & Richman, 2013; Sundell, Ferrer-Wreder, & Fraser, 2012). Regardless of whether practitioners are trying to provide a research-supported intervention with maximum fidelity or modifying it to adapt it to their
clients or setting, they cannot be sure that what they are providing is as effective—or even nearly as effective—as the intervention supported in the RCTs.
In fact, they cannot be sure that their intervention is effective at all. Consequently, it is important that they empirically evaluate outcome among clients who received the intervention. However, the feasibility constraints previously mentioned might bar using control groups for such evaluations.
BENCHMARKING
A new approach has emerged that can enable practitioners (and agencies) to evaluate practice outcome feasibly without the use of control groups. This new approach involves calculating a within-group effect size based on pretest to posttest change among service recipients and comparing that effect size to benchmarks that aggregate the within-group effect sizes derived from published RCTs. Effect sizes are statistics that reflect the magnitude of an intervention’s impact in a way that enables outcomes to be compared across evaluations that measure outcome in different ways. For example, if one evaluation finds that intervention A doubles the number of positive parenting behaviors observed from an average of five to 10 and intervention B improves the score on a self-report scale measuring parenting attitudes from 40 to 80, the effect size statistic provides a way to ascertain which of the two degrees of improvement is greater. Merely interpreting an improvement of 40 to be more impactful than an improvement of 5 would not make sense because
the nature of the measures used is so different. The way the effect size makes such outcomes comparable is by dividing each average difference by the standard deviation (SD) in each set of outcome data. The SD statistic depicts the amount of dispersion away from the mean in each data set. The effect sizes reported in the results of RCTs are between-group effect sizes because they divide the difference in mean outcome between the treatment group and the control group by the pooled SD of the two groups. Thus, between-group effect sizes depict how many SDs a supported intervention’s mean outcome is better than a control group’s mean outcome. For example, if the mean
Beck Depression Inventory (BDI) posttest score of the treatment group is 15, the corresponding control group mean is 20, and the pooled SD is 5, then the between-group effect size (known as Cohen’s d) would be 20 minus 15 divided by 5, which equals
1. That finding would indicate that the experimental group’s mean is one SD better than the control group’s mean (better because the lower the BDI score, the less severe the depression). In addition to reporting between-group effect sizes, most RCTs report means and SDs separately for treatment and control groups. From those data,
within-group effect sizes can be calculated. Within-group effect sizes look at the pretest and the posttest mean separately for each group and divide the difference by the pretest SD of that group. For example, if the treatment group’s mean BDI score improves from 20 to 15, and its pretest SD is 5, its within-group effect size would be 1.0. And if the control group’s mean BDI score improves slightly from 20 to 18, and its pretest SD is 5, its within-group effect size would be 2 divided by 5, equaling 0.40. Four benchmarking studies have been published that report average within-group effect sizes based on the data in the tables of the published RCTs supporting various interventions (Rubin et al., 2016; Rubin & Yu, 2015; Washburn & Rubin, 2016). Three of the studies have provided tables of within-group effect size benchmarks on research-supported interventions for adults, including cognitive–behavioral therapy (CBT) for depression (Rubin & Yu, 2015); problem-solving therapy (PST) for depression (Rubin & Yu, 2015); and
trauma-focused interventions, such as prolonged exposure therapy (PET), cognitive processing therapy (CPT), and eye movement desensitization and
reprocessing (EMDR) (Rubin et al., 2016). Practitioners and agencies providing one of the interventions for which within-group effect size
benchmarks are reported, but are unable to use a control group to evaluate their effectiveness, can calculate the within-group effect size of their treated
clients and compare it with the corresponding published benchmark. For example, if they provide PST to a number of clients experiencing depression and calculate a within-group effect size of, say, 0.80, and then compare that result with the 0.91 benchmark for PST for depression reported by Rubin and Yu (2015), they will see that their within-group effect size is much closer to the 0.91 aggregate within-group effect size of the treated clients in the RCTs than it is to the 0.32 aggregate within-group effect size of the RCT wait-list control groups. Such a comparison would support the notion that the intervention was effective. Although they have not used their own control group, it does not seem reasonable to suppose that had they done so their control group’s degree of improvement would have been much greater than the average degree of improvement of the control groups in the various RCTs. This would be particularly so if there was relatively little dispersion away from the mean in those RCTs. Although conducting a rigorous controlled outcome study may be the ideal way to evaluate treatment effectiveness, if doing so is not feasible (which is usually the case in non-research service provision settings), this benchmarking approach provides a reasonable alternative. Although concerns about whether an intervention is being implemented appropriately are more typically addressed in research studies through a separate assessment of intervention fidelity—such as
by having experts in the intervention rate videotapes of treatment sessions—it seems reasonable to suppose that if practitioners are achieving outcomes that approximate the outcomes achieved in the RCTs, they are probably implementing the intervention adequately. It bears noting that within-group effect sizes tend to be larger than between-group effect sizes because wait-list control group clients on average tend to change over time due to factors such as contemporaneous events (referred to as history in research methods texts), the passage of time, regression to the mean, and so on (Rubin & Babbie, 2017). Thus, with between-group effect sizes some of the treatment group improvement is canceled out by the control group improvement. For example, between-group effect sizes that approximate 0.50 are commonly found for effective interventions (Rubin & Babbie, 2017). In contrast, the aggregate within-group effect sizes in the published benchmarking studies tend to be above 1.0 for the treated
clients. For untreated wait-list control groups, the within-group effect sizes for depression in the three aforementioned benchmarking studies, as displayed in Table 1, are 0.38 for CBT; 0.32 for PST; and 0.24 for PET, CPT, and EMDR. Practice settings that provide one of those research-supported interventions can compare their mean within-group effect size with the benchmarks reported in the benchmarking study that corresponds to the intervention they provided. They could see whether their effect size for treated clients comes much closer to the treatment group effect size than to that of the control group. The aggregate within-group effect sizes for the RCT treatment groups are displayed in Table 2. But what if the practice setting provides a different intervention when treating depression symptoms? The data in Table 1 reflect how much improvement control groups tend to make on average for depression symptoms. Traditionally, without a control group we cannot infer that an intervention is the cause of improvement from pretest to posttest
because we cannot rule out such alternate explanations as history, regression to the mean, or the passage of time as the real cause.

Table 1: Control Group Aggregate Within-Group Effect Size Estimates Regarding Improvement in Self-Reported Depression Symptoms, by Type of Treatment Evaluated

Statistic               CBT    PST    PET, CPT, EMDR
Aggregate effect size   0.38   0.32   0.24
Standard error          0.02   0.03   0.35
Number of studies       61     7      9

Notes: CBT = cognitive–behavioral therapy, PST = problem-solving therapy, PET = prolonged exposure therapy, CPT = cognitive processing therapy, EMDR = eye movement desensitization and reprocessing.

Table 2: Treatment Group Aggregate Within-Group Effect Size Estimates Regarding Improvement in Self-Reported Depression Symptoms, by Type of Treatment Evaluated

Statistic               CBT    PST    PET, CPT, EMDR
Aggregate effect size   1.19   0.91   1.19
Standard error          0.03   0.03   0.35
Number of studies       76     7      9

Notes: CBT = cognitive–behavioral therapy, PST = problem-solving therapy, PET = prolonged exposure therapy, CPT = cognitive processing therapy, EMDR = eye movement desensitization and reprocessing.

However, control
group benchmarks suggest how much improvement tends to occur on average due to those alternate explanations. It stands to reason that if a practice setting were providing an intervention in which depressive symptoms were an outcome variable, and if the setting could have a control group, that control group’s degree of improvement probably would not be markedly better than the highest figure in Table 1, which is 0.38. Thus, even if the intervention
were not yet considered to have strong research support—that is, if it were different than any of the interventions in the benchmarking studies—obtaining a within-group effect size much greater than 0.38 would lend meaningful empirical support consistent
with the notion that the intervention was effective. The support for such a supposition would be further enhanced if the practice setting’s within-group effect size for its treated clients approximated the benchmarks for the treated clients in the RCTs, as displayed in Table 2, which are slightly above or
slightly below 1.0. Granted, in an ideal world all practice settings would conduct rigorous outcome studies involving control groups. This benchmarking approach, therefore, is not proposed as a better alternative to conducting controlled outcome studies; instead, it is proposed as a useful new tool that gives better empirical evidence than has heretofore been available in practice settings regarding the plausibility of the notion that clinicians or agencies are being effective.
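To make the arithmetic described in this section concrete, here is a minimal Python sketch using the worked BDI numbers above and the PST benchmarks from Rubin and Yu (2015) as cited; the function names and the 0.80 practice-setting effect size are illustrative assumptions, not part of the original article:

```python
# Sketch of the effect-size arithmetic described above (illustrative only).

def between_group_effect_size(treatment_mean, control_mean, pooled_sd):
    # Cohen's d: difference between group means divided by the pooled SD.
    # Subtraction is ordered so a positive value means the treatment group
    # did better on the BDI (lower scores = less severe depression).
    return (control_mean - treatment_mean) / pooled_sd

def within_group_effect_size(pretest_mean, posttest_mean, pretest_sd):
    # Pretest-to-posttest improvement divided by the pretest SD.
    return (pretest_mean - posttest_mean) / pretest_sd

# BDI example from the text: treatment posttest mean 15, control mean 20, pooled SD 5.
print(between_group_effect_size(15, 20, 5))   # 1.0

# Within-group examples: treatment improves 20 -> 15, control 20 -> 18, pretest SD 5.
print(within_group_effect_size(20, 15, 5))    # 1.0
print(within_group_effect_size(20, 18, 5))    # 0.4

# Benchmarking comparison for a practice setting providing PST for depression.
practice_es = 0.80        # hypothetical effect size calculated for the setting's treated clients
treated_benchmark = 0.91  # PST treatment-group benchmark (Rubin & Yu, 2015)
control_benchmark = 0.32  # PST wait-list control-group benchmark (Rubin & Yu, 2015)

closer_to_treatment = abs(practice_es - treated_benchmark) < abs(practice_es - control_benchmark)
print(closer_to_treatment)  # True: consistent with the notion that the intervention was effective
```

Comparing distances to the two benchmarks, as in the last lines, is only a rough way of expressing the informal judgment the authors describe; it is not a statistical test.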
SIMPLIFYING THE CALCULATIONS
Despite the likely feasibility of the benchmarking approach, practitioners in non-research settings understandably might be daunted by the need to calculate means and SDs. One way to surmount that obstacle would be to obtain the assistance of someone with statistical expertise—perhaps a nearby faculty member interested in building links to the practice community. If that is not feasible, practitioners can use an online calculator (online calculators can be found at https://www.easycalculation.com/statistics/standard-deviation.php; http://www.miniwebtool.com/standard-deviation-calculator/; and http://www.alcula.com/calculators/statistics/standard-deviation/) to calculate the means and SDs. All they need to do is enter the pretest scores in the box provided at the calculator Web site and then click on the prompt to calculate. The mean and SD will appear instantly. After getting those two statistics for the pretest scores, they would enter the posttest scores and click to get the posttest mean. (The posttest SD is not relevant for the within-group effect size calculation.) Next, they would divide the difference between the pretest and posttest means by the pretest SD. For example, if at pretest the mean BDI score were 30 with an SD of 10, and at posttest the mean BDI score
were 18, the within-group effect size would be 30 minus 18 divided by 10, or 1.2. That would compare quite favorably with the aggregate within-group effect sizes of the treatment groups displayed in the published benchmark articles. However, if the posttest mean were 26, the within-group effect size would be 30 minus 26 divided by 10, or 0.40—
which is nearer to the control group benchmarks in those articles—suggesting the need perhaps to provide a different intervention or to improve the way they were providing the current one. In light of the studies supporting the impact of relationship factors on practice effectiveness (Nathan, 2004; Wampold, 2015), for example, they might not want to jump too hastily to the conclusion that the intervention needs to be replaced. Perhaps they just need to improve the practitioner–client relationship when providing it. The benchmarking approach probably will not indicate why practitioners are not being more effective, but it will provide useful empirical data indicating the need to consider ways to be more effective. The steps for simplifying the calculation of within-group effect sizes are summarized in Figure 1.
To be more precise, the practice setting within-group effect size should be adjusted for small sample sizes. That adjustment—which was done in the benchmarking studies—involves calculating Hedges' g, using a formula in which the effect size is multiplied by 1 minus a fraction in which 3 is the numerator and (4N) minus 9 is the denominator (Wilson, 2011). If the practice setting is unable to obtain statistical assistance, however, it might be advisable to skip this step—for three reasons. First, the adjustment typically does not make a difference sizable enough to change from a favorable to an unfavorable comparison with the published benchmarks. For example, a within-group effect size of 1.0 based on 20 cases would become 0.96 after the adjustment. Second, the comparison is an approximate exercise only, and it is better to have approximate empirical data to inform practice decisions than none at all. Third, being required to perform statistical procedures that they find daunting might result in refusal to engage in this process at all.
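The small-sample adjustment described in the preceding paragraph reduces to one line of arithmetic. A minimal sketch, assuming N is the number of treated clients (the function name is ours, not the authors'):

```python
def hedges_g(effect_size, n):
    # Small-sample correction: multiply the effect size by 1 - 3 / (4N - 9) (Wilson, 2011).
    return effect_size * (1 - 3 / (4 * n - 9))

# Example from the text: an effect size of 1.0 based on 20 cases becomes about 0.96.
print(round(hedges_g(1.0, 20), 2))  # 0.96
```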
CONCLUSION AND CAVEATS
This article has presented the historical background, rationale, and procedure for calculating approximate within-group effect sizes among service recipients and comparing those effect sizes to published within-group effect size benchmarks based on the treatment groups and wait-list control groups in RCTs that have provided strong research
support for certain interventions. It has shown how practitioners can feasibly evaluate their practice effectiveness in settings that will not or cannot have a control group. The control group benchmarks displayed in this article from the published benchmarking studies were all of the wait-list variety. Those studies also report benchmarks for treatment-as-usual (TAU) control groups, but those benchmarks were not displayed here because having TAU control groups in practice settings would be even less feasible than having wait-list control groups.
For illustrative purposes, the benchmarks displayed in this article all involve depression symptoms, because depression is the only outcome variable reported by all of the extant benchmarking studies analyzed for this article. The limited degree of dispersion in the control group benchmarks suggests that even if a practitioner were treating depression with an intervention other than those reported in this article, the control group benchmarks would still be applicable. However, caution is warranted regarding the use of depression benchmarks when treating other disorders. Despite the degree of consistency among the benchmarks in Table 1, for example, the benchmarks conceivably might be quite different among wait-list participants with problems other than depression. One implication for future research, therefore, is to conduct benchmarking studies for other target problems and for other interventions that RCTs have found to be effective in treating them. In the meantime, practitioners should be encouraged to make comparisons with the data in the benchmarking study that best matches the target problems and interventions in their practice. Another caveat involves the use of the pretest SD
instead of the pooled SD as the denominator when calculating within-group effect sizes. The pooled SD is the preferred denominator when calculating between-group effect sizes and is typically used in RCT data analyses. However, published RCTs typically do not report the pooled SD within each group; therefore, it is not possible to use it when calculating within-group effect sizes based on the data reported in published RCTs. Consequently, the authors of the benchmarking studies had to use the separate pretest SDs in their calculations of the treatment group and control group within-group effect size benchmarks. To be consistent, this article has recommended using the same approach when calculating the practice setting within-group effect size. Doing so is akin to Glass's (1976) original delta effect size approach, in which the control group SD comprised the denominator.

Figure 1: Steps in Calculating a Within-Group Effect Size Regarding Improvement in Scores on the Beck Depression Inventory (BDI)
Step 1. Collect pretest and posttest scores on a measure of outcome, such as the BDI.
Step 2. Enter the pretest scores in the box at one of the online calculators.
Step 3. Suppose the pretest BDI scores for a sample of five clients were as follows: 36, 32, 28, 24, 20. The mean would be 28, and the standard deviation would be 6.32.
Step 4. Enter the posttest scores in the box at one of the online calculators.
Step 5. Suppose the posttest BDI scores for those five clients were as follows: 26, 24, 22, 20, 18. The mean would be 22.
Step 6. Subtract 22 (the posttest mean) from 28 (the pretest mean) and divide by 6.32 (the pretest standard deviation). Six divided by 6.32 equals 0.95 (the within-group effect size).
Step 7. Compare the within-group effect size of 0.95 to the benchmarks in Tables 1 and 2. Because it approximates the treatment group effect sizes in Table 2 and is several times greater than the control group effect sizes in Table 1, it would be reasonable to suggest that the intervention was being provided in a satisfactory manner, consistent with the notion that the intervention was effective.

A second statistical caveat involves the assumption of a normal distribution when calculating effect sizes. The cases in practice settings may not be distributed normally; however, this assumption typically is not addressed when between-group effect sizes are reported in published RCTs. Given the "ballpark estimate" nature of the method proposed in this article—and the emphasis on making it practitioner friendly—it would seem unreasonable to require practice settings to have normally distributed data. (An alternative would be to have them trim their scores by taking 20 percent off the top and the bottom, as suggested by Algina, Keselman, and Penfield [2005].)

Finally, there is the issue of treatment dropouts and whether to conduct an intent-to-treat (ITT) analysis. Treatment dropouts are inherent in direct practice settings. The published RCTs from which the benchmarking studies gathered their data commonly report completer analyses as well as ITT analyses. The benchmarking studies reported both types of benchmarks and found little difference between the benchmarks based on treatment completers only and those based on ITT analyses in which dropouts were incorporated into the analysis. Although ITT analyses are more conservative depictions of treatment effectiveness, it might be unrealistic—not to mention less practitioner-friendly—to ask (non-research) practice settings to conduct them. However, if practitioners would prefer to do so, they could make their comparisons with the ITT data in the published benchmarking studies.
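As a rough illustration of the trimming alternative mentioned in the normality caveat above (Algina, Keselman, & Penfield, 2005), the sketch below removes 20 percent of the cases from each end of a sorted score list before computing a mean. It is not part of the original article, and it is only a simplified rendering of that suggestion; the robust effect size procedures in Algina et al. involve additional steps not shown here.

# Illustrative sketch (not from the article): trimming 20 percent of the
# scores off the top and the bottom before computing a mean.
from statistics import mean

def trimmed(scores, proportion=0.20):
    ordered = sorted(scores)
    k = int(len(ordered) * proportion)      # number of cases cut from each end
    return ordered[k:len(ordered) - k] if k else ordered

pretest_scores = [36, 32, 28, 24, 20]       # hypothetical BDI scores from Figure 1
print(trimmed(pretest_scores))              # [24, 28, 32] after cutting one case from each end
print(mean(trimmed(pretest_scores)))        # 28 for these symmetric hypothetical scores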
REFERENCES
Algina, J., Keselman, H. J., & Penfield, R. D. (2005). An alternative to Cohen's standardized mean difference effect size: A robust parameter and confidence interval in the two independent groups case. Psychological Methods, 10, 317–328.
Briere, J. N., & Scott, C. (2013). Principles of trauma therapy: A guide to symptoms, evaluation, and treatment (2nd ed.). Thousand Oaks, CA: Sage Publications.
Embry, D. D., & Biglan, A. (2008). Evidence-based kernels: Fundamental units of behavioral influence. Clinical Child and Family Psychology Review, 11, 75–113.
Galinsky, M., Fraser, M. W., Day, S. H., & Richman, J. M. (2013). A primer for the design of practice manuals: Four stages of development. Research on Social Work Practice, 23, 219–228.
Glass, G. V. (1976). Primary, secondary, and meta-analysis of research. Educational Researcher, 5, 3–8.
Nathan, P. E. (2004). The clinical utility of therapy research: Bridging the gap between present and future. In A. R. Roberts & K. Yeager (Eds.), Evidence-based practice manual: Research and outcome measures in health and human services (pp. 949–960). New York: Oxford University Press.
Rubin, A., & Babbie, E. (2017). Research methods for social work (9th ed.). Boston: Cengage.
Rubin, A., Parrish, D. E., & Washburn, M. (2016). Outcome benchmarks for adaptations of research-supported treatments for adult traumatic stress. Research on Social Work Practice, 26(3), 243–259.
Rubin, A., & Yu, M. (2015). Within-group effect-size benchmarks for problem-solving therapy for depression in adults. Research on Social Work Practice. Advance access publication. Retrieved from http://journals.sagepub.com/doi/abs/10.1177/1049731515592477
Sundell, K., Ferrer-Wreder, L. R., & Fraser, M. W. (2012). Going global: A model for evaluating empirically supported family-based interventions in new contexts. Evaluation & the Health Professions, 37, 203–230.
Wampold, B. (2015). How important are the common factors in psychotherapy? An update. World Psychiatry, 14, 270–277.
Washburn, M., & Rubin, A. (2016). Within-group effect-size benchmarks for outpatient dialectical behavioral therapy in treating adults with borderline personality disorder. Research on Social Work Practice. Advance access publication. Retrieved from http://journals.sagepub.com/doi/abs/10.1177/1049731516659363
Weisz, J. R., Ugueto, A. M., Cheron, D. M., & Herren, J. (2013). Evidence-based youth psychotherapy in the mental health ecosystem. Journal of Clinical Child and Adolescent Psychology, 42, 274–286.
Wilson, D. B. (2011). Calculating effect-sizes. Retrieved from http://www.campbellcollaboration.org/artman2/uploads/1/2_D_Wilson__Calculating_ES.pdf
Allen Rubin, PhD, is professor, Graduate College of Social Work, University of Houston, 110HA Social Work Building-Room 342, Houston, TX 77024-4013; e-mail: [email protected]. Kirk von Sternberg, PhD, is associate professor, School of Social Work, University of Texas at Austin.
Original manuscript received January 8, 2017
Final revision received February 7, 2017
Editorial decision February 13, 2017
Accepted February 13, 2017
Advance Access Publication August 22, 2017
Literature Review (20 points)
Criteria Description: Literature Review
5. Target (20 points): Literature Review is in-depth and insightful. All sources are authorities and current.
4. Acceptable (17.4 points): Literature Review is well stated but some nuances may have been overlooked. Most sources used are authorities and current.
3. Approaching (15.8 points): Literature Review is informative but lacks significant insights. Sources used are credible.
2. Insufficient (14.8 points): Literature Review is lacking detail and clarity. Some sources have questionable credibility.
1. Unsatisfactory (0 points): Literature Review is not included.

Statement of the Issue (20 points)
Criteria Description: Statement of the Issue
5. Target (20 points): Statement of the issue is comprehensive and contains the essence of the assignment.
4. Acceptable (17.4 points): Statement of the issue is descriptive and reflective of the arguments and appropriate to the purpose.
3. Approaching (15.8 points): Statement of the issue is apparent and appropriate to purpose.
2. Insufficient (14.8 points): Statement of the issue is insufficiently developed or vague. Purpose is not clear.
1. Unsatisfactory (0 points): Statement of the issue is missing.

Mechanics of Writing (5 points)
Criteria Description: Includes spelling, capitalization, punctuation, grammar, language use, sentence structure, etc.
5. Target (5 points): No mechanical errors are present. Skilled control of language choice and sentence structure are used throughout.
4. Acceptable (4.35 points): Few mechanical errors are present. Suitable language choice and sentence structure are used.
3. Approaching (3.95 points): Occasional mechanical errors are present. Language choice is generally appropriate. Varied sentence structure is attempted.
2. Insufficient (3.7 points): Frequent and repetitive mechanical errors are present. Inconsistencies in language choice or sentence structure are recurrent.
1. Unsatisfactory (0 points): Errors in grammar or syntax are pervasive and impede meaning. Incorrect language choice or sentence structure errors are found throughout.

Format/Documentation (5 points)
Criteria Description: Uses appropriate style, such as APA, MLA, etc., for college, subject, and level; documents sources using citations, footnotes, references, bibliography, etc., appropriate to assignment and discipline.
5. Target (5 points): No errors in formatting or documentation are present. Selectivity in the use of direct quotations and synthesis of sources is demonstrated.
4. Acceptable (4.35 points): Appropriate format and documentation are used with only minor errors.
3. Approaching (3.95 points): Appropriate format and documentation are used, although there are some obvious errors.
2. Insufficient (3.7 points): Appropriate format is attempted, but some elements are missing. Frequent errors in documentation of sources are evident.
1. Unsatisfactory (0 points): Appropriate format is not used. No documentation of sources is provided.