Evaluating Effectiveness of a Volunteer Program
Hypothetically, you have now been working as the Volunteer Administrator for Difference Today Nonprofit for 24 months. The organization had been struggling with recruiting, retaining, and coaching volunteers, and your proposed program was implemented to help address these problems. The program is now in need of an evaluation to measure its effectiveness.
- Refer to the Four Fundamental Questions (Connors, 2011).
- State and answer the four questions in your initial posting by Day 3.
- Respond to two other students by challenging one of their four answers by Day 7.
Connors, T. D. (Ed.). (2011). Wiley Nonprofit Law, Finance and Management Series: The Volunteer Management Handbook: Leadership Strategies for Success (2nd ed.). Hoboken, NJ: John Wiley & Sons. ISBN-13: 9780470604533.
CHAPTER 16
Evaluating Impact of Volunteer Programs
R. Dale Safrit, EdD North Carolina State University
This chapter introduces and defines the closely related concepts of evaluation, impact, and accountability, especially as applied to volunteer programs. The author discusses four fundamental questions that guide the development and implementation of an impact evaluation and subsequent accountability of a volunteer program.
Evaluation in Volunteer Programs
The concept of evaluation as applied to volunteer programs is not new. As early as 1968, Creech suggested a set of criteria for evaluating a volunteer program and concluded, “Evaluation, then, includes listening to our critics, to the people around us, to experts, to scientists, to volunteers so that we may get the whole truth [about our programs]” (p. 2). This approach to evaluation was well ahead of its time: until the past decade, authors within our profession either addressed the evaluation of holistic volunteer programs only superficially (e.g., Brudney, 1999; Naylor, 1976; O’Connell, 1976; Stenzel & Feeney, 1968; Wilson, 1979) or not at all (e.g., Naylor, 1973; Wilson, 1981). Even in the first edition of this text, fewer than four total pages were dedicated to the topic of evaluation, scattered within chapters dedicated to other traditional volunteer program management topics, including recruiting and retaining volunteers (Bradner, 1995), training volunteers (Lulewicz, 1995), supervising volunteers (Brudney, 1995; Stepputat, 1995), improving paid staff and volunteer relations (Macduff, 1995), monitoring the operations of employee volunteer programs (Seel, 1995), involving board members (Graff, 1995), and determining a volunteer program’s success (Stepputat, 1995).
However, for volunteer programs operating in contemporary society, evaluation is a critical, if not the most critical, component of managing an overall volunteer program and subsequently documenting the impacts and ultimate value of the program to the
Connors, T. D. (Ed.). (2011). The volunteer management handbook: Leadership strategies for success. John Wiley & Sons.
target clientele it is designed to serve, as well as the larger society in which it operates. As early as 1982, Austin et al. concluded that “Only through evaluation can [nonprofit] agencies make their programs credible to funding agencies and government authorities” (p. 10). In 1994, Korngold and Voudouris suggested the evaluation of impact on the larger community as one phase of evaluating an employee volunteer program.
The critical role of volunteer program impact evaluation in holistic volunteer management became very apparent during the final decade of the twentieth century, and continues today (Council for Certification in Volunteer Administration, 2008; Merrill & Safrit, 2000; Safrit & Schmiesing, 2005; Safrit, Schmiesing, Gliem, & Gliem, 2005). While most volunteer managers understand and believe in evaluation, they have most often focused their efforts on evaluating the performance of individual volunteers and their contributions to the total program and/or organization. In this sense, evaluation has served an important managerial function in human resource development, the results of which are usually known only to the volunteer and volunteer manager. As Morley, Vinson, and Hatry (2001) noted:
Nonprofit organizations are more often familiar with monitoring and reporting such information as: the number of clients served; the quantity of services, programs, or activities provided; the number of volunteers or volunteer hours contributed; and the amount of donations received. These are important data, but they do not help nonprofit managers or constituents understand how well they are helping their clients. (p. 5)
However, as nonprofit organizations began to face simultaneous situations of stagnant or decreasing public funding and increasing demand for stronger accountability of how limited funds were being used, volunteer program impact evaluation moved from a human resource management context to an organizational development and survival context. The volunteer administration profession began to recognize the shifting attitudes toward evaluation, and in the early 1980s the former Association for Volunteer Administration (AVA) defined a new competency fundamental to the profession as “the ability to monitor and evaluate total program results . . . [and] demonstrate the ability to document program results” (as cited in Fisher & Cole, 1993, pp. 187, 188). Administrators and managers of volunteer-based programs were increasingly called on to measure, document, and dollarize the impact of their programs on the clientele served, and not just the performance of individual volunteers and the activities they contribute (Safrit & Schmiesing, 2002; Safrit, Schmiesing, King, Villard, & Wells, 2003; Schmiesing & Safrit, 2007). This intensive demand for greater accountability initially arose from program funders (public and private) but quickly escalated to include government, the taxpaying public, and even the volunteers themselves. As early as 1993, Taylor and Sumariwalla noted:
Increasing competition for tax as well as contributed dollars and scarce resources prompt donors and funders to ask once again: What good did the donation produce? What difference did the foundation grant or United Way allocation make in the lives of those affected by the service funded? (p. 95)
According to Safrit (2010, p. 316), “The pressure on nonprofit organizations to evaluate the impact of volunteer-based programs has not abated during the first
decade of the new [21st] century, and if anything has grown stronger.” With regard to overall volunteer management, evaluation continues to play an important role in the human resource management of individual volunteers; most volunteer managers are very familiar and comfortable with this aspect of evaluation in volunteer programs. However, today’s volunteer managers are less knowledgeable, skilled, and comfortable with the concept of impact evaluation, which treats such measurement as only the first (if important) step in measuring, documenting, and communicating the effects of a volunteer program: immediately on the target clientele served by the organization’s volunteers, and ultimately on the surrounding community.
A Symbiotic Relationship: Evaluation, Impact, and Accountability
In the overwhelming majority of both nonformal workshops and formal courses I have taught, participants will inevitably use three terms almost interchangeably in our discussions of evaluating volunteer programs. The three concepts are symbiotically linked and synergistically critical to contemporary volunteer programs, yet they are not synonymous. The three terms are evaluation, impact, and accountability.
Evaluation
Very simply stated, evaluation means measurement. We “evaluate” in all aspects of our daily lives, whether it involves measuring (evaluating) the outside temperature to determine if we need to wear a coat to work, measuring (evaluating) the current balance in our checking account to see if we can afford to buy a new piece of technology, or measuring (evaluating) the fiscal climate in our workplace to decide if it is a good time to ask our supervisor for a salary increase. However, for volunteer programs, “evaluation involves measuring a targeted program’s inputs, processes, and outcomes so as to assess the program’s efficiency of operations and/or effectiveness in impacting the program’s targeted clientele group” (Safrit, 2010, p. 318).
The dual focus of this definition on a volunteer program’s efficiency and effectiveness is supported by contemporary evaluation literature. Daponte (2008) defined evaluation as being “done to examine whether a program or policy causes a change; assists with continuous programmatic improvement and introspection” (p. 157). Royse, Thyer, and Padgett (2010) focused on evaluation as “a form of appraisal…that examines the processes or outcomes of an organization that exists to fulfill some social need” (p. 12). These definitions each recognize the important role of evaluation in monitoring the operational aspects of a volunteer program (i.e., inputs and processes), yet ultimately emphasize the program’s purpose of engaging volunteers to help bring about positive changes in the lives of the program’s targeted audience (i.e., outcomes). These positive changes are called impacts.
Impact
Contrary to popular belief, volunteer programs do not exist for the primary purpose of engaging volunteers merely to give the volunteers something to do or for supplying an organization with unpaid staff to help expand its mission and purpose. Rather,
volunteer programs ultimately seek to bring about positive impacts in the lives of the targeted clientele the volunteers are trained to support, either directly (through direct service to individual clients) or indirectly (through direct service to the service-providing organization). The latter statement does nothing to discount or demean the critical involvement of volunteers, but instead challenges a volunteer manager to continually focus and refocus the engagement of volunteers on the ultimate mission of the sponsoring organization and the outcomes it seeks to bring about. In other words, it forces volunteer managers to identify and focus on the volunteer program’s desired impacts.
According to Safrit (2010):
Impact may be considered the ultimate effects and changes that a volunteer-based program has brought about upon those involved with the program (i.e., its stakeholders), including the program’s targeted clientele and their surrounding neighborhoods and communities, as well as the volunteer organization itself and its paid and volunteer staff. (p. 319)
This inclusionary definition of impact focuses primarily on the organization’s raison d’être, and secondarily on the organization itself and its volunteers. Thus, it parallels and complements nicely the earlier definition of evaluation as being targeted first toward the volunteer program’s targeted clientele, and second toward internal processes and operations. Subsequently, volunteer managers must constantly measure the ultimate outcomes of volunteer programs, or stated more formally, evaluate the volunteer program’s impacts. However, merely evaluating a volunteer program’s impacts does not in itself guarantee the program’s continued success and/or survival; the knowledge gained by evaluating a volunteer program’s impacts, however positive, is practically meaningless unless it is strategically communicated to key leaders and decision makers connected to the sponsoring organization.
Accountability
Accountability within volunteer programs involves the strategic communication of the most important impacts of a volunteer program, identified through an evaluation process, to targeted program stakeholders, both internal and external to the organization. Internal stakeholders would include paid staff, organizational administrators, board members, volunteers, and the clientele served; external stakeholders include funders and donors, professional peers, government agencies and other legitimizers, and the larger community in which the organization operates.
Boone (1985) was the first author to describe the critical role of accountability in educational programs and organizations, and the previous definition is based largely on that of Boone, Safrit, and Jones (2002). Unfortunately, volunteer managers are sometimes hesitant to share program impacts even when they have identified them through an effective evaluation; they often consider such strong accountability as being boastful or too aggressive. However, accountability is the third and final concept critically linking the previous concepts of evaluation and impact to a volunteer program’s or organization’s continued survival. Volunteer managers must accept the professional responsibility in our contemporary impact-focused society to
proactively plan for targeted accountability, identifying specific key stakeholders and deciding what specific program impacts each stakeholder type wants to know. This targeted approach to volunteer program accountability will be discussed in more detail later in this chapter.
Four Fundamental Questions in Any Volunteer Program Impact Evaluation
Evaluation is a relatively young concept within the educational world; Ralph Tyler (1949) is often credited with coining the actual term itself, evaluation, to refer to the alignment between measurement and testing with educational objectives. And there is no dearth in the literature of various approaches and models for program evaluation. Some models are more conceptual and focus on the various processes involved in evaluation (e.g., Fetterman, 1996; Kirkpatrick, 1959; Rossi & Freeman, 1993; Stufflebeam, 1987), while others are more pragmatic in their focus (e.g., Combs & Faletta, 2000; Holden & Zimmerman, 2009; Patton, 2008). However, for volunteer managers with myriad professional responsibilities in addition to but including volunteer program evaluation, I suggest the following four fundamental questions that should guide any planned evaluation of a volunteer-based program.
Question 1: Why Do I Need to Evaluate the Volunteer Program?
Not every volunteer program needs to be evaluated. This may at first appear to be a heretical statement coming from the author of a chapter about volunteer program evaluation, and theoretically it is. Pragmatically, however, it is not. Many volunteer programs are short term by design, or are planned to be implemented one time only. In contrast, some volunteer programs are inherent in the daily operations of a volunteer organization, or are so embedded within the organization’s mission that they are invisible to all but organizational staff and administrators. Within these contexts, a volunteer manager must decide whether the evaluation of such a program warrants the required expenditure of time and human and material resources. Furthermore, one cannot (notice that I did not say, may not) evaluate any volunteer program for which there are no measurable program objectives. This aspect of Question 1 brings us again to the previous discussion of volunteer program impacts: What is it that the volunteer program seeks to accomplish within its targeted clientele? What ultimate impact is the volunteers’ engagement designed to facilitate or accomplish?
Any and all volunteer program impact evaluations must be based on the measurable program objectives targeted to the program’s clientele (Safrit, 2010). Such measurable program objectives are much more detailed than the program’s mere goal, and define key aspects of the program’s design, operations, and ultimate outcomes. A measurable program objective must include each of the following five critical elements:
1. What is the specific group or who are the specific individuals that the volunteer program is targeted to serve (i.e., the program’s clientele)?
2. What specific program activities will be used to interact with the targeted clientele group (i.e., the intervention that involves volunteers)?
3. What specific change is the intervention designed to bring about within the targeted clientele group (i.e., program outcome or impact)?
4. What level of change or success does the program seek to achieve?
5. How will the intervention’s success be evaluated?
As an example, too often I encounter the following types of volunteer program objectives:
- “We will recruit at least 50 new teen volunteers to help with the new Prevent Youth Obesity Now Program.”
- “At least 100 individuals will participate in the volunteer-delivered Career Fundamentals Program.”
- “Organizational volunteers will contribute a minimum of 1,000 total volunteer hours mentoring adults who cannot read and/or write.”
Now consider the correctly written versions of these measurable program objectives, with their components identified:
- “As a result of the teen-volunteer-staffed Prevent Youth Obesity Now summer day camp, at least 50% of the participating 200 overweight youth will adopt and maintain at least one new proper nutrition practice, as reported by their parents in a six-month follow-up mailed questionnaire.” (Target audience: 200 overweight youth. Planned intervention: teen-volunteer-staffed summer day camp. Desired change among target audience: adoption of at least one new proper nutrition practice. Desired level of success: 50% of participating youth. How success will be evaluated: 6-month post-camp questionnaire mailed to participants’ parents.)
- “At least 50% of currently unemployed participants in the six-week Career Fundamentals Program taught by volunteers will be able to describe one new workplace skill they learned as a result of the program, as measured by a volunteer-delivered interview during the final Program session.” (Target audience: unemployed individuals. Planned intervention: volunteer-taught workshop sessions. Desired change among target audience: learning new workplace skills. Desired level of success: 50% of participants. How success will be evaluated: exit interview conducted by a volunteer.)
- “At least 30% of the adults participating in the six-week literacy volunteer mentoring program will improve their reading skills by ten percentile points, as measured by a standardized reading test administered at the first and final sessions.” (Target audience: illiterate adults. Planned intervention: volunteer mentoring program. Desired change among target audience: improved reading skills. Desired level of success: 30% of participants. How success will be evaluated: standardized reading tests.)
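The five critical elements above can be captured in a simple checklist structure. The following sketch is illustrative only (it is not from the chapter, and the class and field names are my own); it shows one way a volunteer manager might record an objective's elements and verify that none is missing before the program begins:

```python
from dataclasses import dataclass, fields

@dataclass
class MeasurableObjective:
    """One measurable program objective, with the five critical elements."""
    clientele: str          # 1. the specific group the program is targeted to serve
    intervention: str       # 2. the program activities involving volunteers
    desired_change: str     # 3. the outcome/impact sought within the clientele
    success_level: str      # 4. the level of change the program seeks
    evaluation_method: str  # 5. how the intervention's success will be evaluated

    def is_complete(self) -> bool:
        # An objective is measurable only if every element is actually stated.
        return all(getattr(self, f.name).strip() for f in fields(self))

# The literacy example from the text, broken into its five elements:
literacy = MeasurableObjective(
    clientele="adults in the six-week literacy mentoring program",
    intervention="volunteer mentoring sessions",
    desired_change="reading skills improved by ten percentile points",
    success_level="at least 30% of participants",
    evaluation_method="standardized reading test at first and final sessions",
)
print(literacy.is_complete())
```

A "goal-only" statement such as "we will recruit 50 teen volunteers" would leave several fields blank and fail the same check, which is the practical point of the five-element test.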
A final aspect of Question 1 involves the use of “logic” models in evaluating volunteer programs, so called because they seek to outline and follow the logical development and implementation of a program or intervention from its conception through to its targeted long-term impact. Logic models are not new to volunteer
programs (Honer, 1982; Safrit & Merrill, 1998, 2005) and apply four standard components to program development and impact evaluation (Bennett & Rockwell, 1994; Frechtling, 2007; W.K. Kellogg Foundation, 2000):

1. Inputs. Actual and in-kind resources and contributions devoted to the project.
2. Activities. All activities and events conducted or undertaken so as to achieve the program’s identified goal.
3. Outputs. Immediate, short-term services, events, and products that document the implementation of the project.
4. Outcomes. The desired long-term changes achieved as a result of the project.
Unfortunately, space does not allow for an in-depth discussion of the use of logic models in evaluating volunteer program impacts. However, Exhibit 16.1 illustrates the application of logic modeling in a volunteer-delivered program designed to decrease overweight and/or obesity among teens. Note the strong correlation between the program’s measurable program objectives and the Outcomes component for the volunteer program.
EXHIBIT 16.1 Sample Logic Model for a Volunteer Program Focused on Decreasing Teen Obesity

Inputs:
- $350 in nutrition curricula purchased
- $750 for use of the day camp facility (in-kind)
- 10 members of the program advisory committee
- 12 adult volunteers working with the program
- Program coordinator devoted 3 workweeks (120 hours) to planning and implementing the program

Activities:
- Three 2-hour meetings of the program advisory committee conducted
- Three 3-hour volunteer training sessions conducted

Outputs:
- At least 30 teens who are clinically obese will participate in the 3-day, 21-hour program
- At least 10 adult volunteers will serve during the actual day camp
- Program advisory committee members will volunteer to teach program topics to participants during the day camp

Outcomes:
- At least 80% of teen participants will increase their knowledge of proper nutrition and/or the importance of exercise along with diet, as evaluated using a pre/post-test survey
- At least 70% of teen participants will demonstrate new skills in preparing healthy snacks and meals, as evaluated by direct observation by program volunteers
- At least 50% of teen participants will aspire to eat more nutritious meals and to exercise daily, as indicated by a post-test survey

Source: © 2009 R. Dale Safrit. All Rights Reserved.
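The four logic-model components lend themselves to a plain data structure, so each column can be reviewed, counted, or reported on programmatically. This is a minimal illustrative sketch only; the dictionary keys and the `summarize` helper are my own, not part of the chapter, and the entries are abbreviated from Exhibit 16.1:

```python
# Exhibit 16.1's logic model as a simple dictionary: one key per component,
# listed in the standard input -> activity -> output -> outcome order.
logic_model = {
    "inputs": [
        "$350 in nutrition curricula",
        "$750 day camp facility (in-kind)",
        "10 program advisory committee members",
        "12 adult volunteers",
        "120 hours of program coordinator time",
    ],
    "activities": [
        "three 2-hour advisory committee meetings",
        "three 3-hour volunteer training sessions",
    ],
    "outputs": [
        "at least 30 clinically obese teens participate in the 21-hour program",
        "at least 10 adult volunteers serve during the day camp",
        "advisory committee members teach program topics at the day camp",
    ],
    "outcomes": [
        "80% of teens increase nutrition/exercise knowledge (pre/post survey)",
        "70% of teens demonstrate healthy food-preparation skills (observation)",
        "50% of teens aspire to eat better and exercise daily (post-test survey)",
    ],
}

def summarize(model):
    """Count the entries under each component, preserving the standard order."""
    order = ["inputs", "activities", "outputs", "outcomes"]
    return {component: len(model[component]) for component in order}

print(summarize(logic_model))
```

Keeping the model in this form makes the chapter's point about alignment easy to check: each outcome entry should trace back to one of the program's measurable objectives.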
Question 2: How Will I Collect the Required Impact Evaluation Data?
Once targeted impacts have been identified for a volunteer program, thus answering Question 1 as to why the program is to be evaluated, a volunteer manager must next decide on the actual methods to be used to collect the evaluation data. If measurable program objectives have been developed, then Question 2 is easily answered. However, oftentimes the evaluation component of a measurable program objective is the final one to be decided, simply because the other four components tend to naturally preempt it during the conceptual development of a volunteer program evaluation. Furthermore, data collection methods may largely be defined and/or constrained by the type of program intervention and/or the numbers and type of target audience (i.e., data collection methods will naturally differ between one-on-one and mass-audience-delivered volunteer programs, adult and youth audiences, etc.).
Basically, two types of data (and thus data collection methods) exist: qualitative and quantitative. Thomas (2003) provides a very fundamental description of both:
The simplest way to distinguish between qualitative and quantitative may be to say that qualitative methods involve a researcher describing kinds of characteristics of people and events without comparing events in terms of measurements or amounts. Quantitative methods, on the other hand, focus attention on measurements and amounts (more and less, larger and smaller, often and seldom, similar and different) of the characteristics displayed by the people and events that the researcher studies. (p. 1)
And both types of data are important in documenting the impact of volunteer programs. According to Safrit (2010):
Within non-academic contexts (including volunteer programs), quantitative methods are most commonly used in program evaluations. Quantitative methods allow the evaluator to describe and compare phenomena and observations in numeric terms. Their predominance may largely be due to the increasing demand for “number-based evidence” as accountability within nonprofit programs and organizations. However, qualitative methods may also be used very effectively in volunteer program impact evaluations. Qualitative methods focus upon using words to describe evaluation participants’ reactions, beliefs, attitudes, and feelings and are often used to put a “human touch” on impersonal number scores and statistics. (p. 333)
The discussion is not necessarily qualitative-versus-quantitative; rather, a volunteer manager needs once again to consider critical factors affecting the program’s impact evaluation, such as the purpose of the evaluation; possible time constraints; human and material (including financial) resources available; to whom the evaluation is targeted; etc.
There is a wide array of qualitative methods available for a volunteer manager to utilize in evaluating impacts of a volunteer program (Bamberger, Rugh, & Mabry, 2006; Dean, 1994; Krueger & Casey, 2000; Miles & Huberman, 1994; Thomas, 2003; Wells, Safrit, Schmiesing, & Villard, 2000), including (but not limited to) case studies,
ethnographies, content analysis, participant observation, and experienced narratives. Of these, however, Spaulding (2008) suggested that the case study approach, using participant interviews and focus groups to collect data, is by far the most common qualitative method used with volunteer programs. Again, space limitations do not allow for an in-depth discussion of these methods. (For a more in-depth discussion of using case studies with volunteer programs, see Safrit, 2010.) However, the author suggests that qualitative methods are most appropriate in evaluating volunteer programs that are targeted to a relatively small group of clientele, for whom a few focused practice or behavioral skills and/or changes are the desired program impact. Qualitative evaluation methods require considerably more time and human resources to conduct properly, and data should be collected by well-trained individuals who conduct individual interviews and/or focus groups. Qualitative methods are most effective when the desired targeted accountability is focused on personal/human-interest and affective/emotional impacts of the volunteer program.
However, when volunteer programs are designed to reach large numbers of targeted clientele and seek to impact their knowledge and/or attitudes, quantitative methods are probably more appropriate for the volunteer program impact evaluation. Unfortunately, in today’s society demanding increased accountability, volunteer organizations are called on all too often to reach ever-increasing numbers of targeted clients with stagnant or decreasing resources, and then to dollarize the program’s impacts on clients. Quantitative methods are also easier to analyze and summarize, and are best when it is important or necessary to translate measured program impacts into the dollar amounts required by funders and legitimizers.
Consequently, quantitative evaluation methods are overwhelmingly the most prevalent approach to collecting volunteer program impact data, and the most common quantitative method used is the survey design, using questionnaires to collect data. According to Safrit (2010):
Translated into volunteer program terms…conducting a survey to evaluate [volunteer] program impact involves: identifying the volunteer program of interest; identifying all program clientele who have participated in the program and selecting participants for the evaluation; developing a survey instrument (questionnaire) to collect data; collecting the data; and analyzing the data so as to reach conclusions about program impact. (pp. 336–337)
When using surveys to evaluate volunteer program impacts, there are important considerations to be made by the volunteer manager regarding participant selection, instrumentation, and data collection and analysis procedures (Dillman, Smyth, & Christian, 2008). Safrit (2010) provides an in-depth discussion of each consideration that space limitations prohibit in this chapter. However, the prevalence today of personal computers, data analysis software designed for non-statisticians, “survey-design-for-dummies” type texts, and very affordable do-it-yourself web-based questionnaire services all make it much easier for a volunteer manager with only a fundamental background in quantitative evaluation methods to plan, design, and conduct a valid and reliable survey-design quantitative evaluation of a volunteer program, using a face-to-face, mailed, e-mailed, or web-based questionnaire to collect impact data from targeted clientele.
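The final survey step, analyzing the data to reach conclusions about program impact, often reduces to checking whether the success level stated in a measurable objective was reached. The sketch below is hypothetical (the `objective_met` helper and the sample responses are my own, not from the chapter); it mirrors the earlier nutrition-practice objective, where at least 50% of respondents must report the desired change:

```python
def objective_met(responses, success_fraction):
    """responses: list of True/False, True meaning the respondent reports the
    desired change; success_fraction: the objective's target (e.g., 0.50)."""
    if not responses:
        return False  # no returned questionnaires, so no evidence of impact
    achieved = sum(responses) / len(responses)
    return achieved >= success_fraction

# E.g., 100 returned parent questionnaires from the six-month follow-up,
# 62 reporting that their youth adopted at least one new nutrition practice:
followup = [True] * 62 + [False] * 38
print(objective_met(followup, 0.50))
```

The same comparison works for any of the objectives above; only the question asked of respondents and the target fraction change. A real analysis would also report the response rate, since a low rate weakens any claim of impact.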