Chapter 8: Quantitative Methods
- What elements of the checklist of questions for designing a survey method do you think are relevant? Justify your choice.
- When would a true experiment or quasi-experiment be more appropriate? Provide an example for each.
- Discuss the usefulness of pretesting, pilot testing, or field-testing a survey.
- How can response bias influence the outcomes of a study?
- Assume that a new intervention has been developed using a new approach to social skills training to help individuals with Asperger’s syndrome. How would you design an experiment to test the intervention? Consider the type of experiment, exposure, measurement, ordering, assignment to condition, and comparison groups.
Chapter 8
Quantitative Methods
Introduction
- Designing quantitative methods for a research proposal
- Survey and experimental designs
- Careful measurement, parsimonious variables, theory-guided
Defining Surveys and Experiments
- Survey design
Quantitative description of trends, attitudes, or opinions of a population
Testing association
Studying a sample of that population
Defining Surveys and Experiments
- Experimental design
Systematic manipulation of one or more variables to evaluate an outcome
Holds other variables constant to isolate effects
Generalize to a broader population
Components of a Survey Method Plan
- The survey design
- The population and sample
- Instrumentation
- Variables in the study
- Data analysis and interpretation

Components of a Survey Method Plan
Table 8.1 A Checklist of Questions for Designing a Survey Study Plan

Components of a Survey Method Plan
Steps to be Taken in Data Analysis

Components of a Survey Method Plan
The survey design:
- Provide a purpose for using survey research
- Indicate why the survey method is preferred
- Indicate the type of survey design
Cross-sectional (data collected at one point in time)
Longitudinal (data collected over time)
- Specify the form of data collection (telephone, mail, Internet, personal/group interviews) and rationale
Components of a Survey Method Plan
The population and sample:
- Identify the population including size and sampling frames
- Specify the sampling design
Single-stage
Multi-stage (clustering)
- Type of sampling
Probability
Nonprobability
Components of a Survey Method Plan
The population and sample:
- Indicate if the study involves stratification – ensuring specific population characteristics (e.g. gender) are represented
- Indicate number in the sample and procedure to determine
- Use a power analysis if you plan to detect significant associations
Components of a Survey Method Plan
The population and sample:
- Power analysis involves
Estimating the size of correlation
An alpha value (type I error rate)
A beta value (type II error rate)
Conducting the power analysis (e.g., G*Power)
- Conduct power analysis during planning
Components of a Survey Method Plan
Instrumentation:
- Name the survey instrument used to collect data
- Indicate how instrument was developed
- Describe the established validity scores from past use
Content validity
Predictive or concurrent validity
Construct validity
- Describe reliability of scores from past use
Internal consistency
Test-retest
Components of a Survey Method Plan
Instrumentation:
- When modifying or combining instruments, the original validity and reliability may not hold
- Include sample items from the instrument
- Indicate major content sections in the instrument
Cover letter
Items – demographics, attitude items, behavior items, factual items
Closing instructions
Type of scale for responses
Components of a Survey Method Plan
Instrumentation:
- Discuss pilot testing or field-testing
Rationale for plans
Content validity and reliability
Improve questions
- Steps for administering a mailed survey
Components of a Survey Method Plan
Table 8.2 Variables, Research Questions, and Items on a Survey
Components of a Survey Method Plan
Data analysis:
- Computer programs used for analysis
- Data analysis steps
Step 1. Number who did and did not respond
Step 2. Method to determine response bias
Step 3. Plan to provide descriptive analyses
Step 4. Calculate total scale scores
Step 5. Statistics and program for inferential statistical analyses
Step 6. Present results in figures or tables and interpret
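Step 2’s response-bias check is often done by wave analysis: compare early and late responders on key variables, treating late responders as a proxy for nonrespondents. A minimal, hypothetical sketch (the function name and data are illustrative):

```python
from statistics import mean

def wave_analysis(responses, cutoff_week):
    """responses: list of (weeks_until_returned, score) tuples.
    Compare mean scores of early vs. late responders; a large gap
    between the two suggests possible response bias."""
    early = [score for week, score in responses if week <= cutoff_week]
    late = [score for week, score in responses if week > cutoff_week]
    return mean(early), mean(late)

# Fabricated returns: (week the survey came back, burnout score)
returns = [(1, 20), (1, 22), (2, 21), (4, 27), (5, 29)]
early_mean, late_mean = wave_analysis(returns, cutoff_week=2)
print(early_mean, late_mean)  # 21 28
```

Here the gap between early (21) and late (28) responders would prompt a closer look at who is not responding.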
Interpreting results and writing a discussion section:
- Report how the results answered the research question or hypothesis
- Practical evidence in terms of effect size and confidence interval
- Discuss implications
- Consistent with, refute, or extend previous studies
Components of a Survey Method Plan
Table 8.3 Criteria for Choosing Select Statistical Tests
Components of an Experimental Study Method Plan
- Experimental method plan
Participants
Variables
Instrumentation and materials
Experimental procedures
Threats to validity
Data analysis
Interpreting results and writing a discussion
Components of an Experimental Study Method Plan
Participants:
- Describe procedures for recruiting participants
- Describe the selection of participants as either
Random
Nonrandom (convenience)
- True experiment – individuals randomly assigned to groups
- Quasi-experiment – partial or no control over random assignment
*
Components of an Experimental Study Method Plan
Participants:
- May measure secondary predictor variables
- Conduct and report power analysis
- End with formal experimental design statement
“The experiment consisted of a one-way two-groups design comparing burnout symptoms between full-time and part-time nurses”
Components of an Experimental Study Method Plan
Example 8.4 True Experimental Designs
Components of an Experimental Study Method Plan
Variables:
- Specify the variables and describe in detail
Identify the independent variables
Include a manipulation check measure
Identify dependent variable
Identify other variables measured
- Participant demographics
- Measure variables that contribute noise
- Potential confounding variables
Components of an Experimental Study Method Plan
Instrumentation and materials:
- Describe the instrument(s) participants complete in the experiment
Development, items, and scales
Reliability and validity reports of past uses
- Thoroughly discuss materials used for the treatment
- Cover story to explain procedures if deception is used
Components of an Experimental Study Method Plan
Experimental procedures:
- Identify the type of experiment
Pre-experimental, true experiment, quasi-experiment, single subject design
- Identify the type of comparisons – within-group or between-group
Provide a visual model to illustrate the research design used
- X = treatment
- O = observation
- R = random assignment
See Examples 8.2–8.5
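As one illustration of this notation (our own sketch, not reproduced from the examples cited above), the classic pretest-posttest control-group design, a true experiment, is diagrammed with one row per group, read left to right over time:

```
R  O  X  O    Experimental group: random assignment, pretest, treatment, posttest
R  O     O    Control group: random assignment, pretest, posttest only
```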
Components of an Experimental Study Method Plan
Example 8.2 Pre-experimental Designs

Components of an Experimental Study Method Plan
Example 8.3 Quasi-experimental Designs

Components of an Experimental Study Method Plan
Example 8.4 True Experimental Designs

Components of an Experimental Study Method Plan
Example 8.5 Single-Subject Designs
Components of an Experimental Study Method Plan
Threats to validity:
- Internal validity – procedures, treatments, or experiences of the participants that threaten inferences in experiments
- External validity – drawing incorrect inferences from sample data to other persons, settings, situations
Components of an Experimental Study Method Plan
Threats to validity:
- Statistical conclusion validity – inadequate statistical power or violation of statistical assumptions
- Construct validity – inadequate definitions and measures of variables
Components of an Experimental Study Method Plan
- Threats to internal validity
History
Maturation
Regression
Selection
Mortality
Diffusion of treatment
Compensatory/resentful demoralization
Compensatory rivalry
Testing
Instrumentation

Components of an Experimental Study Method Plan
- Threats to external validity
Interaction of selection and treatment
Interaction of setting and treatment
Interaction of history and treatment
Components of an Experimental Study Method Plan
The procedure:
- Administer measures of the dependent variable or a variable closely correlated
- Assign participants to matched pairs
- Randomly assign one member of each pair to the control and experimental group
- Expose experimental group to the treatment
- Administer measures of dependent variables
- Compare performance of the experimental and control groups
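The matched-pairs steps above can be sketched in code: rank participants on the pretest measure, pair adjacent scores, then randomly assign one member of each pair to each condition. The names and data are illustrative.

```python
import random

def matched_pairs_assignment(pretest_scores):
    """pretest_scores: dict of participant -> score on the matching variable.
    Returns (experimental, control) lists with one member of each
    matched pair randomly assigned to each group."""
    ranked = sorted(pretest_scores, key=pretest_scores.get)  # rank by pretest
    experimental, control = [], []
    for i in range(0, len(ranked) - 1, 2):   # pair adjacent ranks
        pair = [ranked[i], ranked[i + 1]]
        random.shuffle(pair)                 # random assignment within the pair
        experimental.append(pair[0])
        control.append(pair[1])
    return experimental, control

scores = {"p1": 12, "p2": 30, "p3": 14, "p4": 28, "p5": 19, "p6": 21}
exp_group, ctrl_group = matched_pairs_assignment(scores)
print(len(exp_group), len(ctrl_group))  # 3 3
```

Because pairing equates the groups on the pretest, any posttest difference is harder to attribute to preexisting differences.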
Components of an Experimental Study Method Plan
Data analysis:
- Report descriptive statistics (e.g., means, standard deviations, ranges)
- Indicate inferential statistical tests (e.g., t test, ANOVA, ANCOVA, or MANOVA)
- Report confidence intervals and effect sizes in addition to statistical tests
- Use line graphs for single subject designs
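To make the reporting guidance concrete, the sketch below computes an independent-samples t statistic and Cohen's d from scratch (a statistics package would supply the p value and confidence interval; the function name and data are our own illustration):

```python
import math
from statistics import mean, stdev

def t_and_cohens_d(group1, group2):
    """Independent-samples t statistic and Cohen's d (pooled SD)."""
    m1, m2 = mean(group1), mean(group2)
    v1, v2 = stdev(group1) ** 2, stdev(group2) ** 2   # sample variances
    n1, n2 = len(group1), len(group2)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    t = (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)      # test statistic
    d = (m1 - m2) / pooled_sd                         # effect size
    return t, d

treatment = [14, 16, 15, 18, 17]   # fabricated posttest burnout scores
control = [16, 18, 17, 20, 19]
t, d = t_and_cohens_d(treatment, control)
print(f"t = {t:.2f}, d = {d:.2f}")
```

Reporting d alongside t tells readers how large the difference is in standard-deviation units, not just whether it is statistically significant.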
Components of an Experimental Study Method Plan
Interpreting results and writing a discussion section:
- Interpret findings in light of hypotheses and research questions
- Whether supported or refuted
- Why results were or were not significant, in light of the literature
- Indicate implications
- Suggest future research
Summary
- The methodological approach to a survey or experiment
- Surveys – purpose, population and sample, instruments, relationship of variables to research questions and items, analysis
- Experiments – identify participants, variables, instruments, type of experiment, validity, analysis
CHAPTER 8 QUANTITATIVE METHODS
We turn now from the introduction, the purpose, and the questions and hypotheses to the method section of a proposal. This chapter presents essential steps in designing quantitative methods for a research proposal or study, with specific focus on survey and experimental designs. These designs reflect postpositivist philosophical assumptions, as discussed in Chapter 1. For example, determinism suggests that examining the relationships between and among variables is central to answering questions and hypotheses through surveys and experiments. In one case, a researcher might be interested in evaluating whether playing violent video games is associated with higher rates of playground aggression in kids, which is a correlational hypothesis that could be evaluated in a survey design. In another case, a researcher might be interested in evaluating whether violent video game playing causes aggressive behavior, which is a causal hypothesis that is best evaluated by a true experiment. In each case, these quantitative approaches focus on carefully measuring (or experimentally manipulating) a parsimonious set of variables to answer theory-guided research questions and hypotheses. In this chapter, the focus is on the essential components of a method section in proposals for a survey or experimental study.
DEFINING SURVEYS AND EXPERIMENTS
A survey design provides a quantitative description of trends, attitudes, and opinions of a population, or tests for associations among variables of a population, by studying a sample of that population. Survey designs help researchers answer three types of questions: (a) descriptive questions (e.g., What percentage of practicing nurses support the provision of hospital abortion services?); (b) questions about the relationships between variables (e.g., Is there a positive association between endorsement of hospital abortion services and support for implementing hospice care among nurses?); or in cases where a survey design is repeated over time in a longitudinal study; (c) questions about predictive relationships between variables over time (e.g., Does Time 1 endorsement of support for hospital abortion services predict greater Time 2 burnout in nurses?).
An experimental design systematically manipulates one or more variables in order to evaluate how this manipulation impacts an outcome (or outcomes) of interest. Importantly, an experiment isolates the effects of this manipulation by holding all other variables constant. When one group receives a treatment and the other group does not (which is a manipulated variable of interest), the experimenter can isolate whether the treatment and not other factors influence the outcome. For example, a sample of nurses could be randomly assigned to a 3-week expressive writing program (where they write about their deepest thoughts and feelings) or a matched 3-week control writing program (writing about the facts of their daily morning routine) to evaluate whether this expressive writing manipulation reduces job burnout in the months following the program (i.e., the writing condition is the manipulated variable of interest, and job burnout is the outcome of interest). Whether a quantitative study employs a survey or experimental design, both approaches share a common goal of helping the researcher make inferences about relationships among variables, and how the sample results may generalize to a broader population of interest (e.g., all nurses in the community).
COMPONENTS OF A SURVEY STUDY METHOD PLAN
The design of a survey method plan follows a standard format. Numerous examples of this format appear in scholarly journals, and these examples provide useful models. The following sections detail typical components. In preparing to design these components into a proposal, consider the questions on the checklist shown in Table 8.1 as a general guide.
Table 8.1 A Checklist of Questions for Designing a Survey Study Plan
__________
Is the purpose of a survey design stated?
__________
What type of design will be used and what are the reasons for choosing the design mentioned?
__________
Is the nature of the survey (cross-sectional vs. longitudinal) identified?
__________
Is the population and its size mentioned?
__________
Will the population be stratified? If so, how?
__________
How many people will be in the sample? On what basis was this size chosen?
__________
What will be the procedure for sampling these individuals (e.g., random, nonrandom)?
__________
What instrument will be used in the survey? For each instrument, who developed it, how many items does it contain, does it have acceptable reliability and validity, and what are the scale anchors?
__________
What procedure will be used to pilot or field-test the survey?
__________
What is the timeline for administering the survey?
__________
How will the measures be scored and converted into variables?
__________
How will the variables be used to test your research questions?
What specific steps will be taken in data analysis to do the following:
(a) _______
Analyze returns?
(b) _______
Check for response bias?
(c) _______
Conduct a descriptive analysis?
(d) _______
Combine items into scales?
(e) _______
Check for reliability of scales?
(f) _______
Run inferential statistics to answer the research questions or assess practical implications of the results?
__________
How will the results be interpreted?
The Survey Design
The first parts of the survey method plan section can introduce readers to the basic purpose and rationale for survey research. Begin the section by describing the rationale for the design. Specifically:
Identify the purpose of survey research. The primary purpose is to answer a question (or questions) about variables of interest to you. A sample purpose statement could read: “The primary purpose of this study is to empirically evaluate whether the number of overtime hours worked predicts subsequent burnout symptoms in a sample of emergency room nurses.”
Indicate why a survey method is the preferred type of approach for this study. In this rationale, it can be beneficial to acknowledge the advantages of survey designs, such as the economy of the design, rapid turnaround in data collection, and constraints that preclude you from pursuing other designs (e.g., “An experimental design was not adopted to look at the relationship between overtime hours worked and burnout symptoms because it would be prohibitively difficult, and potentially unethical, to randomly assign nurses to work different amounts of overtime hours.”).
Indicate whether the survey will be cross-sectional—with the data collected at one point in time—or whether it will be longitudinal—with data collected over time.
Specify the form of data collection. Fowler (2014) identified the following types: mail, telephone, the Internet, personal interviews, or group administration (see also Fink, 2016; Krueger & Casey, 2014). Using an Internet survey and administering it online has been discussed extensively in the literature (Nesbary, 2000; Sue & Ritter, 2012). Regardless of the form of data collection, provide a rationale for the procedure, using arguments based on its strengths and weaknesses, costs, data availability, and convenience.
The Population and Sample
In the method section, follow the type of design with characteristics of the population and the sampling procedure. Methodologists have written excellent discussions about the underlying logic of sampling theory (e.g., Babbie, 2015; Fowler, 2014). Here are essential aspects of the population and sample to describe in a research plan:
The population. Identify the population in the study. Also state the size of this population, if size can be determined, and the means of identifying individuals in the population. Questions of access arise here, and the researcher might refer to availability of sampling frames—mail or published lists—of potential respondents in the population.
Sampling design. Identify whether the sampling design for this population is single stage or multistage (called clustering). Cluster sampling is ideal when it is impossible or impractical to compile a list of the elements composing the population (Babbie, 2015). A single-stage sampling procedure is one in which the researcher has access to names in the population and can sample the people (or other elements) directly. In a multistage or clustering procedure, the researcher first identifies clusters (groups or organizations), obtains names of individuals within those clusters, and then samples within them.
Type of sampling. Identify and discuss the selection process for participants in your sample. Ideally you aim to draw a random sample, in which each individual in the population has an equal probability of being selected (a systematic or probabilistic sample). But in many cases it may be quite difficult (or impossible) to get a random sample of participants. Alternatively, a systematic sample can have precision equivalent to random sampling (Fowler, 2014). In this approach, you choose a random start on a list and select every X numbered person on the list. The X number is based on a fraction determined by the number of people on the list and the number that are to be selected on the list (e.g., 1 out of every 80th person). Finally, less desirable, but often used, is a nonprobability sample (or convenience sample), in which respondents are chosen based on their convenience and availability.
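The systematic-sampling procedure just described (random start, then every Xth person on the list) can be sketched in a few lines; the function name and population are illustrative.

```python
import random

def systematic_sample(frame, n):
    """Select n elements from a sampling frame: random start,
    then every k-th name on the list."""
    k = len(frame) // n                      # the sampling interval ("X")
    start = random.randrange(k)              # random start within first interval
    return frame[start::k][:n]               # every k-th person from the start

population = [f"person_{i}" for i in range(800)]
sample = systematic_sample(population, 10)   # every 80th person, random start
print(len(sample))  # 10
```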
Stratification. Identify whether the study will involve stratification of the population before selecting the sample. This requires that characteristics of the population members be known so that the population can be stratified first before selecting the sample (Fowler, 2014). Stratification means that specific characteristics of individuals (e.g., gender—females and males) are represented in the sample and the sample reflects the true proportion in the population of individuals with certain characteristics. When randomly selecting people from a population, these characteristics may or may not be present in the sample in the same proportions as in the population; stratification ensures their representation. Also identify the characteristics used in stratifying the population (e.g., gender, income levels, education). Within each stratum, identify whether the sample contains individuals with the characteristic in the same proportion as the characteristic appears in the entire population.
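Proportionate stratification as described above can be sketched as follows: sample the same fraction within each stratum, so the sample mirrors the population's proportions. The helper name and data are illustrative.

```python
import random
from collections import defaultdict

def stratified_sample(population, get_stratum, fraction):
    """Draw the same fraction from each stratum so the sample
    reflects the population's proportions on that characteristic."""
    strata = defaultdict(list)
    for person in population:
        strata[get_stratum(person)].append(person)   # group by stratum
    sample = []
    for members in strata.values():
        k = round(len(members) * fraction)            # proportionate allocation
        sample.extend(random.sample(members, k))      # random within stratum
    return sample

# 60 females and 40 males; a 10% sample should contain 6 and 4, respectively.
population = [("F", i) for i in range(60)] + [("M", i) for i in range(40)]
sample = stratified_sample(population, get_stratum=lambda p: p[0], fraction=0.10)
print(len(sample))  # 10
```

Without stratification, a simple random draw of 10 could easily contain, say, 8 females and 2 males by chance.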
Sample size determination. Indicate the number of people in the sample and the procedures used to compute this number. Sample size determination is at its core a tradeoff: A larger sample will provide more accuracy in the inferences made, but recruiting more participants is time consuming and costly. In survey research, investigators sometimes choose a sample size based on selecting a fraction of the population (say, 10%) or selecting a sample size that is typical based on past studies. These approaches are not optimal; instead sample size determination should be based on your analysis plans (Fowler, 2014).
Power analysis. If your analysis plan consists of detecting a significant association between variables of interest, a power analysis can help you estimate a target sample size. Many free online and commercially available power analysis calculators are available (e.g., G*Power; Faul, Erdfelder, Lang, & Buchner, 2007; Faul, Erdfelder, Buchner, & Lang 2009). The input values for a formal power analysis will depend on the questions you aim to address in your survey design study (for a helpful resource, see Kraemer & Blasey, 2016). As one example, if you aim to conduct a cross-sectional study measuring the correlation between the number of overtime hours worked and burnout symptoms in a sample of emergency room nurses, you can estimate the sample size required to determine whether your correlation significantly differs from zero (e.g., one possible hypothesis is that there will be a significant positive association between number of hours worked and emotional exhaustion burnout symptoms). This power analysis requires just three pieces of information:
An estimate of the size of correlation (r). A common approach for generating this estimate is to find similar studies that have reported the size of the correlation between hours worked and burnout symptoms. This simple task can often be difficult, either because there are no published studies looking at this association or because suitable published studies do not report a correlation coefficient. One tip: In cases where a published report measures variables of interest to you, one option is to contact the study authors asking them to kindly provide the correlation analysis result from their dataset, for your power analysis.
A two-tailed alpha value (α). This value is called the Type I error rate and refers to the risk we want to take in saying we have a real non-zero correlation when in fact this effect is not real (and determined by chance), that is, a false positive effect. A commonly accepted alpha value is .05, which refers to a 5% probability (5/100) that we are comfortable making a Type I error, such that 5% of the time we will say that there’s a significant (non-zero) relationship between number of hours worked and burnout symptoms when in fact this effect occurred by chance and is not real.
A beta value (β). This value is called the Type II error rate and refers to the risk we want to take in saying we do not have a significant effect when in fact there is a significant association, that is, a false negative effect. Researchers commonly try to balance the risks of making Type I versus Type II errors, with a commonly accepted beta value being .20. Power analysis calculators will commonly ask for estimated power, which refers to 1 − beta (1 − .20 = .80).
You can then plug these numbers into a power analysis calculator to determine the sample size needed. If you assume that the estimated association is r = .25, with a two-tailed alpha value of .05 and a beta value of .20, the power analysis calculation indicates that you need at least 123 participants in the study you aim to conduct.
To get some practice, try conducting this sample size determination power analysis. We used the G*Power software program (Faul et al., 2007; Faul et al., 2009), with the following input parameters:
Test family: Exact
Statistical test: Correlation: Bivariate normal model
Type of power analysis: A priori: Compute required sample size
Tails: Two
Correlation ρ H1: .25
α err prob: .05
Power (1 – β err prob): .8
Correlation ρ H0: 0
This power analysis for sample size determination should be done during study planning prior to enrolling any participants. Many scientific journals now require researchers to report a power analysis for sample size determination in the Method section.
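The G*Power calculation above can be approximated in a few lines of code. This sketch uses the Fisher r-to-z approximation rather than G*Power's exact bivariate-normal method, so it can differ from the exact answer (123) by a participant or so; the function name and defaults are our own.

```python
import math
from statistics import NormalDist

def required_n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate sample size needed to detect a correlation r != 0
    (two-tailed) via the Fisher r-to-z transformation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed critical value
    z_beta = NormalDist().inv_cdf(power)           # power = 1 - beta
    fisher_z = 0.5 * math.log((1 + r) / (1 - r))   # effect size on the z scale
    n = ((z_alpha + z_beta) / fisher_z) ** 2 + 3
    return math.ceil(n)

# r = .25, alpha = .05 (two-tailed), power = .80
print(required_n_for_correlation(0.25))  # 124 with this approximation; G*Power's exact method gives 123
```

Note how strongly the answer depends on the estimated effect size: doubling r to .50 cuts the required sample to roughly a quarter.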
Instrumentation
As part of rigorous data collection, the proposal developer also provides detailed information about the actual survey instruments to be used in the study. Consider the following:
Name the survey instruments used to collect data. Discuss whether you used an instrument designed for this research, a modified instrument, or an instrument developed by someone else. For example, if you aim to measure perceptions of stress over the last month, you could use the 10-item Perceived Stress Scale (PSS) (Cohen, Kamarck, & Mermelstein, 1983) as your stress perceptions instrument in your survey design. Many survey instruments, including the PSS, can be acquired and used for free as long as you cite the original source of the instrument. But in some cases, researchers have made the use of their instruments proprietary, requiring a fee for use. Instruments are increasingly being delivered through a multitude of online survey products now available (e.g., Qualtrics, Survey Monkey). Although these products can be costly, they also can be quite helpful for accelerating and improving the survey research process. For example, researchers can create their own surveys quickly using custom templates and post them on websites or e-mail them to participants to complete. These software programs facilitate data collection into organized spreadsheets for data analysis, reducing data entry errors and accelerating hypothesis testing.
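Converting raw instrument responses into a total scale score typically involves reverse-coding some items and summing. A minimal sketch for a 10-item, 0-4 scale like the PSS follows; the reverse-scored item positions used here are an assumption and should be verified against the instrument's published scoring instructions.

```python
def score_pss(responses, reverse_items=(4, 5, 7, 8)):
    """Total a 10-item scale scored 0-4, reverse-coding the positively
    worded items (1-based positions; verify against the PSS manual)."""
    total = 0
    for position, raw in enumerate(responses, start=1):
        total += (4 - raw) if position in reverse_items else raw
    return total

# All-zero responses: the four reverse-coded items contribute 4 points each.
print(score_pss([0] * 10))  # 16
```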
Validity of scores using the instrument. To use an existing instrument, describe the established validity of scores obtained from past use of the instrument. This means reporting efforts by authors to establish validity in quantitative research—whether you can draw meaningful and useful inferences from scores on the instruments. The three traditional forms of validity to look for are (a) content validity (Do the items measure the content they were intended to measure?), (b) predictive or concurrent validity (Do scores predict a criterion measure? Do results correlate with other results?), and (c) construct validity (Do items measure hypothetical constructs or concepts?). In more recent studies, construct validity has become the overriding objective in validity, and it has focused on whether the scores serve a useful purpose and have positive consequences when they are used in practice (Humbley & Zumbo, 1996). Establishing the validity of the scores in a survey helps researchers to identify whether an instrument might be a good one to use in survey research. This form of validity is different from identifying the threats to validity in experimental research, as discussed later in this chapter.
Reliability of scores on the instrument. Also mention whether scores resulting from past use of the instrument demonstrate acceptable reliability. Reliability in this context refers to the consistency or repeatability of an instrument. The most important form of reliability for multi-item instruments is the instrument’s internal consistency—which is the degree to which sets of items on an instrument behave in the same way. This is important because your instrument scale items should be assessing the same underlying construct, so these items should have suitable intercorrelations. A scale’s internal consistency is quantified by a Cronbach’s alpha (α) value that ranges between 0 and 1, with optimal values ranging between .7 and .9. For example, the 10-item PSS has excellent internal consistency across many published reports, with the original source publication reporting internal consistency values of α = .84–.86 in three studies (Cohen, Kamarck, & Mermelstein, 1983). It can also be helpful to evaluate a second form of instrument reliability, its test-retest reliability. This form of reliability concerns whether the scale is reasonably stable over time with repeated administrations. When you modify an instrument or combine instruments in a study, the original validity and reliability may not hold for the new instrument, and it becomes important to reestablish validity and reliability during data analysis.
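Internal consistency as described here can be computed directly from item-level data. The sketch below implements the standard Cronbach's alpha formula using sample variances; the helper name and data are illustrative.

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """item_scores: one inner list of item responses per participant.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(item_scores[0])                         # number of items
    items = list(zip(*item_scores))                 # one column per item
    item_var_sum = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Perfectly intercorrelated items yield alpha = 1.0.
print(cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]))  # ≈ 1.0
```

Values between .7 and .9, as noted above, indicate that the items cohere well without being redundant.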