DESIGN A SURVEY:
Researchers have long been interested in elements that comprise happiness. If you were to design a survey to determine levels of happiness, what would you include? That is, what elements do you believe combine together to equal happiness? These will become your building blocks (predictive variables) with happiness being the outcome (criterion variable).
500-700 words
APA (no cover, no abstract, ONLY references needed)
Counseling Research: Quantitative, Qualitative, and Mixed Methods
Second Edition
Chapter 7
Predictive Designs
Copyright © 2017, 2008 Pearson Education, Inc. All Rights Reserved
Learning Objectives (1 of 2)
7.1 Describe the nature of predictive designs.
7.2 Describe the relationship between correlation and prediction.
7.3 Identify the types of correlation coefficients and under what conditions each should be used.
7.4 Interpret the magnitude and frequency of a correlation coefficient and explain what they mean in terms of research design.
7.5 Define the terms predictor and criterion.
Learning Objectives (2 of 2)
7.6 Explain the purpose of multiple regression and when it should be used.
7.7 List the assumptions of a multiple regression analysis.
7.8 Explain the purpose of factor analysis and when it should be used.
7.9 List the various forms of factor analysis and explain how each affects the interpretability of your results.
7.10 Describe how predictive designs can be applied to counseling research.
Predictive Designs (1 of 2)
Predictive designs are a form of correlational research that use calculated information about the relationships between variables to forecast future outcomes.
Researchers estimate the likelihood of a particular outcome by using a certain set of variables.
Predictive Designs (2 of 2)
In addition to identifying variables that predict a given outcome, predictive studies may also be used to examine the validity of assessment instruments or treatment protocols, giving practitioners evidence that the instrument or technique being used is implemented accurately.
Variables
Criterion Variables: the outcome variable being studied.
Predictor Variables: used to estimate the criterion.
A typical predictive design includes a single criterion variable and any number of predictor variables.
Correlation and Prediction
The relationship between the predictor and criterion variables is often explained using correlation.
Correlation is a statistical technique used to determine the degree of relationship between two or more variables.
Correlations are based on covariance, or the degree to which two variables vary together.
Correlation
To calculate the correlation between variables, first collect data on each variable in its natural state.
Correlational design does not manipulate or control the variables.
Correlation Coefficient
The statistic produced by a correlational analysis is known as the correlation coefficient.
The correlation coefficient is denoted by r.
The correlation coefficient describes the relationship between two or more variables or sets of scores.
If changes in the value of one variable correspond to a systematic change in the value of the other, the variables have shared variance.
Direction of the Relationship
The direction of the relationship is indicated by the sign preceding the correlation coefficient value.
Positive correlation is denoted by “+” and negative correlation is denoted by “−”.
Positive Correlation
In a positive correlation, both variables tend to move in the same direction.
For example, as hours of study increase, exam scores tend to increase as well.
Negative Correlation
When variables trend in opposite directions, a negative correlation exists.
As one variable increases in value, the other decreases.
Degree of the Relationship (1 of 2)
The degree, or strength, of a relationship is determined by the numeric value of the correlation coefficient.
It provides a measure of the consistency and predictability found in the association between two scores (Gravetter & Wallnau, 2013).
Degree of the Relationship (2 of 2)
Values for the correlation coefficient range from 0 to 1 in both the positive and negative directions.
A value of −1.00 represents a perfect negative correlation while a value of +1.00 represents a perfect positive correlation.
The strength of the relationship is based on how close the correlation coefficient is to −1.00 or +1.00, regardless of sign.
Strength of Relationship
Correlation | Size of Association | Strength of Association |
.10–.29 | Small | Weak |
.30–.49 | Medium | Moderate |
.50–.69 | Large | Strong |
.70 and above | Very large | Very strong |
(Table: Rosenthal, 2001)
Because strength is related to how close the correlation approaches −1.00 or +1.00, a correlation coefficient of −.90 indicates a stronger relationship than one of +.85.
A value of 0.00 indicates no relationship.
Coefficient of Determination
The coefficient of determination (r²) is a measure of the amount of variance in one variable that can be predicted from the other variable.
It is computed by squaring the correlation coefficient.
The larger the coefficient of determination, the stronger the predictor is at estimating the criterion value.
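As a minimal sketch (the correlation values below are purely illustrative), squaring r gives the proportion of shared variance:

```python
def coefficient_of_determination(r):
    """Shared variance: square the correlation coefficient."""
    return r ** 2

# Hypothetical correlation coefficients of increasing strength
for r in (0.30, 0.50, 0.85):
    print(f"r = {r:.2f} -> r^2 = {coefficient_of_determination(r):.4f}")
```

Note that the sign drops out: a correlation of −.50 predicts the criterion exactly as well as one of +.50.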
Interpreting the Correlation Coefficient
Sometimes an apparent relationship exists, yet it may not be statistically significant.
The significance level of r provides a good measure of the consistency or reliability of the observed relationship.
The larger the sample, the more reliable the correlation coefficient produced.
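The significance of r is commonly assessed with a t statistic on n − 2 degrees of freedom; a minimal sketch with a hypothetical correlation of .40 observed at different sample sizes (the resulting t would be compared against a t table):

```python
import math

def t_for_r(r, n):
    """t statistic for testing H0: rho = 0, with df = n - 2."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# Same observed r, increasingly large (hypothetical) samples:
# the t statistic grows, so the same r becomes more reliable.
for n in (10, 30, 100):
    print(f"n = {n:3d}: t = {t_for_r(0.40, n):.3f} (df = {n - 2})")
```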
Spurious Correlations
A spurious correlation exists when an apparent relationship detected between two variables is really due to an unintended or confounding variable.
To reduce the occurrence of spurious correlation, it is recommended to use at least 100 participants when conducting correlational research.
Bivariate Predictive Models
The choice of design to use depends on the research questions the researcher is trying to answer and the type of data collected.
Of particular interest is whether the data is quantitative (interval or ratio) or categorical (nominal or ordinal).
Pearson Product Moment Correlation Coefficient
The most commonly produced correlation coefficient.
Often referred to as simply the Pearson r.
A Pearson r is computed when the data for both measured variables are quantitative in nature and a linear relationship exists between them.
Pearson Product Moment Coefficients
Solve the formula r = Σ(zX · zY) / N:
Convert each individual score on both variables (X and Y) to standardized z scores.
Multiply the z scores computed for X and Y for each participant.
Sum the products from step 2.
Divide the value you obtain in step 3 by N.
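The four steps above can be sketched in pure Python; the hours-studied and exam-score data are hypothetical:

```python
import math

def pearson_r(x, y):
    """Pearson r computed from z scores, following the four steps above."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Population standard deviations (divide by N, matching the z-score formula)
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / n)
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / n)
    zx = [(v - mx) / sx for v in x]             # step 1: standardize X
    zy = [(v - my) / sy for v in y]             # step 1: standardize Y
    products = [a * b for a, b in zip(zx, zy)]  # step 2: multiply z-score pairs
    return sum(products) / n                    # steps 3-4: sum, divide by N

hours = [1, 2, 3, 4, 5]
score = [52, 58, 63, 70, 74]
print(f"r = {pearson_r(hours, score):.3f}")
```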
Spearman Rho Correlation Coefficient
The Spearman Rho, a variant of the Pearson r, is applied when measuring the linear relationship between two sets of data, one of which is recorded at the ordinal level.
Ordinal data are rank ordered based on magnitude or frequency, and scores are assigned a ranking indicating their place in a distribution of scores.
The correlation is computed using the difference in ranks between measures for each participant rather than actual scores obtained.
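The difference-in-ranks computation can be sketched as follows, assuming no tied values and hypothetical rating/score data:

```python
def ranks(values):
    """Rank from 1 (smallest) upward; assumes no tied values for simplicity."""
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

def spearman_rho(x, y):
    """Spearman rho via the difference-in-ranks formula (no ties)."""
    n = len(x)
    d_sq = sum((rx - ry) ** 2 for rx, ry in zip(ranks(x), ranks(y)))
    return 1 - (6 * d_sq) / (n * (n ** 2 - 1))

# Hypothetical data: judges' ordinal ratings vs. continuous test scores
rating = [3, 1, 4, 2, 5]
score = [70, 55, 66, 80, 90]
print(f"rho = {spearman_rho(rating, score):.3f}")
```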
Point Biserial Correlation Coefficient
A variant of the Pearson r, the point biserial correlation (r_pb) is used when one set of data represents a continuous quantitative measure and the other a categorical or nominal measure.
The categorical data are for a dichotomous variable, one having only two mutually exclusive categories, such as male/female.
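One common way to compute r_pb is from the two group means, the overall standard deviation, and the group proportions; a sketch with hypothetical 0/1 group membership and continuous scores:

```python
import math

def point_biserial(group, scores):
    """r_pb: equivalent to Pearson r with the dichotomy dummy-coded 0/1."""
    n = len(scores)
    mean_all = sum(scores) / n
    sd = math.sqrt(sum((s - mean_all) ** 2 for s in scores) / n)  # population SD
    ones = [s for g, s in zip(group, scores) if g == 1]
    zeros = [s for g, s in zip(group, scores) if g == 0]
    p, q = len(ones) / n, len(zeros) / n      # proportions in each category
    m1, m0 = sum(ones) / len(ones), sum(zeros) / len(zeros)
    return (m1 - m0) / sd * math.sqrt(p * q)

# Hypothetical: 0/1 group membership vs. a continuous anxiety score
group = [0, 0, 0, 1, 1, 1]
scores = [10, 12, 11, 18, 20, 19]
r_pb = point_biserial(group, scores)
print(f"r_pb = {r_pb:.3f}")
```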
Phi Coefficient
The phi coefficient (φ) is computed when both sets of data are nominal, dichotomous measures.
To compute the phi coefficient, data from the two dichotomous variables are placed in a contingency table.
Contingency Table
Contingency tables are a visual aid for presenting the participant responses to one variable as a function of the other.
A positive association is noted when most of the data fall along the diagonal (cells A and D); a negative association is noted when most fall off the diagonal (cells B and C).
Blank | Variable X (−) | Variable X (+) | Total |
Variable Y (−) | A | B | A + B |
Variable Y (+) | C | D | C + D |
Total | A + C | B + D | A + B + C + D |
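From the cells above, phi can be computed as φ = (AD − BC) / √((A+B)(C+D)(A+C)(B+D)); a sketch with hypothetical cell counts:

```python
import math

def phi(a, b, c, d):
    """Phi coefficient from 2x2 contingency-table cells:
    a = (X-, Y-), b = (X+, Y-), c = (X-, Y+), d = (X+, Y+)."""
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

# Hypothetical counts: most cases on the A/D diagonal -> positive association
ph = phi(40, 10, 10, 40)
print(f"phi = {ph:.3f}")
```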
Multivariate Predictive Models
If two variables are highly related, scores on one can be used to predict scores on the other.
Prediction studies can measure the extent to which a criterion can be predicted and can test theoretical hypotheses about its predictors.
May also be used to examine the predictive validity of assessment instruments.
Data Collection (1 of 2)
Subjects are selected who are pertinent to the study and available to the researcher.
Instruments should be valid measures of the variable of interest.
Surveys, standardized tests, questionnaires, or observational methods can be used to measure the predictor variables and the criterion variables.
Data Collection (2 of 2)
The appraisal of the criterion variable must be valid.
The predictor variables must be measured before the criterion behavior pattern occurs in order to facilitate the claim that the measure predicted the pattern.
Data Analysis
The primary method of data analysis for a prediction study involves correlating each predictor variable with the criterion.
Because a grouping of variables usually results in a more accurate prediction than any one variable, studies often result in a predictive equation referred to as a multiple regression equation.
Multiple Regression (1 of 3)
A multiple regression equation uses all variables that independently predict the criterion to create a more accurate prediction.
Predicted scores are typically placed in a confidence interval.
Prediction equations may be formulated for each of a number of subgroups and a total group.
Shrinkage
Prediction studies can produce initial equations that may be the result of a chance relationship that will not be found again with another group of subjects.
Shrinkage is the tendency for predictive validity to decrease when the research study is repeated (Gall, Gall, & Borg, 2006).
Regression Analysis
The object of regression analysis is to help predict a single dependent variable from the collected data of one or more independent variables.
When a single independent variable predicts a single dependent variable the statistical technique is referred to as simple regression.
Multiple Regression (2 of 3)
Problems involving two or more independent variables predicting a single dependent variable are referred to as multiple regression analysis.
In a multiple regression equation, variables that are known to individually correlate with the criterion are used to make a more accurate prediction.
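A multiple regression equation can be estimated by solving the least-squares normal equations; a minimal pure-Python sketch using hypothetical noise-free data (so the known coefficients are recovered exactly):

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            m[r] = [mr - f * mc for mr, mc in zip(m[r], m[col])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def multiple_regression(x1, x2, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 via the normal equations."""
    n = len(y)
    s12 = sum(a * b for a, b in zip(x1, x2))
    xt_x = [[n, sum(x1), sum(x2)],
            [sum(x1), sum(a * a for a in x1), s12],
            [sum(x2), s12, sum(b * b for b in x2)]]
    xt_y = [sum(y),
            sum(a * v for a, v in zip(x1, y)),
            sum(b * v for b, v in zip(x2, y))]
    return solve(xt_x, xt_y)  # [b0, b1, b2]

# Hypothetical data generated from y = 1 + 2*x1 + 3*x2 (no noise)
x1 = [1, 2, 3, 4, 5, 6]
x2 = [2, 1, 4, 3, 6, 5]
y = [1 + 2 * a + 3 * b for a, b in zip(x1, x2)]
b0, b1, b2 = multiple_regression(x1, x2, y)
print(f"intercept = {b0:.2f}, b1 = {b1:.2f}, b2 = {b2:.2f}")
```

In practice these coefficients would come from statistical software, along with standard errors and significance tests.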
Multiple Regression (3 of 3)
Multiple regression is one of the more commonly used techniques in educational research.
It can be used with data representing any scale of measurement and to interpret the results of experimental, causal-comparative, and correlational studies.
It determines whether a relationship exists and the extent of that relationship, including its statistical significance.
Collinearity
Collinearity is the relationship, or correlation, between two independent variables.
Multicollinearity refers to the correlation between three or more independent variables.
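A quick diagnostic for collinearity is simply to correlate the independent variables with each other; a sketch with hypothetical, strongly related predictors (years of education and vocabulary score are invented for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson r from covariance divided by the product of the SDs."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

# Two hypothetical predictors that largely measure the same thing
education = [10, 12, 12, 14, 16, 16, 18, 20]
vocabulary = [48, 55, 57, 60, 66, 68, 73, 80]
r = pearson_r(education, vocabulary)
print(f"r between predictors = {r:.3f}")  # a high r flags collinearity
```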
Assumptions
Multiple regression shares all of the same assumptions of correlation:
Linearity of relationship
The same level of relationship throughout the range of the independent variable
Interval or near-interval data
Data whose range is not truncated
(Black, 1999; Heppner, Kivlighan, & Wampold, 1999; Hair, Anderson, Tatham, & Black, 1998)
Hypothesis Testing (1 of 2)
To test hypotheses statistically, the following assumptions are made:
Independence: the scores for any particular subjects are independent of the scores of all other subjects.
Normality: In the population, the scores on the dependent variable are normally distributed for each of the possible combinations of the levels of the X variables.
Hypothesis Testing (2 of 2)
Homoscedasticity: In the population, the variances of the dependent variable for each of the possible combinations of the levels of the X variables are equal.
Linearity: In the population, the relation between the dependent variable and an independent variable is linear when all other independent variables are held constant.
Terms (1 of 3)
Regression Coefficient: the numerical value of any parameter estimate that is directly associated with an IV.
Correlation Coefficient (R): indicates the strength of the association between the DV and the IVs.
Coefficient of Determination (R²): measures the proportion of the variation of the DV that is explained by the IVs.
Dummy Coding: recoding categorical variables into a number of separate dichotomous variables.
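Dummy coding can be sketched as recoding a k-level categorical variable into k − 1 dichotomous (0/1) columns; the treatment categories below are hypothetical:

```python
def dummy_code(values, reference):
    """Recode a categorical variable into k-1 dichotomous (0/1) columns,
    leaving out the reference category."""
    levels = [lvl for lvl in sorted(set(values)) if lvl != reference]
    return {lvl: [1 if v == lvl else 0 for v in values] for lvl in levels}

# Hypothetical three-level treatment variable with 'none' as the reference group
treatment = ["cbt", "none", "group", "cbt", "none"]
coded = dummy_code(treatment, reference="none")
print(coded)
```

Each resulting column can then enter a regression equation as an ordinary dichotomous predictor.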
Steps in a Multiple Regression Analysis
Four-step process:
Determine the regression model through identification of the regression coefficients.
Determine the multiple correlation coefficient (R) and the proportion of shared variance.
Determine the statistical significance of the multiple R.
Examine the significance of the predictor variables and test individual regression coefficients for statistical significance.
Types of Multiple Regression
Least-squares regression: the most common type of multiple regression.
Utilized when the measure of the criterion variable is a continuous scale, the measures of the predictor variables are continuous or categorical scales, and the relationship between the predictor variables and the criterion variable is linear.
Variations of Least-Squares Regression (1 of 2)
Step-up Multiple Regression: also called forward, the predictor that leads to the biggest increase in R is added to the existing group until the addition no longer leads to a statistically significant increase.
Step-down Multiple Regression: also called backward, all likely predictor variables are entered into the equation first; then the variable that results in the least decrease in R is systematically removed until a statistically significant decrease occurs.
Variations of Least-Squares Regression (2 of 2)
Stepwise Multiple Regression: combines the forward and backward approaches. Although popular, stepwise regression has been found to have significant problems in use, resulting in incorrect calculations of variance (Antonakis & Dietz, 2011; Thompson, 2013).
Additional Analysis
Discriminant analysis: This is utilized when the measure of the criterion variable is categorical and the predictor measures produce continuous scores.
Logistic Regression: Used when the predictor measures are continuous or categorical and the measure of the criterion variable is dichotomous.
Nonlinear Regression: Used if a hypothesis exists that suggests a curvilinear relationship between the predictor variables and the criterion variable.
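The logistic case can be illustrated with a minimal sketch: one continuous predictor, a dichotomous criterion, and a model fit by gradient descent on hypothetical data (statistical software would typically use maximum likelihood instead):

```python
import math

def logistic_fit(x, y, lr=0.1, steps=5000):
    """Tiny one-predictor logistic regression fit by gradient descent."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1 / (1 + math.exp(-(b0 + b1 * xi)))  # predicted probability
            g0 += p - yi
            g1 += (p - yi) * xi
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Hypothetical: hours of therapy (continuous predictor) vs.
# recovered (dichotomous criterion coded 0/1)
hours = [1, 2, 3, 4, 5, 6, 7, 8]
recovered = [0, 0, 0, 0, 1, 1, 1, 1]
b0, b1 = logistic_fit(hours, recovered)
p_at_7 = 1 / (1 + math.exp(-(b0 + b1 * 7)))
p_at_2 = 1 / (1 + math.exp(-(b0 + b1 * 2)))
print(f"P(recovered | 7 hrs) = {p_at_7:.2f}, P(recovered | 2 hrs) = {p_at_2:.2f}")
```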
Cautions in Using Multiple Regression
The existence of a predictive relationship is not equal to a causal relationship.
Sample size is critical when choosing the number of predictor variables to be included in the study.
Rule of thumb: use a minimum of 15 participants for each variable included in the regression analysis.
(Gall, Gall, & Borg, 2003)
Factorial Designs
A factorial experiment is a study determining the effect of two or more independent variables, both singly and in interaction with each other, on a dependent variable.
It involves two or more independent variables, at least one of which is manipulated by the researcher.
Such designs study patterns of relationships among DVs with the goal of discovering something about the IVs that affect them, without directly measuring the IVs.
Terms (2 of 3)
The effect of each of the IVs on the DV is called the main effect.
The interaction of the effect of two or more variables on the DV is called the interaction effect.
A fixed factor is an IV whose value will not be generalized beyond the experiment.
Variance is a measure of the extent to which scores in a distribution deviate from the mean.