Find two instruments that assess constructs that you are considering for your dissertation. For example, if you are measuring anxiety and depression, what instruments would you select to assess these constructs in your population?
- For this week's assignment, you will be describing how researchers (or the developers of the instruments) have demonstrated the validity of the measures in the population you will be using.
NOTE: This week's assignment should only address the validity of the instruments, since their reliability was covered in the Week 5 assignment.
So we are now in week six. Obviously, we've been working on this new self-esteem instrument, so I'm going to talk a little bit more about that in terms of reliability and validity, the different steps we go through when developing an instrument. But before we go into that, I just wanted to talk about this week and next.

I rescheduled class today, and I wanted to remind you all that the requirement for this week in terms of the discussion is your initial post, due Wednesday, and then just two responses. Technically it is a holiday weekend this weekend at Keiser University. Looking at the calendar, Easter break is April 18th to April 21st, so it's a good thing that I rescheduled class next week. Class next week is going to be on Tuesday night at 6:30 p.m. instead, since Monday is technically Easter Monday. If you're trying to reach out for any services, they may be somewhat limited between Friday and Monday, because that is a Keiser University holiday. Of course, our classes still go on, but I do try to work around that and not hold a class on a holiday. So next week, our class will be Tuesday at 6:30. That is a really important one to come to if you can, because we go over how to do a factor analysis.

Hi, Natalie, no problem. Hi, Susanna and Catherine, I see you came too; I already said hi to Latasha. Thank you, everyone, for coming. I know it's not easiest when the time changes, but I just wanted to reiterate that this is a holiday weekend. You only have two responses, so if you want, you can do those in one day just before the holiday weekend. We do still have an assignment, because it's important to also understand validity and how to look for validity in an article, but you don't have to have a super long paper; I'll talk about that. Mainly, it's just that you're able to pull out and understand the instruments that you're considering. Are they reliable? That was the week five assignment. Are they valid? That's the week six assignment. In some ways those could really be one assignment, but that would (a) make it longer and (b) take the focus away from really breaking it down, focusing just on reliability versus focusing just on validity.

So before we start digging into reliability and validity, does anybody have any questions? You're a little bit in the background, but I didn't know if that was intentional or unintentional. Okay, if not, then I will start talking about what we have going on. As you know, we've been working on a new instrument, our self-esteem instrument, and we've done internal consistency with that: we calculated Cronbach's alpha, and overall you did a good job with that. We also addressed some types of validity when we were developing it, and next week we're going to do the factor analysis on that same instrument. So we're going through some of the different steps in terms of reliability and validity.

I'm going to talk about the discussion topic for this week before I start diving into reliability and validity. This week, you're going to be asked to describe the variety of ways that one can assess the validity of an instrument in terms of our self-esteem measure.
So if you were going to create a new measure of self-esteem, talk about the process you would use to validate this measure.

Now, reliability. As you know, we've really been dealing with reliability up to this point, so I'm going to go to this little overview slide here. Reliability measures the consistency of an instrument, so reliability focuses on consistency, and validity focuses on accuracy: is it measuring what we say it's measuring? Is it really measuring self-esteem, or whatever it is that you're focused on? Does the instrument measure what you intend to measure?

In terms of reliability, in week four we dealt with internal consistency, and we looked at how well the entire instrument hangs together. You don't want part of it measuring self-esteem and some of it measuring self-efficacy or self-confidence; we really wanted to know whether it all hangs together and measures self-esteem. And so we ended up coming up with our items and seeing which ones really hung together and were consistent. Same with your sympathetic magic scale; you were able to look at that and see what the internal consistency was. In terms of test-retest reliability, that means similar scores if the instrument is administered again within a couple of weeks, so it looks at stability over time: is it consistent in what it's measuring over time? We also talked about alternate forms reliability, which assesses consistency across versions, so different forms of an instrument, like different forms of the SAT or ACT, should produce similar scores. That was actually kind of interesting: my older child, when he was a junior in high school (he's a senior now), took the SAT several times, and his scores were almost always exactly the same. They say you can superscore it now, taking your highest verbal score and your highest math score, but the last time he took it he got his highest on both, so he didn't end up superscoring. It was interesting because his scores didn't vary much each time he took it, which goes to show the test had good alternate forms reliability. In terms of inter-rater reliability, that is, as we've discussed, when we have two or more different raters: do they agree or have similar scores in their ratings, whether those are clinical ratings or performance ratings, like the presence or absence of some behavior, such as smiling or paying attention to the TV? Do they agree on the different behaviors? So those are the basic types of reliability.
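To make these reliability statistics concrete, here is a minimal Python sketch (not part of the class materials; the item responses and retest totals below are invented) showing how Cronbach's alpha and a test-retest correlation could be computed for a small self-esteem scale.

```python
# Minimal illustration (made-up data): Cronbach's alpha for a hypothetical
# 4-item self-esteem scale, and a test-retest correlation for total scores
# collected about two weeks apart.
import numpy as np
from scipy.stats import pearsonr

# Rows = respondents, columns = items (e.g., 1-5 Likert responses)
items = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
])

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)
total_variance = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")

# Test-retest reliability: correlation between totals at time 1 and time 2
time1 = items.sum(axis=1)
time2 = np.array([16, 10, 18, 14, 7])  # hypothetical retest totals
r, p = pearsonr(time1, time2)
print(f"Test-retest r: {r:.2f}")
```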
So now we're going to move on to validity, which is really looking at whether the instrument measures what it purports to measure. If I say I'm going to measure anxiety, I shouldn't be measuring depression instead, or a personality disorder; I should truly be measuring anxiety. So what are some different types of validity? The first type is face validity: does it look like it measures what it should? If I'm an expert in anxiety, I could look at it and say, yes, that looks like it's assessing anxiety, or no, I think this is actually measuring something else. That's the simplest type of validity: if we look at it, does it appear to measure what we say it's measuring? We also looked at all of our individual self-esteem measures and tried to make assessments: does it look like it's measuring self-esteem or not? That was part of the role that subject matter experts play, and we played subject matter experts earlier in the course when we looked at all the different measures. But that's not always the strongest assessment.

Another part of validity that subject matter experts take a look at is content validity: does the instrument include all aspects of the construct, or does it just look at part of it? We don't want to assess just a snippet of self-esteem; we want to try to capture the entirety, or at least the bigger picture, and not just have one or two items that only tap certain aspects of self-esteem. The subject matter experts would take a look at that and help make sure we had a good, complete assessment of self-esteem. There aren't really any statistics that are going to guarantee that, which is part of the reason we use subject matter experts to help with this.

Then there is construct validity. Once we have our internal consistency reliability, which is what we calculated in week four, we also need to assess some other things. One thing we would want to do is demonstrate construct validity, that is, whether the instrument measures the intended theoretical construct. There are a few different ways we can do that. We can look at whether our instrument and another that's already been validated are correlated. For example, we could have participants complete our self-esteem instrument and another self-esteem instrument, like Rosenberg's, and see whether the scores are highly correlated. That would be one way of assessing whether our instrument is valid. But why did we create our own self-esteem instrument? Maybe we can do better than Rosenberg. It's widely used, but it has a lot of reverse-scored items, and we know from more recent research that reverse-scored items are detrimental to internal consistency. Still, a lot of research has been done with the Rosenberg scale, so we know it's a valid and reliable instrument, and we'd want ours to be highly correlated with it. We would want to know that, yes, ours actually is highly correlated with that.
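As a small illustration of this correlate-with-an-established-measure approach, here is a hedged Python sketch; the totals for the new measure and for the Rosenberg scale below are invented for demonstration.

```python
# Sketch (made-up data): one way to gather construct validity evidence is to
# correlate totals on the new self-esteem measure with totals on an already
# validated measure such as the Rosenberg Self-Esteem Scale.
import numpy as np
from scipy.stats import pearsonr

new_scale = np.array([32, 18, 40, 27, 12, 35, 22, 30])   # new measure totals
rosenberg = np.array([25, 14, 29, 22, 10, 27, 17, 24])   # Rosenberg totals

r, p = pearsonr(new_scale, rosenberg)
print(f"Correlation with Rosenberg: r = {r:.2f}, p = {p:.3f}")
# A strong positive correlation would support the claim that the new
# instrument is measuring the same underlying construct.
```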
We could also do a factor analysis, which is what we're going to do in week seven. Does anybody here know what a factor analysis is, or have any experience with factor analysis? Hi, JC. I was just asking whether anybody knows what a factor analysis is, or what a factor analysis does, because we are going to be doing one in week seven. So why would we do a factor analysis? If you know what one is, that might give you the answer of why we would do one.

Is it because we would look at the items in the scale and compare how the items load on the content of what we are supposed to measure? So that's what's used when actually revising the items, to make them more accurate, more reliable, and also valid. I don't know.

Well, it doesn't exactly tell us about reliability; this is a type of validity. It would be telling us whether there are different subscales or different factors. Maybe our self-esteem measure measures just one thing, and we add all the items together and it gives us a self-esteem score. But we don't know; maybe it measures two different types of self-esteem, maybe there are two parts that hang together, or there are particular subscales. For example, personality measures might cover the five personality factors, so they don't just give you one personality number; they give you a number for neuroticism, extroversion, and the other personality dimensions. So we're going to do a factor analysis, because that way we'll be able to see, with our self-esteem measure, whether we have just one factor or more than one, and we can use that as part of our evidence for construct validity: is it measuring the intended concept? Of course, it could be measuring more than one type of self-esteem, so we're going to see whether it's just one factor or multiple factors.
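Here is a minimal sketch of what an exploratory factor analysis like the one planned for week seven might look like in Python, using simulated item responses rather than the class's actual self-esteem data.

```python
# Sketch (simulated data, not the class data set): an exploratory factor
# analysis to see whether the items load on one factor or more than one,
# using scikit-learn's FactorAnalysis.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 200 respondents on 6 items: items 1-3 driven by one latent factor,
# items 4-6 by another, plus noise.
factor1 = rng.normal(size=(200, 1))
factor2 = rng.normal(size=(200, 1))
noise = rng.normal(scale=0.5, size=(200, 6))
items = np.hstack([factor1.repeat(3, axis=1), factor2.repeat(3, axis=1)]) + noise

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(items)

# Loadings: rows = items, columns = factors; high loadings split across the
# two factors would suggest the scale has more than one dimension.
loadings = fa.components_.T
print(np.round(loadings, 2))
```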
The next type of validity that's important to look at is criterion validity: does the instrument correlate with relevant outcomes? You would want to look at whether your instrument measures a construct that predicts some criterion, either concurrently or in the future. The SAT, for example, is meant to predict grades in college. Some colleges took away the SAT or ACT requirement, or made it test optional, and now some schools are going back to it, particularly ones that are very STEM focused, or at least for their STEM programs. I think MIT was one of the first; I know Purdue did, though I don't know if Cal Poly did. Those schools are very strong in areas like computer science and engineering. Some of the really hard-to-get-into colleges, particularly for STEM, may have found that students who went test optional for the SAT or ACT didn't do as well in those classes, and so it was a problem.

So in our case, what do you think we would want self-esteem to predict, or to be concurrently, positively related to? Oh, I can't hear you; it looks like you're trying to speak, JC. Self-efficacy, maybe, or something like that? That might be. That's almost like a different type of self-worth, and people with high self-esteem usually do have higher self-worth, and maybe lower levels of depression and anxiety. Or maybe we'd want to look at it in relation to some social outcomes, like better relationships. You can check it out; look at some of the publications and see what it correlates with.

But certainly for the instruments that you're considering for your dissertation, it would be important to see what's reported for those instruments in terms of these different types of validity. And like I said, sometimes it's predictive, like the SAT predicting college performance. I thought it was very interesting that the STEM programs at those top schools were the first ones to go back to requiring it, and it kind of makes sense. There are obviously other ways they could predict performance, like AP calculus scores, but it may be that they needed some kind of cutoff or criterion. Sometimes there are tests to get a job, little assessments people have to complete ahead of time, and perhaps those are used because they predict certain outcomes.

Another type of validity is convergent validity. Here, the score should be significantly correlated with the score on a different construct that you would expect to be related. For example, like you said, JC, higher self-esteem might relate to higher self-efficacy, because that's a different but related construct, or maybe something like higher confidence. Discriminant validity, on the other hand, involves things the instrument should not be correlated with: unrelated constructs that we don't expect to be related. For example, we wouldn't expect self-esteem to be related to height. There are a lot of different psychological variables, but there are certain things we would expect our measure not to be related to. So discriminant validity concerns variables that we would expect not to correlate with our measure, because they're unrelated constructs.
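As a quick illustration of convergent versus discriminant validity, the following sketch correlates invented self-esteem totals with a related construct (self-efficacy) and with an unrelated variable (height); all values are made up for demonstration.

```python
# Sketch (invented scores): convergent validity expects a meaningful positive
# correlation with a related construct (e.g., self-efficacy), while
# discriminant validity expects little or no correlation with an unrelated
# variable (e.g., height).
import numpy as np
from scipy.stats import pearsonr

self_esteem   = np.array([32, 18, 40, 27, 12, 35, 22, 30, 26, 38])
self_efficacy = np.array([30, 20, 37, 25, 15, 33, 21, 29, 24, 36])  # related construct
height_cm     = np.array([170, 182, 165, 175, 168, 180, 172, 178, 169, 174])  # unrelated variable

r_conv, _ = pearsonr(self_esteem, self_efficacy)
r_disc, _ = pearsonr(self_esteem, height_cm)
print(f"Convergent (self-efficacy): r = {r_conv:.2f}")   # expected: high
print(f"Discriminant (height):      r = {r_disc:.2f}")   # expected: near zero
```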
So, for your discussion, let me go back to my other slides. In your discussion, which types of validity are most important for us to demonstrate for our new self-esteem measure, and how would we go about doing that? Be sure to talk a little bit about the process, the different steps you would go through if you wanted to demonstrate the types of validity you think are most important when creating a new measure. Obviously, these are standardized quantitative measures, and we're just focusing on our self-esteem measure for this discussion topic. Okay, I couldn't tell, JC, it looked like you might be trying to talk; I don't know if that was to me.

All right, moving on to our assignment. You should hopefully know what to do with this, because you did something similar in week five: focus on two constructs that you're considering for your dissertation. For example, if you're measuring anxiety and depression, what instruments would you select to assess anxiety and depression? You can use the same constructs as in week five, but last week you focused on reliability, so this week you should focus just on the validity of the instruments, or measures, in the population you'll be using. What's the evidence for validity in those instruments, and which types of validity? Again, this doesn't need to be a super long paper; it should not be a 10-page paper. That would probably mean it's straying a bit from the topic. It really should just be focused on your constructs, what you're considering looking at and doing, and what the instruments report in terms of validity. I would expect it to be around two to three pages. There may be some variation there, but again, it should not be 10 pages; it should be a shorter paper, because it should be focused on validity.

All right, any questions or comments before we wrap up? Again, I just want to thank you for coming. I am going to stay here for a few minutes in case anybody has questions, but I want to reiterate, it is a holiday weekend at Keiser; it looks like Friday through Monday are a holiday on the Keiser calendar. So it's a good thing we don't have class on Monday, because technically that's Easter Monday. We rescheduled class next week to Tuesday at 6:30. That is a good one to attend, because we will go over how to do the factor analysis. I will demonstrate it with our self-esteem instrument, and then for your assignment you will be doing a factor analysis with the sympathetic magic scale data. So it's good for you to see how I do it with the self-esteem instrument, and then you'll be carrying it out on your own.

Hopefully everybody has a good holiday weekend. Again, there are only two responses required this week, and those can all be done in one day, because I'm not expecting everybody to be in the classroom Friday through Sunday, especially since it's a holiday weekend. You can even get all your discussion posts done by Thursday night if you're able to. We still do have an assignment, but because it's a shorter one, you can hopefully get it done in a timely manner and enjoy the holiday, whether you celebrate Passover or Easter, or just have a day to yourself. All right, thank you, everyone. Again, I'll stay here for a few minutes to see if anybody has questions. Have a great day.

You too. Bye-bye. Thank you. Thanks again. This time actually works well for me, so I'm glad I got to finally attend a live class. Oh, good, I'm glad. All right, thanks for coming. Have a great Easter. Thank you, Dr. Smith. Me too. All right, bye-bye.

Hi. Hello. Thank you. First, I just want to say sorry for sending you messages on Sunday. I was
Direction | Feedback
Research Interest | Topic is relevant and clearly stated (anxiety and resilience in college students).
Subheadings | There are no clear subheadings (e.g., “Anxiety,” “Resilience,” or “Instrument 1”), which affects clarity.
Construct Definitions | Constructs are mentioned but not clearly defined. For example, “resilience” isn’t described beyond what the CD-RISC measures.
Instrument Identified | Instruments (BAI and CD-RISC) are correctly identified and relevant.
Scale Used | For the BAI and CD-RISC, you reported the number of items but gave no scale description (e.g., whether it was a Likert scale).
Internal Consistency | The alpha for the CD-RISC is clear, but the BAI's alpha isn't mentioned, only test-retest r = .75 (which is good, but Cronbach's alpha should also be reported).
APA Style | Citations are present, but formatting is inconsistent (e.g., inconsistent parentheses, missing italics, and inconsistent author-year format).
Writing Clarity | Writing is mostly clear, but there is some awkward phrasing and a few slight grammar issues.
Dissertation Instrument Selection
Barbara Maclure
Keiser University Online
Psychometrics
Dr. Kelly Schmitt
04/13/2025
Dissertation Instrument Selection
Establishing proper validity and reliability is an essential requirement when developing a dissertation that focuses on psychological constructs, because it protects the integrity of the research findings. In my dissertation, I am considering assessing the constructs of anxiety and resilience in college students. Addressing and assessing these constructs requires standardized measures; in my case, the two instruments that will be used are the Beck Anxiety Inventory (BAI) and the Connor-Davidson Resilience Scale (CD-RISC). Much of the research conducted in the past has used these measures, and they have demonstrated strong measurement properties, which makes them compatible with my own research.
The Beck Anxiety Inventory (BAI) is a self-report instrument with twenty-one items that measures the intensity of anxiety symptoms. Research on this instrument has shown that the BAI performs consistently in both clinical and non-clinical samples. Numerous studies involving college students indicate that the BAI demonstrates strong reliability and validity, including a one-week test-retest correlation of r = .75 (Ismail et al., 2023). The tool works well for measuring anxiety over short timespans, making it well suited to cross-sectional or brief longitudinal designs. The Connor-Davidson Resilience Scale (CD-RISC), in its 25-item version, evaluates an individual's capacity for resilience, that is, the ability to manage stress and adversity. In college student samples, the CD-RISC has shown strong internal consistency and test-retest reliability, with coefficients exceeding .89 and .87, respectively (Rezaeipandari et al., 2022).
In considering the application of these selected tools in my dissertation research, it is important to acknowledge and address the issues that can arise from missing data. Self-report questionnaires in psychological research frequently produce missing responses across participants, and missing data have the potential to undermine the validity of findings. According to Howell (2013), identifying the missing data mechanism is crucial; the three categories are Missing Completely at Random (MCAR), Missing at Random (MAR), and Missing Not at Random (MNAR). Under MCAR, the probability of missingness is unrelated to either observed or unobserved information. Under MAR, missingness may be related to observed variables but not to the missing values themselves, while under MNAR, missingness is related to the unobserved data and may introduce systematic bias.
Properly identifying the nature of the missing data guides the choice of statistical techniques for handling it. For example, if the data are MAR, techniques such as multiple imputation or full information maximum likelihood (FIML) are preferable because they produce less biased parameter estimates. To prevent missing data in the first place, pilot testing of instruments, attention checks, and easy-to-use survey design are essential strategies.
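To illustrate how a modern imputation approach for MAR data might be carried out, the following minimal Python sketch uses scikit-learn's IterativeImputer on hypothetical scale totals; the variable names (bai_total, cdrisc_total, age) and values are illustrative only and are not drawn from my dissertation data.

```python
# Minimal sketch (hypothetical data): imputing missing scale scores under a
# MAR assumption with scikit-learn's IterativeImputer.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy data set with some missing responses
df = pd.DataFrame({
    "bai_total":    [12, np.nan, 25, 8, 30, np.nan],
    "cdrisc_total": [70, 55, np.nan, 80, 40, 65],
    "age":          [19, 20, 22, np.nan, 21, 23],
})

# Each variable with missing values is modeled from the others. This is a
# single imputation pass; true multiple imputation would repeat the process
# with different random seeds and pool the resulting estimates.
imputer = IterativeImputer(random_state=0, max_iter=10)
completed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(completed.round(1))
```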
References
Howell, D. C. (2013). Statistical methods for psychology (8th ed.). Wadsworth Cengage Learning.
Ismail, N. H., Nik Jaafar, N. R., Woon, L. S. C., Mohd Ali, M., Dahlan, R., & Baharuddin, A. N. A. P. (2023). Psychometric properties of the Malay-version Beck Anxiety Inventory among adolescent students in Malaysia. Frontiers in Psychiatry, 13, 989079. https://www.frontiersin.org/articles/10.3389/fpsyt.2022.989079/full
Rezaeipandari, H., Mohammadpoorasl, A., Morowatisharifabad, M. A., & Shaghaghi, A. (2022). Psychometric properties of the Persian version of abridged Connor-Davidson Resilience Scale 10 (CD-RISC-10) among older adults. BMC Psychiatry, 22(1), 493. https://link.springer.com/article/10.1186/s12888-022-04138-0
