The attached file is real data that is being used to create a measure of sympathetic magic. (Sympathetic magic is the irrational connection of items that are believed to lead to harm or pleasure when they are not actually harmful or beneficial. This construct can be broken into two laws: the law of contagion, in which objects that were once in contact influence each other through the transfer of properties; and the law of similarity, in which objects that resemble each other share fundamental properties.) The measure is attached as well.
For this assignment, conduct a reliability analysis of the internal consistency of items in SPSS (ANALYZE->SCALE->RELIABILITY ANALYSIS). You can use the default settings of Cronbach's Alpha.
That was probably a little easier than the prior weeks, because it's focused on missing data and less tied to your studies. But hopefully you're getting a better sense of how you feel about using established instruments versus creating your own. It did take me longer to do the week two grading because I'm trying to give people feedback, especially if I notice issues with their instruments or their concepts. So each week I'm trying to help you get a little closer to having something usable, at least for weeks five and six, where you are supposed to have clear constructs and instruments that are reliable and valid. I want to talk a little more about that before we get into the concepts for this week. When you have time, especially this week, because the assignment's pretty quick, make sure you're spending some time looking for instruments if you don't have them yet for your weeks five and six assignments. I'm not spending a lot of time giving you feedback on something like your sample, although I might mention something if I'm concerned, like: how are you going to have access to that sample?
Like as an example, I had a student who spent like a year trying to get to be able to do research with the military and the military has their own research teams so they just you know even though he had a lot of friends and had a lot of contacts it's just that's not going to go anywhere um i'm not i shouldn't say never say never but i just think that's probably not a realistic in a short time frame to try to be able to do um likewise i might make a comment if i'm not really clear are you these instruments that you would be doing with teachers or is it instruments you'd be doing with students or is it both you know because sometimes people would talk about both and you can potentially do both but again that might take longer what I'm mainly spending time on though but yes Frank that's right to IRB is the other thing I always say because I used to work with students on their master's thesis but also because besides dissertations, but now our master's students do a capstone project, so they don't have to go through the IRB and all of that. But sometimes people would want to do studies at their place of work. And sometimes companies or businesses are happy to give access and other times they're not. And so sometimes if you loosely talk to them about it, they might be like, oh, yeah, maybe but if you don't have it in writing and you don't get like that letter saying like giving irb pending irb approval like we give you permission to do it if they're not like super excited about it and you didn't talk to the top person like don't rely on being able to do it at your company or your homeless shelter or your foster care agency or whatever so you know i'm not spending a lot of time though i'm not focused on your sample i'm not even focused on your research design because what is this class about? Psychometrics, right? 
So if your study is something totally different where you're not going to use validated and reliable instruments, well, you're still going to have to do that in this class. But most of our students do studies that use instruments that are validated and reliable. There might be a situation where you're doing an experiment and creating your own instrument, or maybe some of your instruments are validated but others aren't. For example, I have a student doing a proposal defense at the end of this week, and she is doing her study on yoga: she was really interested in how yoga practice relates to mindfulness, stress, and obesity. She actually has a few validated instruments, for stress and mindfulness. But for obesity, we just calculate BMI from height and weight. And for yoga practice, there's not really a validated instrument, but you can ask people questions about how often they do yoga and things like that. So I'm not saying everything in your study has to be validated. But for this class, you are going to have to have instruments that are validated and reliable for your weeks five and six assignments. And hopefully you can use them, and hopefully that'll be useful to you. But it's not set in stone; you don't have to use them for your dissertation. It's just an exercise. Again, though, it can put you ahead of the game if you can find some instruments that really work for you and that you're excited about. Does anybody have questions about that before we move on to what we're doing this week? "This is unrelated, but I've been thinking about it so I can start asking for permission at work. What is the minimum participant requirement?" What do you mean, the number of people? "Participants, yeah. Does it depend on the measurement that I'm using, or is there a requirement?"
Yeah, well, it really depends on your power analysis. I'm sorry, because obviously there are a lot of you, and I don't remember what you're interested in exploring. But I will say that if you're not doing an experiment, if you're doing a survey, which a lot of our students do with validated instruments, a power analysis with, say, three variables where you're doing regressions would typically put you at about 76 participants. And then we say add an extra 20% in case there are issues with missing data and things like that. So I would aim for roughly 90 to 100. But again, you have to calculate your power analysis depending on your research design and your study. Now, I've had some students do experiments, and those can be smaller. I've also had a student, I wasn't her chair, but I was on her committee, who did a kind of cool before-and-after-training study in the workplace. I can't remember her sample size, but I think she had a decent sample. In some of these studies where you might be looking at people at more than one time point, you may not need quite as many people. So I know that's not a straight answer. "That makes sense. But if it's not quite as many people, how much would that be compared to 100?"
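The rule of thumb above, a base N from an a-priori power analysis padded by roughly 20% for missing data and attrition, can be sketched as a quick calculation. The numbers below are just the example figures from the discussion (76 participants for a regression with three predictors), not prescriptions; run your own power analysis for your design.

```python
import math

def recommended_sample(base_n: int, attrition_rate: float = 0.20) -> int:
    """Pad the N from an a-priori power analysis to cover expected
    missing data / dropout, rounding up to a whole participant."""
    return math.ceil(base_n * (1 + attrition_rate))

# Example from the discussion: ~76 from a power analysis, plus a 20% buffer.
print(recommended_sample(76))  # 92
```

If you expect heavier dropout (say, in a longitudinal design), just raise the buffer, e.g. `recommended_sample(76, 0.25)`.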
Would it be, like, 30 per group? "Oh, so if you're doing, like, an experiment where you're..." I don't know yet. I just have to present something at work so they'll let me; I'd just like an idea of what I would do. Right. Well, that's really an "it depends" kind of question. I mean, it could be 20 per group, it could be 30 per group; I really don't know unless we have more info. "Yeah, I have an idea now. Okay." Experiments are different, not totally different, but different in terms of the statistics that you're going to be using, and that's why the power analysis matters. And sometimes you can model that off of another study. I did research when I was a graduate student with kids, and especially if you have a lot of different conditions, you might just have 10 to 15 kids in a group. But they have to be pretty close in age, so that you could say, well, this is my group of two-year-olds, this is my group of three-year-olds, and so forth. So I would say maybe look at another study that is doing something similar, and that might give you an idea, too. "Yeah, that makes sense. Thank you." Sure. "If an instrument is being used with a certain population, does that exclude it from being used with another population?" No, that's a good question. Now, I did see some people sometimes say, well, maybe I would need to create a new instrument because it hasn't been done with this particular population. There are situations where that might be true, but it might just be the case that it hasn't been done with this population yet, and you could provide additional metrics showing that it can be used with this sample.
For example, some of the bigger instruments are really widely used. They'll report reliability and validity in all kinds of different countries, or with different ethnic groups or age groups. So if it's a really good instrument, it might work with a lot of different populations and samples. I would say that generally it's best if you can use something preexisting that's already out there; that's what I would hope most of you can come away with. Now, again, I've had students do experiments where they had to have questionnaires that were more specific to what they were studying. Even in one study I did, probably 10 years ago at this point, I used an existing instrument, the Peabody Picture Vocabulary Test, but I was looking at vocabulary learning. This was a study through a Ready To Learn grant; it was basically for PBS, but really the Department of Education. For these PBS shows, we wanted to see if kids were learning vocabulary words, and you want to know if they're learning those actual vocabulary words. So I used the proven technique of the Peabody Picture Vocabulary Test, but created items for those specific vocabulary words. There are situations where you can modify an existing measure, but again, if there's something good out there, why reinvent the wheel? "Okay, thank you." "Thank you very much." Okay. But yeah, again, you're not going to be committed to doing any of these, and if you have any questions you want to talk about, you can let me know. But typically in your dissertation, you would talk about which instruments you're going to use, and you might even talk about why you didn't use other instruments.
So, for example, if you have something like emotional intelligence, I would hope that you know the differences between the emotional intelligence instruments that are out there, or at least the really popular ones, and can justify why you used yours. Same with burnout, depression, self-esteem, or whatnot. Now, you don't need to go into that extensively for something like depression or stress, where there are probably so many measures. But you do want to pick a stress measure that's tailored to your population. If you're looking at academic stress in college students, maybe don't just use a general stress measure; use an academic stress measure. I had a student who was really interested in physical self-efficacy, among other things, and somebody on her committee really wanted her to use general self-efficacy. I actually think in her particular case, because she was relating it to things like how active the kids were (they were high school students), if she had used a physical self-efficacy measure, she might have gotten better results. You want it to be more specific if you're able to.

In that particular study, we found that the girls who had higher GPAs had higher self-efficacy, and the boys who had higher physical activity had higher self-efficacy, so there were still interesting gender differences; in that sense, using general self-efficacy was still interesting. But again, you also want to keep in mind what you're really trying to measure, like whether it's states or traits. That's why in week one we talked about constructs: what's our definition, what are we trying to look at? And in week two, we thought about whether we're going to develop our own instrument or whether there's something out there we could already be using, and the pros and cons of doing that. Obviously, at that point, I didn't expect you to have all your instruments and be really evaluating them. But in weeks five and six, you will be: do these instruments that you're looking at have reports of validity? Do they have reports of reliability? So that you can back it up and have confidence that this is a good measure to use. Okay. All right. So I'm going to share my screen if nobody has any other questions, and we can start talking about the topics for this week. This week our discussion focuses on methods of assessing reliability: two particular types, test-retest and alternate forms, and then a third type, assessing the reliability of raters. In the discussion, I just want you to talk about these different ways of assessing reliability; then we will be focused on internal consistency for the assignment. These are talked about in your book, but I'd like you to generally give a definition and an example for each.
In terms of the assignment, in the classroom there's an attached file of real data that's being used to create a measure of sympathetic magic, which is the irrational connection of items that are believed to lead to harm or pleasure when they are not actually harmful or beneficial. This construct can be broken into two laws: the law of contagion, objects that were once in contact influence each other through the transfer of properties, and the law of similarity, objects that resemble each other share fundamental properties. The measure is also attached. For that reliability analysis, you will be running an internal consistency test, and I'm going to go over how to do that in a minute. Let me share my screen. "Do we have to use SPSS, or can we use jamovi?" You can use jamovi. I was just trying to see if I could; sometimes jamovi is busy and won't let me use it or demo it. So you can use either jamovi or SPSS. I'm trying to share my screen now; it looks like it's saying "start session." Especially for those of you who have used jamovi before, you're welcome to use it. The commands are very similar to SPSS; you would just need to open up your data file. You can open the data file that's in the classroom in jamovi, and you should be able to do the analyses right from there. Have many of you used jamovi before? "I have not." "Never." All right, I'm going to go back to the classroom and finish talking about the assignment. Again, I'm going to demo this. There are two different folders to submit to: if you're in General Psychology, make sure you submit to the General Psychology folder; if you're in I/O Psychology, submit to that folder.
You will need to submit your data output file and have a short write-up, but your write-up can just be one page or less; you don't have to write a lot. This is a very simple analysis. "And sorry, do you still want it with the cover page and references and all that other stuff too?" No references are needed, and I think this week would be the one exception where you don't need a cover page. I only hesitate to say that because I don't want to send the message that you don't need a cover page; this is an exception because it's a very brief assignment and write-up, and I think I can easily grade this one within Blackboard. Part of the challenge is that I often download papers so that I can look at them offline, and if people don't have a cover page, it gets very confusing. Also, it's general protocol in the psychology department to have cover pages and to use APA formatting, so it's just good practice. This week I will not penalize you if you do not have a cover page, but all other weeks you should make sure to have one. "Okay. Thank you." Sure. Okay, so I am going to do the demo in SPSS, but I will also try to at least show you how to open it up in jamovi. I think I can share my screen to do this, but again, jamovi doesn't always work; sometimes it will say that it's busy because I don't have a paid version. If you have the paid version, it always works. A lot of people like the desktop version, which some people can download, but because this is a work computer, I don't have administrative privileges, and probably for that reason I can't get the desktop version to work. So I just said I wanted to open it from this device.
So I said I'm going to open the file off of this device, and you can see it just opens the data file right here. It's kind of nice: you can see the sympathetic magic file that you'll be working with for your assignment this week. The variables are gender, age, and then the different questions. This sympathetic magic scale is just a lot of weird stuff, maybe some icky stuff. I didn't pick it out, but it's kind of interesting, maybe something different. Okay, so I'm going to go back over to our data set. Last week we went through and picked out a number of items that we thought might be good for our self-esteem measure. In this new data file, I have participant ID number, age, ***, and then the different self-esteem items. In case we wanted to use them, I did throw in one or two extra items, just so that we could run our analyses and make sure we had enough items we were interested in. But I think what I'll do is initially run it with just the items we had, which was 13 items. What you do when you're running your analysis for internal consistency is go to Analyze, Scale, Reliability Analysis, and then you need to make sure the model is Alpha, because we are calculating Cronbach's alpha, our measure of internal consistency. What you need to do is put in all the items that you want to assess. I'm going to take this one out because I just ran it to check that the data file I created was working. So we're going to put in these items. The first item was "I feel confident in my ability to handle life's challenges," but I'm not going to read them all; I'm just going to put them in here. Four, five, six... all right, so I have my 13 items.
And again, I've checked that it has the default of Alpha; that's what you want. Then under Statistics, the one thing you do need to check is the descriptives for "Scale if item deleted." That's primarily what we're going to be looking at and using to determine whether we want to keep an item or remove it. Now, a good Cronbach's alpha is typically above 0.8 or 0.9. It really should be a number between 0 and 1, and the closer to 1 it is, just like a correlation, the stronger it is. So above 0.8 is good, and above 0.9 is excellent; that's what we're looking for. If you have a very low number, like 0.3 or 0.2, that means the items are not hanging together, which would be problematic. So I'm going to click Continue and OK. All right, it's saying here that we have 189 participants and our Cronbach's alpha is 0.622. A 0.622 would be considered, I'm trying to remember the label offhand, poor or questionable, but it's definitely not great, right? So we want to see if we can get higher than that. Let's look at each item. It says, for "I feel confident in my ability to handle challenges in life," if I look at the Cronbach's alpha if item deleted, taking that item out would drop my alpha down to 0.476. I don't want to do that, right? I don't want my alpha to get even closer to zero, to get even worse. So I'm definitely going to keep that item in; it's probably a pretty solid one if removing it would drop my alpha that much. If I check "I believe that I'm a person of worth, at least as much as anyone else," it doesn't really make much of a difference: 0.614 versus 0.622.
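For anyone curious what SPSS and jamovi are computing under the hood, Cronbach's alpha is easy to reproduce by hand. A minimal sketch in Python, assuming a complete respondents-by-items matrix (no missing values) and using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total score):

```python
import numpy as np

def cronbach_alpha(items) -> float:
    """Cronbach's alpha for a 2-D array: rows = respondents, cols = items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                            # number of items
    item_vars = items.var(axis=0, ddof=1)         # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Sanity check: three perfectly redundant items give alpha = 1.0
data = np.array([[1, 1, 1], [2, 2, 2], [4, 4, 4], [5, 5, 5]])
print(round(cronbach_alpha(data), 3))  # 1.0
```

On the real assignment data you would pass the columns for the scale items (not ID, age, or gender), just as you select only those items in the SPSS dialog.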
So I'm not going to worry about that; that one's probably okay. "I feel proud of my achievements" has a really small increase, like 0.04, if I were to take it out, so again, not much of a difference. "I accept myself as I am": if I take that out, it says 0.624. That's almost exactly the same, a difference of 0.002, so that one seems totally fine. If I took out "I feel that I have good qualities," it says Cronbach's alpha 0.476. Actually, that's probably a pretty good item, right? It's contributing something, because taking it out would make the alpha worse. "I believe that I'm a person of value" is a pretty small difference. "I'm deserving of respect" goes down a little bit. "I can learn and grow from my mistakes" goes down a little bit. "I take constructive criticism as a helpful tool": that one goes up quite a bit, to 0.752. So I'm going to write that down in my file here: if I were to take this one out, it's going to increase my alpha to 0.752. That's one I'm going to consider taking out. "I believe my opinions and ideas are valuable": the Cronbach's alpha is 0.46, so taking it out would make things worse; I don't want to do that. "I'm a good human" would go up to 0.754 if I took it out. So that's, again, a good increase, and I'm going to consider taking that one out. These next two don't look like big changes either: "I believe I can accomplish anything I set my mind to" and "I trust myself to make important life decisions." So the only two we have here that look like they're really making a big difference are "I take constructive criticism as a helpful tool" and "I'm a good human." Now, we also want to think about whether they add something unique. Maybe "I am a good human" is just similar to some of the other items we have.
So what we would want to do is take that one out, and then maybe take out "I take constructive criticism as a helpful tool" as well. But because "I take constructive criticism as a helpful tool" is a little different, I think we'll just do them one at a time. This is my first attempt at a change: I'm going to take out just "I'm a good human" and see how much it changes my Cronbach's alpha. So I'm going to go back to Analyze, Scale, Reliability Analysis, take out "I am a good human," and see what we get for our results. It does move up to what it told us it would, 0.754. But now it says that if I take out "I take constructive criticism as a helpful tool," my Cronbach's alpha will go above 0.8, which is what we want, right? So it looks like we definitely should take that one out as well. That's going to be my second revision: I'm going to remove that item and rerun it to see if it makes my instrument stronger. So now our Cronbach's alpha is 0.872. Remember, above 0.8 is good, so we could technically stop right here if there aren't any issues. But let's just take a look and see if removing any item would make a really big difference. The one I see that could potentially help is "I believe I can accomplish anything I set my mind to." Taking it out would bring the alpha up to 0.892, and then we would have just 10 items left. That's close to 0.9, but it's not a huge difference. What are you guys thinking about this? If we're at 0.872 and we have acceptable or good internal consistency, do you think it's good to just stop there?
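The "Cronbach's Alpha if Item Deleted" column that drives this whole item-dropping loop is nothing more than alpha recomputed k times, each time with one item removed. A self-contained sketch of that loop (the data array here is illustrative, not the class data set):

```python
import numpy as np

def cronbach_alpha(items) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return (k / (k - 1)) * (
        1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1)
    )

def alpha_if_deleted(items):
    """Recompute alpha once per item, each time with that column removed,
    mirroring the SPSS 'alpha if item deleted' column."""
    items = np.asarray(items, dtype=float)
    return [cronbach_alpha(np.delete(items, j, axis=1))
            for j in range(items.shape[1])]

# Three consistent items plus one unrelated item: deleting the unrelated
# fourth column yields the highest alpha, flagging it for removal.
data = np.array([[1, 1, 1, 5], [2, 2, 2, 1], [4, 4, 4, 4], [5, 5, 5, 2]])
print([round(a, 3) for a in alpha_if_deleted(data)])
```

An item whose deleted-alpha is clearly higher than the full-scale alpha (like the constructive-criticism item in the demo) is a removal candidate; small differences are usually left alone for content-validity reasons, as discussed above.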
Or do you think we should be pushing closer to 0.9 for our internal consistency? "I feel as if you could improve, you should always improve, right? But wouldn't it just be up to the restrictions of the research itself?" Yes. It depends on how critical that improvement is. Sometimes I'll get students' dissertations from being on their committees, and obviously they'll report the reliability from the published authors, but then they have to report it for their own study too; you'll have to do that with your sample. And sometimes I'll see really low reliability, and sometimes it might even just be an error, like they didn't reverse score the items. Obviously, if you use an instrument that has reverse scoring, you have to go through that process. Or sometimes it's just some item. If you need to get your reliability up, maybe you do need to take out some items, or not use some subscales, if it's really poor or problematic. But we don't always want to take an item out for just a small improvement, because we want to make sure we still have content validity, right? Is that item valuable for capturing our construct, in this case self-esteem? If we really said we want to get above 0.9 for excellent reliability, then we might want to make that change and pull that one out. But in this case, because it's so close and the gain is minor, I would probably be inclined to leave it in and stay at 11 items for now, at least because we haven't done the factor analysis yet, which I believe is in week seven. That will be another chance for us to see whether we would keep this item in or not. Does that make sense?
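The reverse-scoring step mentioned above has to happen before any reliability analysis. For a Likert item, the recode is simply (scale maximum + scale minimum) minus the response; a sketch with a hypothetical 5-point item:

```python
def reverse_score(response: int, low: int = 1, high: int = 5) -> int:
    """Flip a Likert response: on a 1-5 scale, 1 -> 5, 2 -> 4, 3 -> 3, etc."""
    return (high + low) - response

# A full 1-5 column reversed:
print([reverse_score(r) for r in [1, 2, 3, 4, 5]])  # [5, 4, 3, 2, 1]
```

Forgetting this step for negatively worded items (e.g., "I feel useless at times" on a self-esteem scale) is exactly the kind of error that produces the suspiciously low alphas described above.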
"Perfect. Yeah, thank you." Sure. Susanna, did you have a question or a comment? "Yes, I was thinking about the number of items that you need. Would it be better to have an even number versus an odd number? Mathematically it would make sense, but I don't know if it makes any difference psychometrically." You know, I don't know that there's a set answer on that. With something like Likert scales, which we were talking about with the self-esteem measure, it is considered good to have that middle point, an odd number of response options, so that people are able to give a middle answer if they feel that way. Whereas when I worked in market research, we sometimes did studies where we would only have, say, a four-point scale, because you wanted to push people to answer. If I'm doing a study for Nickelodeon and they're testing a show and want to know if something's working, or if they should put it on the air, sometimes with audience research you might want to force people: do they really like it or not like it? But in the social sciences and more academic research, we would tend to have that middle point for Likert items and an odd number of options. As for the number of items, I think this question came up last week: we don't really have a rule that it's better to have an odd or an even number, or a certain number of items. It really depends; it's a little less cut and dried. "Thank you. I'm just coming back from the reading about reliability that we had to do today, and I think Kline said that more items sometimes means more reliability, or he quoted somebody saying that. That was my question."
"Like, is there also a difference between even and odd, more versus fewer, and how much more, 15 to 20, or, you know, it goes on." Yeah, so I'm sure there are different viewpoints depending on the situation and the scenario. Like I said, there's a difference when I'm doing research for a client who's trying to decide how they want to use something, or whether they should spend all this money to put something on the air; there might be a different attitude there than when, as scientists, we're trying to be neutral and not take one particular side. That's partly why I've always loved statistics: there are some clear ways to do things, but there are still different points of view. It's part art, part science, so it's not necessarily just one answer. I did just pull it up to figure out how, and I have to admit I do not use jamovi very much myself, but just as we were talking, I looked at how to use jamovi for a reliability analysis. Another great thing about statistics is that once you learn one program, they're all fairly similar, unless you have to do programming. When I was in graduate school, some of the statistics classes I took, because I had a stats concentration, were a little more complicated and required programming. But if you know something like SPSS or jamovi, you can usually adapt the instructions to whatever else you're using. So I did just see that if you go into Factor up here and click on Reliability Analysis, you can then put in your items.
This is the assignment file, so I'm not going to do the whole thing, because obviously that's for your assignment, but let's just say
