MUST BE at least 250 words, with at least 3 scholarly citations in APA format. Any sources cited must have been published within the last five years. Acceptable sources include the textbook, the Bible, and scholarly peer-reviewed research articles. CHAPTER 3 AND CHAPTER 4 ARE ATTACHED.
After reading Chapter 4 of the Mosher textbook, what are some of the differences in using official crime data as discussed in Chapter 3 of the Mosher textbook and Self-Reported Data in Chapter 4? What are the particular strengths and weakness of each of these types of data sources?
The Mismeasure of Crime
Mosher, Clayton; Miethe, Terance D.; Hart, Timothy C.
CHAPTER 4
SELF-REPORT STUDIES
Respondents are a tricky bunch, and they do not always behave the way a researcher would wish or expect. In fact, surveys would be far more reliable without them.
—Coleman & Moynihan (1996, p. 77)
Self-report studies of crime were developed in the 1940s and 1950s, largely in response to concerns among criminologists that official measures of crime were systematically biased and provided a distorted picture of the nature and extent of crime and its correlates.
One of the primary advantages of self-report studies is that the information individuals provide regarding their behavior is not filtered through any official or judicial process. The criminal justice funnel, which illustrates how, at each stage of the system, fewer and fewer illegal behaviors are siphoned off for official crime counts, does not operate with respect to self-report data.
However, what individuals tell us about their behavior may or may not be a reliable and valid source for determining how involved they are in criminal activity. Memories of events—even of ones as dramatic as criminal episodes—may be fuzzy rather than clear, especially when it comes to recollecting the time period in which they occurred or the sequence of their occurrence. The questions that researchers ask may be phrased in ways that are different from the way people think of their behavior. For example, asking “In the past six months, have you abused or aggressed against a family member?” may elicit a different response than “In the past six months, have you slapped, hit, or punched anyone in your house?” Even with good nonjudgmental questions, however, respondents may be reluctant to answer fully and truthfully—at least partly because they are being asked to admit to behaviors that might result in their arrest if the actions became known to authorities.
One purpose of this chapter is to make you a savvy consumer, as well as evaluator, of self-report measures of crime. Because of its importance to both self-report and victimization data, we begin with a brief discussion of survey methodology. We then review the methodology and findings of some of the more prominent self-report studies, including the National Youth Survey (NYS), the National Longitudinal Study of Adolescent Health (Add Health), the Monitoring the Future (MTF) Survey, and the National Household Survey on Drug Abuse (NHSDA, now known as the National Survey on Drug Use and Health [NSDUH]). This is followed by a review of a prominent and enduring debate in the discipline of criminology regarding the connection between social class and crime, a debate that led to further refinements and improvements in self-report methodology. We then discuss self-report data from known offenders, which have provided particular insights into the crime patterns of individuals who have been apprehended by the criminal justice system. The chapter concludes with an examination of studies focusing on the reliability of self-reported data on drug use.
THE METHOD BEHIND THE MEASURE
Self-report measures of crime are subject to the same constraints found more generally in survey research. Criticisms regarding the adequacy or accuracy of self-report as well as victimization data have as much, if not more, to do with how the data are collected than with what those data might tell us about crime and criminals. In order to anticipate and understand these criticisms, we briefly address the sources of error in survey research before discussing specific self-report studies of crime.
Sources of Survey Error
At the core, evaluating any survey and the data derived from it revolves around two central issues (Phillips, Mosher, & Kabel, 2000):
Were the right people asked the right questions?
Did they answer truthfully?
In survey research terminology, these two issues expand into the four sources of total survey error: coverage error, sampling error, nonresponse error, and measurement error (see, e.g., Dillman, 2000; Groves, 1989, 1996; Junger-Tas & Marshall, 1999; Salant & Dillman, 1994).
Coverage error means that researchers selected individuals from a list— a sampling frame—that did not include all the people they intended to study: the target population. To illustrate this principle, consider the following example. A researcher is interested in determining how welfare recipients feel about their encounters with social service and criminal justice agencies. They have access to a current list of welfare recipients in their state, from which they will select a sample. In this example, there are already two limits on coverage: only people who (1) receive welfare as of a certain date and (2) live in the state can be included in the survey and described by its results.
There is an additional limitation imposed by the survey mode used to obtain information from the sample members. A telephone survey may be expedient, but a high percentage of welfare recipients may not have telephone service. A mail survey would likely include most, if not all, welfare recipients, but literacy may be a problem in enough cases to contribute to two other sources of error: nonresponse and measurement (which will be discussed later). A face-to-face survey could address the deficiencies of either of the other two modes but at great cost in terms of time and money. As this example illustrates, coverage error can be relatively easy to identify but not easy to correct.
Sampling error is an automatic, unavoidable result of surveying a subset, rather than taking a census, of all the people in the target population. This is the source of survey error that is referred to when journalists report that a political exit poll or public opinion survey has a “margin of error plus or minus five points.” It means that it can be estimated, by using a well-established statistical formula, how closely the survey sample mirrors the target population. Although it is rarely reported by journalists, sampling error is estimated within a specified confidence level that indicates how sure we are about the estimate. For example, if a survey’s sampling error is estimated as +/- 5 percentage points at the 95% confidence level, we can be confident that 95 times out of 100, the percentage of sample members who gave a certain response will be within 5 percentage points either way of the true percentage in the target population who would give that response if asked. Unfortunately, confidence levels apply only to predictions in the long run (referred to as an infinite number of trials); any particular sampling outcome may fall within or outside of the specific range of the 95% confidence level. Although sampling error cannot be completely eliminated from surveys, it can be reduced by increasing the sample size and obtaining responses from a larger proportion of people in the target population.
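To make the arithmetic behind a margin of error concrete, the following is a minimal sketch in Python of the standard calculation for a sample proportion (the chapter itself presents no formulas, and the sample size here is invented for illustration):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a confidence interval for a sample proportion.

    p: observed sample proportion (0.5 is the most conservative choice)
    n: number of respondents
    z: critical value; 1.96 corresponds to the 95% confidence level
    """
    return z * math.sqrt(p * (1 - p) / n)

# A sample of roughly 385 respondents yields the familiar "plus or minus
# 5 points" at the 95% confidence level, using the conservative p = 0.5:
print(round(margin_of_error(0.5, 385) * 100, 1))  # ~5.0 percentage points
```

Because the margin shrinks with the square root of the sample size, quadrupling the sample only halves the sampling error, which is one reason reducing it becomes expensive quickly.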
Nonresponse error affects survey data when both of the following are apparent: (a) too many people in the sample did not respond to the survey, either because they could not be contacted via the survey mode or they refused to participate; and (b) the nonrespondents differ from respondents in ways that are important to the objectives of the survey. Why both conditions must hold is easily illustrated. Consider a survey in which researchers complete interviews with 70% of their sample, a quite respectable response rate for social surveys. Most of the respondents have brown eyes, whereas most of the nonrespondents have hazel, green, or blue eyes. Is this survey plagued by nonresponse error? The answer to this question depends on the research questions. If the researcher is interested in the relative sun-sensitivity reported by people with different eye colors or their preferences for using contact lenses of different hues, then nonresponse likely is a problem. Even though there is a high response rate to the survey, respondents differ from nonrespondents on a variable of potential interest—eye color. If, on the other hand, the researcher is interested in attitudes toward capital punishment, nonresponse on the basis of eye color would not be a source of error because eye color is not germane to this issue.
To return to the earlier example of surveying welfare recipients regarding their encounters with social service and criminal justice agencies, assume that the researcher obtains an 80% response rate. However, more than 60% of the respondents are female, and more than 70% of the nonrespondents are male. Nonresponse error constitutes a serious problem in this case because gender is a factor not only in the number but also in the character of contacts with social service and criminal justice agencies. It is possible for a survey with a 99% response rate to be subject to nonresponse error if the 1% who did not respond differ in significant, substantive ways from the respondents. Likewise, a survey with a response rate of only 40% may be immune to nonresponse error if the nonrespondents are similar to respondents in ways that might make a difference in analyzing the data from the survey.
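One standard correction for this kind of nonresponse bias, not developed in the chapter, is post-stratification weighting, in which each respondent is weighted by the ratio of their group's share of the target population to its share of the realized sample. A minimal sketch, using invented numbers loosely based on the welfare-survey example:

```python
# Hypothetical illustration: women respond at a higher rate than men, so the
# raw sample over-represents women. Post-stratification weights (population
# share / sample share) pull the sample back toward the population profile.

population_share = {"female": 0.50, "male": 0.50}  # assumed target population
sample_share = {"female": 0.62, "male": 0.38}      # shares among respondents

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # women weighted down (~0.81), men weighted up (~1.32)
```

Weighting can only repair differences on variables the researcher has measured; if nonrespondents differ on something unobserved, the bias remains.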
As mentioned in Chapter 1, the process of operationalization involves attaching meaning to abstract concepts and developing specific indicators and measures of those concepts. How researchers decide to measure these concepts, the nature and number of indicators that are used to identify them, and the specific wording used to define them are all sources of measurement error. In evaluating measures of any phenomenon, social scientists are concerned with issues of validity and reliability.
Validity and Reliability
Validity is the degree to which a measure captures what it is intended to measure: If a measure is valid, it is true and accurate. Some measures have prima facie validity (i.e., clear or self-evident; often called face validity). Other measures possess validity only for specific cases and within strictly defined boundaries. Consider this example. Which of the following is more valid as a measure of the physical stature of human beings: (a) height and weight as recorded by physicians at routine physical exams or by coroners at an autopsy, (b) sizes of clothing most frequently purchased from the inventories of top- or bottom-tier manufacturers and department stores, (c) dimensions of seating and lavatory areas in commercial airplanes, or (d) observations of, and conversations with, people at public events or on the streets at rush hour? The first option—height and weight as recorded by a medical practitioner—does seem to have face validity for measuring physical stature, but the other three options have fairly obvious limitations when it comes to measuring what is intended. However, people who have physical exams or those whose deaths require an autopsy may not be representative of all human beings. Thus, even the validity of what appears to be the most accurate measurement can be compromised by an inadequate or biased sampling frame. We will return to these threats to validity after defining the second criterion for evaluating any measurement.
Reliability is the extent to which the same results are obtained each time a measure is used. If something is a reliable measurement, then it is a precise, consistent, and dependable one. A bathroom scale that showed an individual having three different weights on three different occasions over a 10-minute period would not be reliable. In the case of self-report studies of criminal and deviant behavior, reliability refers to the ability of the procedure used and questions asked to generate consistent responses from the same respondents on repeated administrations. For example, if individuals are asked whether they have ever stolen something and they answer yes, then they should answer yes the next time they are asked the same question. Just as all squares are rectangles but not all rectangles are squares, all valid measures are reliable, but not all reliable measures are valid.
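A simple way to quantify test-retest reliability for a yes/no item such as the theft question is the proportion of respondents who answer identically on two administrations. A minimal sketch with hypothetical responses:

```python
# Test-retest agreement for a binary self-report item ("Have you ever stolen
# something?") across two administrations; 1 = yes, 0 = no. Data are invented.

wave1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
wave2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

agreement = sum(a == b for a, b in zip(wave1, wave2)) / len(wave1)
print(f"test-retest agreement: {agreement:.0%}")  # 80%
```

In practice, researchers often prefer statistics such as Cohen's kappa, which discount the agreement expected by chance alone.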
In survey research, threats to reliability and validity (i.e., measurement error) derive from any of four aspects of the study (see, e.g., Aquilino, 1994; Aquilino & Wright, 1996; Dillman & Tarnai, 1991; Dykema & Schaeffer, 2000). The survey mode, whether it is telephone, mail, face-to-face, or Internet or web based, may result in different answers to the same question, even when posed to the same types of respondents. For example, studies have found that respondents are more likely to report drug use on self-administered answer sheets than in face-to-face interviews (Harrison, 1997; also see discussions later in this chapter). The survey instrument may include questions with categories that are not mutually exclusive or with terms that are not interpreted the same way by different respondents. The survey interviewer may unintentionally prompt a particular response by either attempting to clarify the meaning of a question (resulting in leading the respondent) or by giving the impression that a particular response is correct or expected (resulting in a socially desirable answer from the respondent). Finally, the survey respondent may misunderstand the question, may feel that the question is too nosy and prying, or may just plain lie. All of these conditions will result in mistakes in measurement.
To restate the sources of survey error in the context of the two key questions regarding research (i.e., Were the right people asked the right questions? Did they answer truthfully?): if the coverage of the target population is inadequate, if the sampling strategy is inappropriate, or if nonresponse compromises either, then the right people have not been asked the right questions. If the measurement strategy elicits responses that are imprecise, potentially inaccurate, or not comparable to others, then the data do not allow us to determine whether respondents are telling the truth.
SELF-REPORTS ON CRIME AND DELINQUENCY
Chapter 2 documented that most of the early self-report measures of crime and its correlates were intended to discover, document, and describe the true dimensions—or dark figure—of crime. Some researchers believed that there was a great deal of illegal behavior that was not captured by official statistics. Rather than taking the official statistics at face value, they attempted to learn about criminal activities directly from the individuals who were engaging in them, whether or not those activities were detected by law enforcement.
The work of James Short and Ivan Nye, briefly discussed in Chapter 2, serves as an instructive example of both the strengths and weaknesses of self-report data on illegal activities (see Nye & Short, 1957; Short, 1955, 1957; Short & Nye, 1957–1958, 1958). Non-institutionalized adolescents were the target population for these researchers, and “because they seem likely to be more representative of the general population than are college or training school populations,” Short and Nye (1958, p. 297) drew their samples from public high schools, administering an anonymous questionnaire to these students. Exhibit 4.1 lists the items included in Short and Nye’s questionnaire that were designed to measure the youths’ involvement in delinquent and criminal activities. From responses to the questionnaire, Short and Nye (1958) drew the following conclusions, among others: (a) delinquent conduct in the non-institutionalized population is extensive and variable; (b) self-reported delinquent conduct is similar to official delinquency and crime in that boys admit committing nearly all delinquencies more often than do girls, and the offenses for which boys and girls are most often arrested are the ones they admit to committing most often; and (c) self-reported delinquent conduct differs from official statistics in that delinquency is distributed more evenly throughout the socioeconomic classes of non-institutionalized populations, whereas official cases are concentrated in the lower economic strata.
There are, however, a number of questions that can be raised regarding Short and Nye’s work. First, are students enrolled in high school likely to be representative of the general youth population? What about dropouts and other young people who might have been absent for one reason or another on the day(s) the questionnaire was administered? It is likely that such individuals are more prone to be involved in criminal and delinquent behavior. Second, many of the behaviors listed in the questionnaire are not described in legalistic, criminal terms. One of the many challenges associated with obtaining valid and reliable self-reports and comparing these to official data is translating the reported behaviors into categories consistent with those in sources such as the UCR. Third, and in a similar vein, many of the items included on the Short and Nye (1958) questionnaire are oriented toward the less serious end of the crime scale. The fact that many self-report instruments focus on relatively trivial behaviors, such as skipping school and defying parents’ authority, has become an enduring criticism of self-report studies.
Despite these shortcomings, Short and Nye’s work was important in the sense that it revealed that a considerable amount of crime and delinquency was not officially recorded. And much of this hidden delinquency was apparently committed by young people from relatively privileged backgrounds; Short and Nye found few social class distinctions in either the range or frequency of involvement in self-reported illegal activities. As a result, “Short and Nye’s work stimulated much interest in both the use of self-report methodology and the substantive issue concerning the relationship between some measure of social status (socioeconomic status, ethnicity, race) and delinquent behavior” (Thornberry & Krohn, 2000, p. 37).
The hundreds of self-report surveys conducted in the past 60 years under the auspices of a variety of government agencies, academic institutions, and individuals largely confirm the findings from the earliest self-report studies of crime. However, as we will discuss in more detail later, several more recent studies—using more sophisticated methods, instruments, and analyses—have challenged the conclusions regarding little or no association between social class variables and involvement in delinquent and criminal behavior. In the following section, we describe four surveys, each national in scope and each used in numerous published studies, that arguably are standard bearers for collecting and analyzing self-report data. Not only do these surveys provide self-report data on involvement in illicit activities, but they also form the basis for research on and debate about techniques for improving the quality of self-report data. Two of the surveys (NYS and Add Health) measure criminal and delinquent behavior in addition to the use of controlled substances. The other two (MTF and NSDUH) focus on issues related to the use and abuse of legal and illegal substances.
National Youth Survey
First conducted in 1977, the NYS was designed specifically to provide both prevalence and incidence estimates of the commission of delinquent activities by youth. It is a longitudinal survey that uses a national-probability-based sample of young people who were 11 to 17 years old at the time of the first interview (Elliott, Huizinga, & Morse, 1986). Participants in this study were interviewed in their homes at one-year intervals through 1981 and at two- to three-year intervals at least through 1995. More than 90% of the original 1,725 participants have remained in the survey over time. Exhibit 4.2 provides a list of some of the questions used in the NYS.
Confidential, face-to-face interviews solicit information on the number of times the respondent has engaged in a specific delinquent or criminal activity within the past calendar year, with two different response sets used. If an individual’s response to an open-ended question indicates they have engaged in the particular activity more than 10 times, the interviewer asks the youth to select one of the following responses: (a) once a month, (b) once every 2 to 3 weeks, (c) once a week, (d) 2 to 3 times a week, (e) once a day, or (f) 2 to 3 times a day. Although described in nonlegalistic terms, the 47 activities asked about directly parallel offenses listed in the FBI’s Uniform Crime Report. Of the Part I offenses, only homicide is excluded; about 75% of Part II offenses are included, along with a wide range of misdemeanors and status offenses.
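The two-stage response format is easier to see in code than in prose. The function below is hypothetical, but it mirrors the logic just described: an exact count is retained when involvement is 10 times or fewer, and an interviewer-prompted frequency category is recorded beyond that:

```python
# Sketch of the NYS two-stage response format; the category labels paraphrase
# the chapter, and the function itself is an illustrative invention.

CATEGORIES = [
    "once a month",
    "once every 2 to 3 weeks",
    "once a week",
    "2 to 3 times a week",
    "once a day",
    "2 to 3 times a day",
]

def record_response(count, category=None):
    """Return the recorded answer for one delinquency item."""
    if count <= 10:
        return str(count)  # exact count is retained
    if category not in CATEGORIES:
        raise ValueError("responses above 10 require a frequency category")
    return category        # interviewer-prompted follow-up

print(record_response(4))                          # "4"
print(record_response(25, "2 to 3 times a week"))  # "2 to 3 times a week"
```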
Exhibit 4.3 presents data on prevalence and incidence rates of self-reported offending for the first five waves of the NYS. Because the NYS is a longitudinal survey, the panel of respondents reporting on their behavior for 1976 is the same group of people reporting for 1980, which is why the age range differs for each of the five years. With respect to prevalence rates (i.e., the percentage of respondents who report having engaged in certain types of crime) for felony assault and theft, both whites and blacks report lower involvement for 1980 than for 1976, and their self-reported rates of involvement in these offenses are nearly identical. For general delinquency, both whites and blacks report slightly higher involvement for 1980 than for 1976.
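Because the prevalence/incidence distinction recurs throughout this chapter, a minimal sketch of the two computations may help; the offense counts below are invented:

```python
# Prevalence vs. incidence from self-reported offense counts (hypothetical):
# prevalence is the share of respondents reporting any involvement; incidence
# is the number of offenses reported per respondent.

counts = [0, 0, 3, 1, 0, 7, 0, 0, 2, 0]  # offenses per respondent, past year

prevalence = sum(c > 0 for c in counts) / len(counts)  # 0.40 -> 40%
incidence = sum(counts) / len(counts)                  # 1.3 per respondent

print(f"prevalence: {prevalence:.0%}; incidence: {incidence} per respondent")
```

The same group can show falling prevalence while incidence holds steady if a shrinking share of active offenders commits offenses at a higher rate.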
The NYS provided the database for a number of important substantive and methodological studies in criminology—a March 2010 search of Criminal Justice Abstracts, using the search term “national youth survey,” resulted in 177 entries, 137 of which were journal articles. We will discuss some of these studies in more detail in subsequent sections of this chapter, but here we mention a few to provide a sense of the range of topics that can be addressed by NYS data. Several studies have focused on gender, race, and social class similarities and differences in self-reported offending (e.g., Ageton, 1983; Huizinga & Elliott, 1987; Smith, Visher, & Jarjoura, 1991; Zhang & Messner, 2000). Some researchers have examined the relationship between drug use and involvement in predatory crime or juvenile involvement in violent crime (e.g., Chaiken & Chaiken, 1990; Elliott et al., 1986) or used NYS data to test the gateway drug theory (Rebellon & Van Gundy, 2006). Still others have used NYS data to test explanatory theories of delinquent and criminal behavior, including deterrence, strain, power-control, and control balance theories, among others (e.g., Blackwell & Reed, 2003; DeLisi & Hochstetler, 2002; Heimer & Matsueda, 1994; Jang, 1999a, 1999b; Jang & Johnson, 2001; Lauritsen, 1999; Ostrowsky & Messner, 2005; Pogarsky, Kim, & Paternoster, 2005). Researchers have also used NYS data to examine the relationship between religiosity, moral beliefs, and delinquency (Desmond, Soper, & Purpura, 2009) and between marriage and involvement in crime (King, Massoglia, & MacMillan, 2007).
National Longitudinal Study of Adolescent Health (Add Health)
The Add Health study, initiated in 1994 under a grant from the National Institute of Child Health and Human Development and administered by the University of North Carolina’s Carolina Population Center, is a nationally representative longitudinal study that was originally designed to collect data on how social contexts (families, friends, peers, schools, neighborhoods, and communities) influence teenagers’ health and risk behaviors (National Institutes of Health, n.d.). Among the data collected in the Add Health study are suicidal intentions or thoughts, biomarkers, substance use and abuse, violence, delinquency, criminal offending, and involvement with the juvenile and criminal justice systems (Carolina Population Center, 2010). In addition, school administrators provided information regarding characteristics of the schools that respondents attended and, if participants agreed, data from high school transcripts.
Add Health has gone through four waves, with the first study involving a stratified, random sample of all high schools in the United States, administered in 1994/1995, and resulting in 90,118 school questionnaires, 164 school administrator questionnaires, 20,745 in-home interviews of adolescents, and 17,700 parent questionnaires. For the second stage of Wave I, an in-home sample of 27,000 adolescents was drawn, consisting of a core sample from each community in addition to selected oversamples. In this stage, parents were asked to complete a questionnaire about family and relationships. The second wave of Add Health was conducted in 1996 and consisted of close to 15,000 in-home interviews with adolescents and 128 school administrator questionnaires. Wave III of the study consisted of Wave I respondents who could be located and re-interviewed between July 2001 and April 2002, resulting in 15,197 young adult in-home interviews (and the collection of biomarker data) as well as 1,507 interviews with partners of the original respondents. Finally, Wave IV of Add Health, conducted between April 2007 and February 2009, consisted of 15,701 adult in-home interviews (of the original respondents, who were then between the ages of 24 and 32) and biomarker collection (Carolina Population Center, 2010).
A March 2010 search of Criminal Justice Abstracts using “Add Health” as the search term resulted in 18 entries, 16 of which were journal articles. The Carolina Population Center website lists several hundred more publications that have used Add Health data, and the National Institutes of Health’s website indicates that more than 3,000 scientists have used data from Waves I through III, resulting in the publication of more than 600 articles (National Institutes of Health, n.d.).
Criminological researchers have used Add Health data to study the relationship between family structure, family process, and economic factors and delinquency (Lieber, Mack, & Featherstone, 2009); the role of friendship sex composition in girls’ and boys’ involvement in serious violence (Haynie, Steffensmeier, & Bell, 2007); the impact of early puberty on experiencing violent victimization (Schreck, Burek, & Stewart, 2007); the role of social psychological processes in mediating the impact of neighborhood contexts on violence (Kaufman, 2005); and the impact of school and family attachments on drug use, delinquency, and violent behavior (Dornbusch, Erickson, Laird, & Wong, 2001). Others have taken advantage of some of the unique features of the Add Health data to address issues of criminological interest. As noted previously, the Add Health studies have collected extensive information on the parents of those surveyed—Foster and Hagan (2007) used these data to examine the effects of fathers’ incarceration on the detainment and exclusion of children during their transition to adulthood. Beaver, DeLisi, and Wright (2009) used the biomarker data in Add Health and concluded that genetic factors interact with delinquent peers and low self-control in predicting variation in delinquency.
Monitoring the Future: A Continuing Study of U.S. Youth
Since 1975, the MTF study has served as a primary source of information about illicit drug, alcohol, and tobacco use by young people in the United States (Johnston, O’Malley, & Bachman, 1999). Each year, published reports based on MTF data reveal the extent of use of several legal and illegal substances. The study also examines a variety of attitudes among 8th-, 10th-, and 12th-grade students, but it does not address involvement in other criminal and delinquent activities (MTF data are available online at http://www.monitoringthefuture.org).
MTF is an extraordinarily ambitious and costly project—between 15,000 and 20,000 students in each of three grades, in addition to between 9,000 and 16,000 college students and young adults, complete an MTF questionnaire each year. The data from any given MTF survey are directly comparable to those from previous years, largely because sampling techniques and question formats are consistent from one year to the next.
MTF began with a cross-sectional survey of a representative sample of all seniors in public and private high schools in the coterminous United States (Johnston et al., 1999), but it quickly became a longitudinal survey. With the exception of the first graduating class, follow-up questionnaires are mailed to a representative sample, consisting of approximately 2,400 individuals, of the members of each senior class who participated in the MTF. These follow-ups occur on seven occasions between the year of high school graduation and the year that the cohort reaches the age of 32, and they constitute the college student and young adult samples for each MTF survey year.
The MTF survey instrument has been modified over the years to accommodate the use of different types of drugs as well as corollary attitudes and behaviors. For example, a question on crack cocaine was first added to the instrument in 1986, and more detailed questions about all forms of cocaine were included in the 1987 version. Questions on crystal methamphetamine (ice) have been included since 1990; 8th, 10th, and 12th graders have been asked questions about MDMA (ecstasy) since 1996. Since 2007, MTF has placed emphasis on the use of prescription drugs (outside of medical supervision) and on the use of over-the-counter cough and cold medicines to get high. In addition to typical questions about licit and illicit drugs, such as age or grade at first use, frequency and quantity of use, and perceived availability of drugs, the MTF also queries respondents regarding their attitudes and beliefs about involvement in risky behaviors as well as their perceptions of the attitudes, beliefs, and behaviors of others with whom they associate.
The 2008 MTF survey included more than 46,000 students in 8th, 10th, and 12th grade in 386 secondary schools in the United States (Johnston, O’Malley, Bachman, & Schulenberg, 2009). Exhibit 4.4 shows the percentages of each MTF school sample group that reported having used various illicit drugs, alcohol, and tobacco at any time in the 30 days prior to completing the questionnaire in 2008. This table reveals that alcohol is the drug most frequently used by young people, with 43% of 12th graders, 29% of 10th graders, and 16% of 8th graders reporting they had consumed alcohol in that period.