Toward a psychology of Homo sapiens: Making psychological science more representative of the human population

Mostafa Salari Rad, Alison Jane Martingano, and Jeremy Ginges

Proc Natl Acad Sci U S A. 2018;115(45):11401–11405. doi: 10.1073/pnas.1721165115
ABSTRACT
Two primary goals of psychological science should be to understand what aspects of human psychology are universal and the way that context and culture produce variability. This requires that we take into account the importance of culture and context in the way that we write our papers and in the types of populations that we sample. However, most research published in our leading journals has relied on sampling WEIRD (Western, educated, industrialized, rich, and democratic) populations. One might expect that our scholarly work and editorial choices would by now reflect the knowledge that Western populations may not be representative of humans generally with respect to any given psychological phenomenon. However, as we show here, almost all research published by one of our leading journals, Psychological Science, relies on Western samples and uses these data in an unreflective way to make inferences about humans in general. To take us forward, we offer a set of concrete proposals for authors, journal editors, and reviewers that may lead to a psychological science that is more representative of the human condition.
Keywords: psychological science, diversity, methodology, culture, cognition
We begin this paper with the observation that two core goals in psychological science should be to understand human universals and the way in which context and culture produce variability. It is impossible to isolate universals without investigating variability. However, in 2008, Arnett (1) showed that 68% of studies in six top American Psychological Association journals relied on samples drawn from the United States and 96% relied on samples drawn from Western industrialized nations (Europe, North America, Australia, or Israel). Moreover, in studies carried out within Western countries, researchers tend to sample participants in a systematically biased manner; studies in the United States, for example, predominantly sampled European Americans. This means that 96% of these studies attempted to build theory based on empirical observations from participants who come from countries representing a mere 12% of the world's population.
We know that our theories are built on studying a small slice of humanity, and we also know that this slice is in many ways not representative of the whole. In an important review, Henrich et al. (2) showed that people from what they call WEIRD (Western, educated, industrialized, rich, and democratic) populations are outliers on many measurable psychological phenomena for which data are available, including those in such domains as visual perception, spatial reasoning, categorization, inferential induction, moral reasoning, and self-concepts. In one respect, the WEIRD paper (2) had a particularly large impact. Its claims were regarded as important, were generally not disputed (but see ref. 3), and the paper is highly cited. However, in terms of changing practices, its impact may have been minimal (4).
Without doubt, it is important that our research is appropriately powered, distinguishes between exploratory and confirmatory analyses, and uses appropriate analytical techniques. However, even the most perfect methods will not yield much if we mainly gather data from such a narrow slice of humanity. Overreliance on sampling a small and unrepresentative population is a barrier to documenting universals in human psychology, to understanding how culture and context influence variability, and to building meaningful theory that addresses key scientific and social issues.
In this paper, we investigate the extent to which psychological science has responded to this problem, as illustrated by Arnett (1), Henrich et al. (2), and others, by analyzing papers published in a leading multidisciplinary journal, Psychological Science, in 2014 and 2017. We chose Psychological Science because of its prominence within psychology and also because this journal has arguably been a leader in its focus on improving the reproducibility of our science. Our paper deals with a different but related topic, as concerns about diversity are, at their core, concerns about producing generalizable knowledge. The 2014 data were collected 6 y after publication of the paper by Arnett (1) and 4 y after publication of the paper by Henrich et al. (2). The 2017 data were collected 3 y after Psychological Science changed its practices in reporting and data analysis, with the goal of increasing the replicability of findings. While recent work has shown the persistence of WEIRD samples in subdisciplines (5), we were interested in the persistence of WEIRD samples across the discipline as a whole.
We explored two questions. First, we asked whether psychological scientists have responded to illustrations of the WEIRD problem by diversifying their sampling and becoming less reliant on WEIRD populations (e.g., refs. 1, 6). In other words, we asked to what degree the field shows an understanding that human psychology cannot rely on studies that sample WEIRD populations. Second, going beyond prior work that has identified the problem of overreliance on WEIRD samples and WEIRD scholars (1, 2, 5, 7, 8), we were interested in whether scholars sampling WEIRD populations showed an awareness of the importance of culture and context in influencing the generalizability of their empirical and theoretical observations. In particular, as psychological science has begun to pay more attention to issues that might influence replicability, we were interested in whether scholars have begun to pay more attention to the role of cultural context in influencing the generalizability of findings. Discussion of how we choose our samples and how we should report them is likely to produce more generalizable research (9), facilitate integrated data analysis (10), and enhance the reproducibility of our findings (11), as well as possibly to diversify our researchers (8).
In our first study, we analyzed all empirical articles published in Psychological Science in the year 2014. Overall, this analysis covered a total of 286 articles. We excluded commentaries, rejoinders, review articles, and studies involving nonhuman subjects from our analysis, leaving a total of 223 original research articles. If an article included multiple studies, each study was coded separately, yielding 428 individual studies. Following Arnett (1), studies that included samples from more than one country were coded as multiple studies, leaving a total of 450 samples for coding. In addition to geographical origin, we measured the WEIRDness of each sample by coding the sample's education, socioeconomic status (SES)/income, race/ethnicity, and gender, as well as methods of recruitment and compensation. We also analyzed and coded the content of each article to determine how it dealt with its sample(s). Specifically, we looked at the presentation of sample characteristics in the abstract, whether the authors generalized their conclusions to the entire human population, whether sample demographics were entered in reported analyses, whether authors discussed the limitations of their samples, and whether they offered a thoughtful avenue for future research to address those limitations (details of coding are provided in Methods).
The first thing to note is that 51 studies (11.41%) did not include any information that allowed us to clearly code the nation or region from which participants were drawn (see Table 1). While it is possible to guess from the papers that the overwhelming majority were collected from English-speaking countries, we simply note that this lack of information demonstrates the scope of the problem we are addressing. Of the remainder, 57.76% were drawn from the United States, 71.25% were drawn from English-speaking countries (including the United States), and 94.15% of studies sampled Western countries (including English-speaking countries, Europe, and Israel).
Table 1. National location of samples published in Psychological Science in 2014

  Region                               No. of samples
  United States                        227 (50.8%)
  Non-US English-speaking countries     53 (11.9%)
  Europe                                70 (15.7%)
  Israel                                20 (4.4%)
  Asia                                  17 (3.9%)
  Africa                                 6 (1.3%)
  Latin America                          3 (0.6%)
  Unknown                               51 (11.4%)
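For readers tallying regional origin in a similar coding exercise, a minimal sketch of how such a breakdown can be produced from coded data follows. The data frame here is hypothetical (a few toy rows, not our actual dataset); in our study, each row would be one of the 450 coded samples.

```python
import pandas as pd

# Hypothetical coded samples: one row per sample, with 'region' coded as in
# Table 1 ("Unknown" when no information allowed the nation to be identified).
samples = pd.DataFrame({
    "region": ["United States", "Europe", "United States", "Unknown",
               "Asia", "United States", "Israel", "Europe"],
})

# Tally counts and percentage shares by region, as in Table 1.
counts = samples["region"].value_counts()
shares = (samples["region"].value_counts(normalize=True) * 100).round(1)
print(pd.DataFrame({"n": counts, "%": shares}))
```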
Further analysis comparing sample characteristics across regions revealed fairly homogeneous samples across national borders. In most regions, the majority of samples were collected offline, and participants were young (adult) students of both genders who participated for a fixed fee (SI Appendix, Table S1). The reliance on undergraduate students for psychological research persists, albeit at a reduced rate that reflects the growing use of online samples. Twenty percent of American samples published in Psychological Science in 2014 used undergraduates, compared with 67% of samples in the Journal of Personality and Social Psychology in 2007 (1). The percentage of undergraduates in non-American samples was higher, at 41%.
It is striking that we cannot say much about whether studies carried out with Western samples sampled diverse ethnic and religious groups or were reliant on educated participants from a European background. This is because the vast majority of papers give no information about their sample apart from gender (Fig. 1). This finding is reminiscent of Rozin’s analysis (6) of articles published in a volume of the Journal of Personality and Social Psychology in 1994, where social class, religion, and ethnicity of participants were typically unspecified.
Fig. 1. Proportion of samples with demographic information reported, across all studies published in Psychological Science in 2014.
Perhaps the most disturbing aspect of our analysis was the lack of information given about the WEIRDness of samples, and the lack of consideration given to issues of cultural diversity in bounding the conclusions (SI Appendix, Table S2). Over 72% of abstracts contained no information about the population sampled, 83% of studies did not report analysis of any effects of the diversity of their sample (e.g., gender effects), over 85% of studies neglected to discuss the possible effects of culture and context on their findings, and 84% failed even to recommend studying the phenomena concerned in other cultures, implying that the results indicated something generalizable to humans outside specific cultural contexts. We note that there appear to be two groups of psychological scientists: when the cultural context of studies was mentioned, it tended to be discussed in a thoughtful manner, but on the whole, issues of culture and context were ignored. Among the 51 studies that contained no information allowing us to clearly infer the nation in which a population was sampled, the results were particularly concerning (SI Appendix, Table S2).
We conducted a follow-up study in 2017, coding samples used in research published in the last three issues of Psychological Science (volume 28, issues 10–12). Here, we investigated scholarly and publishing practices almost a decade after ref. 1 and 7 y after ref. 2. This analysis included 40 articles and a total of 94 studies (we again excluded commentaries, reviews, and studies that used nonhuman samples). Table 2 shows the regional origin of these samples.
Table 2. National location of samples published in the last three issues of Psychological Science in 2017

  Region                               No. of samples
  United States                        48 (51.1%)
  Non-US English-speaking countries     9 (9.9%)
  Europe                                9 (9.9%)
  Israel                                3 (3.2%)
  Asia                                  7 (6.6%)
  Africa                                0 (0%)
  Latin America                         0 (0%)
  Unknown                              22 (23.4%)
The “Unknown” group contains samples recruited via MTurk for which it was unclear whether participants were located in the United States, India, or elsewhere. It also contains some student samples (identifiable from compensation data) for which the school and/or region was unclear. One study used participants from both the United States and Germany, and another used participants from China and Korea; each is coded twice in this table.
Setting aside the samples of unidentifiable origin, participants from the United States constituted over half of all samples (compared with 50.8% in 2014). Over 70% of samples were from North America (United States and Canada), Europe (United Kingdom, Germany, Spain, and France), and Australia. Samples from Asia (China, South Korea, and Japan) comprised less than 7% of samples, and Israel was within the same range as in 2014 (3–4%). Not a single study sampled people from Africa, the Middle East, or Latin America. In sum, the results were similar to those of the first study: based on the information available, regions containing almost 85% of the world's population contributed less than 7% of the samples in the last three issues of Psychological Science published in 2017.
As in the data from 2014, most studies in the last three issues of 2017 reported the gender breakdown of their samples (83%). With respect to race and ethnicity, however, only 12 of the studies from the United States (n = 48) included any relevant information about participants' ethnicity. Over 91% of studies gave no data about their participants' SES, and around 60% lacked information about participant employment and education. The use of online samples remained high (64%), and university student samples comprised a quarter of all samples (compared with 30% in 2014).
In terms of content, only 10% of articles made any allusion to their samples in their abstracts. Even with generous coding, less than 20% of articles referred to the populations sampled in their discussion sections. When authors did mention the population sampled, barely half went beyond proforma discussion to offer thoughtful comments on possible cultural and contextual moderators. Overall, this snapshot of the latest publications in Psychological Science suggests that the pattern observed in the comprehensive study of samples from 2014 persisted 3 y later.
The problem of the lack of cultural diversity in psychological science is well established. However, with notable exceptions (12), there has been little action in response. Our analysis demonstrates what a cursory look at our leading journals would suggest: Despite powerful demonstrations of the importance of cultural diversity in human psychology, most papers in a leading psychology journal sample a very narrow cultural base and generalize inappropriately from that sample to humans more generally. If we agree that the science of psychology should aim to understand human cognition and behavior, and not simply give an empirical ethnography of WEIRD populations, something needs to be done. While prior work has made general policy suggestions that we build on (1, 2), these do not seem to have been sufficient to influence practice.
It is not clear why the demonstration of the problem of relying on WEIRD samples has not led to change. Indeed, a useful topic for future research would be to investigate the lay beliefs that psychological scientists use to justify their continued unreflective reliance on WEIRD samples, a reliance we seem reluctant to justify with formal argument. Here, we approach the problem of what to do by focusing on incentive structures. Our approach borrows from recent changes in many editorial policies that have been advanced to enhance the reproducibility of our science. We believe similar efforts are required to ensure that psychological science is the study of Homo sapiens and meets the goal of charting and explaining human variability and universals in cognition and behavior. Below, we suggest specific and modest changes in editorial policies to increase the accuracy of our reporting and to create incentives to encourage diversity. We note that while we believe these guidelines are best practices and will improve our science, we have been guilty of ignoring many of them in the past. We divide these guidelines into two sections: requirements of authors and requirements of editors and reviewers.
FOR AUTHORS
Required Reporting of Sample Characteristics. At the moment, most studies report the gender breakdown of their sample but little else. Many fail to disclose the country in which the research took place, and it is rare for authors to report how wealthy or educated their participants are. We recommend that authors be required to report a number of other characteristics of their sample, including age, SES, ethnicity, religion, and nationality. If this is not possible, authors should acknowledge this and explicitly flag a variable as having missing values or being inapplicable.
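As a purely hypothetical illustration of how such required reporting could be made structural, consider a record in which every characteristic must be either filled in or explicitly marked as missing or inapplicable. The field names and types below are our own invention, not an existing reporting standard.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical structured record for required sample reporting. A None value
# is an explicit declaration that the information is missing or inapplicable,
# rather than a silent omission.
@dataclass
class SampleReport:
    n: int
    gender_breakdown: Optional[str]  # e.g., "58% female, 42% male"
    mean_age: Optional[float]
    ses: Optional[str]               # socioeconomic status / income band
    ethnicity: Optional[str]
    religion: Optional[str]
    nationality: Optional[str]
    recruitment: Optional[str]       # e.g., "undergraduate pool", "MTurk"
    compensation: Optional[str]      # e.g., "course credit", "fixed fee"

# A study that did not collect SES, ethnicity, or religion must say so:
study1 = SampleReport(
    n=120, gender_breakdown="58% female, 42% male", mean_age=19.4,
    ses=None, ethnicity=None, religion=None, nationality="United States",
    recruitment="undergraduate pool", compensation="course credit",
)
```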
Explicitly Tie Findings to Populations. One of the first things we learn in research methods is that we should only generalize to the population from which our participants are sampled. We think it is uncontroversial to require that the abstracts and conclusions of manuscripts be written in a way that clearly links conclusions to the populations sampled. Currently, the only papers that seem to do so are those that concern themselves with cross-cultural or developmental work. Papers that report, for example, the effects of power on psychological and social functioning with a sample of undergraduates in the United States tend to draw abstract conclusions about the general influence of power on human cognition and behavior. As a thought experiment, imagine the following. Instead of an abstract that reads “we find that X causes Y,” the abstract reads “we find that X causes Y in a sample of MTurk participants in the United States.” The latter formulation has the benefit of being accurate. It also tempers false conclusions and makes transparent the true novelty and interest of the paper. We believe it also encourages research from other cultures and contexts. If it is of interest to know what factors are associated with romantic attraction in the United States, it is surely also interesting to know the markers of friendship in Indonesia.
Justify the Sampled Population. Authors should justify their choice to sample a certain population. In the same way that we now (correctly) ask authors to justify their sample size, we should also ask them to justify the population they choose to sample. We think it is fine to answer that the authors chose the most convenient sample to conduct an initial test of their theory. Indeed, this is often the most sensible thing to do (2), and an educated student sample might be a theoretically interesting and important population with which to test some theories (13).
Discuss Generalizability of the Finding. Authors should discuss the theoretical implications of their sample, including an informed discussion of the likely effects of culture and context on the generalizability of their findings (9). Thoughtful discussion of how culture and context might influence the phenomena in question could encourage an important stream of empirical and theoretical work. Moreover, it ensures clear thinking regarding the generalizability of findings. Our approach here is the opposite of that of Simons et al. (9), who suggest that authors be asked to include statements regarding constraints on the generality of their findings. Rather than beginning with the premise that a finding is generalizable across different cultural contexts, we think it is more appropriate to begin by tying a finding to the population sampled, and then discussing the way in which the phenomena in question may or may not generalize.
Analytical Investigation of Existing Diversity. While most studies report a gender breakdown, few report analyses of whether findings are moderated by gender. Along with a fuller and more transparent report of the characteristics of samples, we can use whatever diversity exists within a sample to investigate the impact of cultural diversity. This does not detract from the need to study non-WEIRD samples, but it is a modest advance on current practices.
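As a minimal sketch of such an analysis (the variable names and data are hypothetical, and any standard regression framework would serve equally well), one can test whether an experimental effect is moderated by a reported demographic such as gender by including an interaction term:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical participant-level data: 'outcome' is the dependent measure,
# 'condition' the manipulation, and 'gender' a reported demographic.
df = pd.DataFrame({
    "outcome":   [3.1, 4.2, 2.8, 5.0, 3.9, 4.4, 2.5, 4.8],
    "condition": ["control", "treatment"] * 4,
    "gender":    ["f", "f", "m", "m"] * 2,
})

# The condition x gender interaction term tests whether the effect of the
# manipulation differs by gender, i.e., whether gender moderates it.
model = smf.ols("outcome ~ condition * gender", data=df).fit()
print(model.summary())
```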
FOR EDITORS AND REVIEWERS
Non-WEIRD = Novel and Important. Journal editors should instruct reviewers to treat non-WEIRDness as a marker of the interest and importance of a paper. Generally, reviewers and editors consider the importance, novelty, and interest of manuscripts when making publication decisions. Given the state of the field, we argue that the diversity of samples should be considered a formal contributing factor to how interesting a paper is, along with its theoretical contribution and empirical novelty.
Diversity Badges. Journals are beginning to introduce badges to encourage good methodological research practices. The same should be done to create incentives to sample more diverse populations. To that end, journals could introduce badges indicating that a manuscript has sampled a population that varies from WEIRD populations on one or more dimensions. A paper sampling a non-Western but educated population from an industrialized, rich, and democratic society would receive one badge. A paper that includes a study sampling a non-Western population living in a nonindustrialized and nonrich community might receive three diversity badges. We note elsewhere that we do not believe all psychological scientists need to become cross-cultural researchers. However, diversity is not always difficult. A diversity badge could result from sampling low-income, immigrant, or indigenous populations within a few miles of the university.
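To make the incentive concrete, here is a hypothetical scoring rule of our own devising (not any journal's actual policy) that awards one badge per WEIRD dimension on which the sampled population deviates:

```python
# One badge per WEIRD dimension (Western, Educated, Industrialized, Rich,
# Democratic) on which the sampled population deviates. Illustrative only.
WEIRD_DIMENSIONS = ("western", "educated", "industrialized", "rich", "democratic")

def diversity_badges(sample: dict) -> int:
    """Count the WEIRD dimensions on which the sample deviates.

    `sample` maps each dimension to True if the sampled population fits the
    WEIRD profile on that dimension, and False if it deviates from it.
    """
    return sum(not sample[dim] for dim in WEIRD_DIMENSIONS)

# Non-Western but otherwise WEIRD sample: one badge.
print(diversity_badges({"western": False, "educated": True,
                        "industrialized": True, "rich": True,
                        "democratic": True}))  # 1

# Non-Western population in a nonindustrialized, nonrich community: three badges.
print(diversity_badges({"western": False, "educated": True,
                        "industrialized": False, "rich": False,
                        "democratic": True}))  # 3
```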
Diversity Targets. We think it reasonable to suggest the goal of at least 50% of papers sampling populations that deviate from WEIRD populations in at least one dimension. Some may argue that this is low, and that a good goal would be 80%. Setting a clear target is a way of countering implicit biases and current incentive structures. If Psychological Science were to announce that, by 2022, half of its papers would include studies sampling at least one non-WEIRD population, it would influence editors, reviewers, and scientists to change their practices to meet or take advantage of this goal. We recognize that this may be the most controversial of our recommendations. However, we think it is no different from setting diversity goals in hiring practices in the workplace. Our science will be better if our scientists come from more diverse cultural backgrounds, and if we sample more diverse populations.
This paper has demonstrated that the reliance on sampling WEIRD populations persists in psychological science. Moreover, we have shown that our science seems to ignore the problem, continuing to use WEIRD samples in a mostly unreflective manner. To deal with the problem, we suggest modest changes in how authors write up their results and in the way editors and reviewers treat submitted manuscripts. Broadly, we suggest that rather than beginning with the assumption that work on WEIRD populations has uncovered psychological phenomena generalizable to all humans, we should begin by linking our findings to the populations sampled, and then make theoretically thoughtful and explicit claims about generalizability and variability across contexts.
We conclude with two thoughts. First, we do not intend this paper as a scolding of scholars who use student and online samples. Some of the best psychological science has done so, and we use such samples ourselves. Rather, our point is that if the field as a whole focuses its efforts on sampling a narrow slice of humanity, the conclusions we draw will be accordingly narrow. This narrowness prevents us from examining key theoretical puzzles that should motivate more of our science: What are human universals, and how do context and culture influence variability in different domains of human cognition and behavior? At the moment, we run the risk of knowing more, and with greater certainty, about the psychology of a small group of humans. Second, the problem is not simply one of narrow samples but also of a lack of diversity among the scholars running studies. The response to a lack of diversity cannot be simply to encourage scholars from Western industrialized societies to go and study other cultures. That would be a positive step, but not sufficient to solve the problem. A diverse science must include a diverse group of scientists (4, 14–16), who will be interested in asking different, perhaps non-WEIRD, questions. The problem as we see it is this: How can we create incentives to increase the diversity of our science in a way that enhances its ability to address important scientific problems in understanding the psychology of humans? We hope that this article and its recommendations will help move us in the right direction.
METHODS
We describe our coding choices for the first study here, as we used the same methods in the second study. Our analysis excluded commentaries, rejoinders, review articles, and studies involving nonhuman subjects, leaving a total of 223 original research articles. If an article included multiple studies, each study was coded separately, yielding 428 individual studies. Following the procedure of Arnett (1), studies that included samples from more than one country were coded as multiple studies, leaving a total of 450 samples for coding.
The national location of each sample was coded using the same procedure as Arnett (1). Codes were grouped by region: Europe, Asia, Latin America, Africa, and the Middle East. The United States was a separate category to evaluate whether American samples still dominate psychological research. There was also a category of “English-speaking countries,” which was developed by Arnett (1) to represent a group of countries with strong cultural and historical ties to the United States: the United Kingdom, Canada, Australia, and New Zealand. Israel was also coded separately.
In addition to evaluating the national location of our samples, we coded for several other sample characteristics. In this way, we hoped not only to capture the WEIRDness of a sample based on its geographical location but also to investigate how those who become psychology subjects differ from the broader WEIRD populations they are drawn from. Each sample was therefore additionally coded for sample size, age, nationality, online/offline participation, compensation received, education level, income/SES, race/ethnicity, and gender. While coding, it became clear that most studies (91.12%) did not include information about the income/SES of their samples; therefore, this variable was recoded as information either available or not available.
Our analysis of each article was not confined to the characteristics of the sample used but also evaluated whether the authors discussed the limitations of their samples, such as the potential cultural boundedness of their findings. Therefore, we performed a content analysis of the abstract, results, and discussion sections of each article. We coded the abstract for whether information about the sample was described in a detailed way, described in a basic way, or not reported at all. Detailed information comprised reporting participant demographics, such as the gender, age, race, nationality, or occupation of the participants. Content analysis of the results section aimed to determine whether sample diversity (e.g., age, gender, race) was used in the data analysis in any form. This included using demographics as covariates, comparing different groups, or noting that the results did not differ along these dimensions. Next, we assessed the authors' conclusions in the discussion section as to whether they tied their findings to the population sampled (coded “specific”) or assumed that the findings were generalizable to all humans. We were generous in our coding and coded as specific any attempt to tie a finding to a specific population. For example, the following conclusion was considered specific because it was limited to children: “This robust relationship [. . .] provides strong evidence that young children can access and track an internal estimate of their uncertainty” (18). This is a conservative coding because not mentioning cultural context (as this paper fails to do) ignores its importance in child development. We mention this to demonstrate that we tended to err on the side of caution in our coding. If anything, our results will overestimate attention to diversity in this literature.
We also coded for whether authors discussed limitations of their sample(s), and in what form (absent, proforma, or detailed). Discussions were coded as proforma if they were generic and did not consider how sampling limitations could affect the results and/or conclusions: For example, “The sample is largely Caucasian and middle or upper-middle class and is composed of heterosexual married couples only. Generalization to other groups requires further research.” (19). Incidentally, this example would also be coded as providing a recommendation for further work, our last aspect of content analysis.
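For readers who wish to replicate or extend this coding, the scheme described above can be summarized as a structured record. The field names and types in this sketch are our own shorthand for the categories just described, not a transcript of our actual coding sheets.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class AbstractInfo(Enum):
    DETAILED = "detailed"        # demographics such as gender, age, race reported
    BASIC = "basic"
    NOT_REPORTED = "not reported"

class LimitationsDiscussion(Enum):
    ABSENT = "absent"
    PROFORMA = "proforma"        # generic, not tied to results or conclusions
    DETAILED = "detailed"

@dataclass
class StudyCode:
    # Sample characteristics
    sample_size: Optional[int]
    region: str                   # e.g., "United States", "Europe", "Unknown"
    online: Optional[bool]        # online vs. offline participation
    compensation: Optional[str]
    education: Optional[str]
    ses_reported: bool            # recoded: income/SES information available or not
    ethnicity: Optional[str]
    gender_reported: bool
    # Content analysis
    abstract_info: AbstractInfo
    diversity_in_analysis: bool   # demographics used as covariates, comparisons, etc.
    conclusions_specific: bool    # conclusions tied to the population sampled
    limitations: LimitationsDiscussion
    recommends_future_work: bool
```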
Three papers, which were excluded from our analyses, warrant discussion. These studies used massive international databases to collect data from participants in 158, 56, and 57 nations, respectively (17, 20, 21). Their authors must be applauded for conducting such impressive cross-cultural work, but because our unit of analysis was studies and not papers, including these in our analysis would artificially boost the number of samples collected from underrepresented regions; thus, these papers were excluded from further analyses.
SUPPLEMENTARY MATERIAL
Supplementary File
ACKNOWLEDGMENTS
This research was supported by funding from the National Science Foundation (Grant SES-0962080).
FOOTNOTES
The authors declare no conflict of interest.
This paper results from the Arthur M. Sackler Colloquium of the National Academy of Sciences, “Pressing Questions in the Study of Psychological and Behavioral Diversity,” held September 7–9, 2017, at the Arnold and Mabel Beckman Center of the National Academies of Sciences and Engineering in Irvine, CA. The complete program and video recordings of most presentations are available on the NAS website at www.nasonline.org/pressing-questions-in-diversity.
This article is a PNAS Direct Submission.
Data deposition: The data reported in this paper are available through Open Science Framework (https://osf.io/t2r87).
This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1721165115/-/DCSupplemental.
REFERENCES
1. Arnett JJ. The neglected 95%: Why American psychology needs to become less American. Am Psychol. 2008;63:602–614.
2. Henrich J, Heine SJ, Norenzayan A. The weirdest people in the world? Behav Brain Sci. 2010;33:61–83, discussion 83–135.
3. Gaertner L, Sedikides C, Cai H, Brown JD. It’s not WEIRD, it’s WRONG: When Researchers Overlook uNderlying Genotypes, they will not detect universal processes. Behav Brain Sci. 2010;33:93–94.
4. Medin D, Ojalehto B, Marin A, Bang M. Systems of (non-)diversity. Nat Hum Behav. 2017;1:1–5.
5. Nielsen M, Haun D, Kärtner J, Legare CH. The persistent sampling bias in developmental psychology: A call to action. J Exp Child Psychol. 2017;162:31–38.
6. Rozin P. Social psychology and science: Some lessons from Solomon Asch. Pers Soc Psychol Rev. 2001;5:2–14.
7. Medin DL, Atran S. The native mind: Biological categorization and reasoning in development and across cultures. Psychol Rev. 2004;111:960–983.
8. Cheek NN. Scholarly merit in a global context: The nation gap in psychological science. Perspect Psychol Sci. 2017;12:1133–1137.
9. Simons DJ, Shoda Y, Lindsay DS. Constraints on generality (COG): A proposed addition to all empirical papers. Perspect Psychol Sci. 2017;12:1123–1128.
10. Hofer SM, Piccinin AM. Integrative data analysis through coordination of measurement and analysis protocol across independent longitudinal studies. Psychol Methods. 2009;14:150–164.
11. Greenfield PM. Cultural change over time: Why replicability should not be the gold standard in psychological science. Perspect Psychol Sci. 2017;12:762–771.
12. Kitayama S. Journal of personality and social psychology: Attitudes and social cognition. J Pers Soc Psychol. 2017;112:357–360.
13. Gächter S. (Dis)advantages of student subjects: What is your research question? Behav Brain Sci. 2010;33:92–93.
14. Baumard N, Sperber D. Weird people, yes, but also weird experiments. Behav Brain Sci. 2010;33:84–85.
15. Medin D, Bennis W, Chandler M. Culture and the home-field disadvantage. Perspect Psychol Sci. 2010;5:708–713.
16. Meadon M, Spurrett D. It’s not just the subjects–There are too many WEIRD researchers. Behav Brain Sci. 2010;33:104–105.
17. Pitesa M, Thau S. A lack of material resources causes harsher moral judgments. Psychol Sci. 2014;25:702–710.
18. Vo VA, Li R, Kornell N, Pouget A, Cantlon JF. Young children bet on their numerical skills: Metacognition in the numerical domain. Psychol Sci. 2014;25:1712–1721.
19. Uchino BN, Smith TW, Berg CA. Spousal relationship quality and cardiovascular risk: Dyadic perceptions of relationship ambivalence are associated with coronary-artery calcification. Psychol Sci. 2014;25:1037–1042.
20. Tay L, Morrison M, Diener E. Living among the affluent: Boon or bane? Psychol Sci. 2014;25:1235–1241.
21. Tucker-Drob EM, Cheung AK, Briley DA. Gross domestic product, science interest, and science achievement: A person × nation interaction. Psychol Sci. 2014;25:2047–2057.