Replication Crisis
Francis, G. (2012). Publication bias and the failure of replication in experimental psychology. Psychonomic Bulletin & Review, 19(6), 975-991.
Francis proposes a statistical test that compares the observed number of successful replications in a set of experiments against the number expected given their estimated power. This test is applied to several studies of prominent phenomena and highlights how publication bias contaminates some findings in experimental psychology. Additional simulated experiments demonstrate that using Bayesian methods of data analysis can reduce (and in some cases eliminate) the occurrence of publication bias.
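A minimal sketch of that logic in Python (the power values below are hypothetical illustrations, not data from Francis's paper): if every one of n reported experiments is significant, but the product of their estimated powers is small, the full set of results is improbable without some form of bias.

import math

# Hypothetical per-study power estimates for a set of experiments
# that all reported significant results.
estimated_powers = [0.62, 0.58, 0.71, 0.55, 0.66]

# Probability that all n independent experiments reach significance.
p_all_significant = math.prod(estimated_powers)

print(f"P(all {len(estimated_powers)} studies significant) = {p_all_significant:.3f}")

# Francis follows a 0.1 criterion: below it, an all-significant set of
# results looks too good to be true.
if p_all_significant < 0.1:
    print("Excess significance: the reported set is improbable without bias.")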
Kyle
Karen
What is the replication crisis in psychological terms?
The growing belief that the results of much published scientific and psychological research cannot be reproduced, and are therefore quite likely to be wrong.
Emma
Pros behind the replication crisis
Replication across studies is important in scientific research because it allows researchers to progress in topic areas using more modern approaches. It also makes research more credible and reliable for researchers, students, and other readers.
Causes of the replication crisis
Lack of detail or prior research in a specific topic area. Researchers may end up with poorly designed experiments, or with research that lacks the credibility and scientific grounding it needs to stand up once published.
Gelman, A., & Carlin, J. (2014). Beyond power calculations: Assessing type S (sign) and type M (magnitude) errors. Perspectives on Psychological Science, 9(6), 641-651.
Statistical errors (e.g., Type I and Type II errors) that may change the final results of research and so answer the stated hypothesis differently. Gelman and Carlin (2014, cited above) extend this to Type S (sign) and Type M (magnitude) errors; see the simulation sketch below.
Methodological errors. These cover a range of problems, such as unreliable or poorly chosen materials, bad samples, or human error during the procedure of a study. Any of these can affect the final results, producing findings that differ from the original study being replicated.
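A hedged simulation sketch of the Type S and Type M idea (illustrative numbers, not code from Gelman & Carlin): with a small true effect and a noisy, underpowered design, the results that do reach significance often have the wrong sign and greatly exaggerate the effect's magnitude.

import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.1      # small true effect
se = 0.5               # large standard error, i.e. a low-powered design
n_sims = 100_000

# Simulate many estimates of the effect and keep the significant ones.
estimates = rng.normal(true_effect, se, n_sims)
significant = np.abs(estimates / se) > 1.96   # two-sided test, alpha = .05
sig_estimates = estimates[significant]

type_s = np.mean(np.sign(sig_estimates) != np.sign(true_effect))
type_m = np.mean(np.abs(sig_estimates)) / abs(true_effect)

print(f"Power:  {significant.mean():.3f}")
print(f"Type S: {type_s:.3f} of significant results have the wrong sign")
print(f"Type M: significant results exaggerate the effect {type_m:.1f}-fold")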
Jake
The importance of replication in research
It is important for scientists, psychologists, and others to check their work, whether for scientific results or otherwise. Without this, researchers may struggle to find the changes needed to improve whatever it is they are researching.
Anderson, S. F., & Maxwell, S. E. (2017). Addressing the “replication crisis”: Using original studies to design replication studies with appropriate statistical power. Multivariate Behavioral Research, 52(3), 305-324.
Psychology is undergoing a replication crisis. The discussion surrounding this crisis has centered on mistrust of previous findings. Researchers planning replication studies often use the original study sample effect size as the basis for sample size planning. However, this strategy ignores uncertainty and publication bias in estimated effect sizes, resulting in overly optimistic calculations. A psychologist who intends to obtain power of .80 in the replication study, and performs calculations accordingly, may have an actual power lower than .80. We performed simulations to reveal the magnitude of the difference between actual and intended power based on common sample size planning strategies and assessed the performance of methods that aim to correct for effect size uncertainty and/or bias. Our results imply that even if original studies reflect actual phenomena and were conducted in the absence of questionable research practices, popular approaches to designing replication studies may result in a low success rate, especially if the original study is underpowered. Methods correcting for bias and/or uncertainty generally had higher actual power, but were not a panacea for an underpowered original study. Thus, it becomes imperative that 1) original studies are adequately powered and 2) replication studies are designed with methods that are more likely to yield the intended level of power.
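A minimal simulation sketch of the abstract's core claim (assumed setup and numbers, not Anderson and Maxwell's own code): when only significant original studies get "published", and the replication sample size is planned from the original's observed effect size, the actual power of the replication falls well short of the intended .80, because significant originals systematically overestimate the true effect.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d = 0.3           # true standardized effect
n_orig = 30            # per-group n of an underpowered original study
achieved = []

for _ in range(5_000):
    # Original two-group study.
    a = rng.normal(true_d, 1, n_orig)
    b = rng.normal(0, 1, n_orig)
    t, p = stats.ttest_ind(a, b)
    d_obs = (a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    if p >= 0.05 or d_obs <= 0:
        continue   # only significant originals are "published" and replicated
    # Per-group n giving .80 power for d_obs (normal approximation).
    n_rep = 2 * ((stats.norm.ppf(0.975) + stats.norm.ppf(0.80)) / d_obs) ** 2
    # Actual power of that replication under the TRUE effect.
    ncp = true_d * np.sqrt(n_rep / 2)
    achieved.append(1 - stats.norm.cdf(stats.norm.ppf(0.975) - ncp))

print(f"Intended power: 0.80; mean actual power: {np.mean(achieved):.2f}")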
Individuals get published
Cons behind the replication crisis
It can be costly to publish an article.
The lack of replication in psychology is systemic and widespread, and particularly the bias against publishing direct replications. In their survey of social science journal editors, Neuliep & Crandall [42] found almost three quarters preferred to publish novel findings rather than replications. In a parallel survey of reviewers for social science journals, Neuliep & Crandall [43] found over half (54 %) stated a preference for new findings over replications. Indeed, reviewers stated that replications were “Not newsworthy” or even a “Waste of space”. By contrast, comments from natural science journal editors present a more varied picture, with comments ranging from “Replication without some novelty is not accepted” to “Replication is rarely an issue for us…since we publish them.”
It takes time to read through publications.
Statistics on how many men and women get published, who gets published more, and the reasons behind these differences.
Chloe
Morawski, J. (2019). The replication crisis: How might philosophy and theory of psychology be of use?. Journal of Theoretical and Philosophical Psychology, 39(4), 218.
In response and with impressive speed, technical changes are being introduced to remedy perceived problems in data analysis, researcher bias, and publication practices. Yet throughout these large-scale renovations of scientific practice, scarce attention is given to philosophical and theoretical commitments as potential factors in the crisis problems. Analysis of involved psychologists’ understandings of scientific crisis, replication, and epistemology indicates the need for philosophical examinations. Likewise warranting close analyses are the associated assumptions about objectivity, credibility, and ontology.
Hawkins, R. X., Smith, E. N., Au, C., Arias, J. M., Catapano, R., Hermann, E., ... & Frank, M. C. (2018). Improving the replicability of psychological science through pedagogy. Advances in Methods and Practices in Psychological Science, 1(1), 7-18.
Replications are important to science, but who will do them? One proposal is that students can conduct replications as part of their training. As a proof of concept for this idea, here we report a series of 11 preregistered replications of findings from the 2015 volume of Psychological Science, all conducted as part of a graduate-level course. As was expected given larger, more systematic prior efforts, the replications typically yielded effects that were smaller than the original ones: The modal outcome was partial support for the original claim.
However, according to a recent survey by Gundersen (see Hutson 2018), this call is largely ignored as only 6% of the 400 algorithms presented at two top AI conferences in the past few years contained the code and only a third had pseudocode, or simplified summaries of the code. Furthermore, Stodden et al. (2018) have recently investigated the effectiveness of a replicational policy adopted by Science in 2011. Since then, the journal requires authors to make the data and code sufficient to replicate their study available to other researchers upon request. Stodden and colleagues selected 204 computational studies published in Science. Out of those, 24 papers (about 12%) provided code and data via external links or supplementary material. Stodden and colleagues contacted the authors of the remaining 180 studies. To start with, 26% of the authors failed to reply altogether while the others often responded evasively—e.g., by asking for reasons, making unfulfilled promises or directing the researchers back to supplementary material. In the end, it was possible to obtain artifacts for only 36% of the papers. Overall, Stodden and colleagues estimated about 25% of the models to be replicable. Their investigation has shown that the requirement to share data on demand after publishing is not being followed. Until recently the policy of Nature journals was similar. Adopted in 2014, the policy demanded that authors explicitly express readiness to share the code and data (“Does your code stand up to scrutiny?” 2018).
Louderback, E. R., Wohl, M. J. A., & LaPlante, D. A. (2020). Integrating open science practices into recommendations for accepting gambling industry research funding. Addiction Research & Theory, 1-9.
https://www.apa.org/ed/precollege/psn/2020/03/replication-crisis
An article posted by the American Psychological Association suggests that by leaning into the replication crisis, we can better the field as a whole: "In response to the replication crisis, more individuals have been embracing the movement of transparency in research. The Open Science Foundation (OSF) and the Society for Improving Psychological Science (SIPS) have created opportunities for researchers to brainstorm means of strengthening research practices and provide avenues to share replication results. Based on these changes, I would argue the issue of replication was not a crisis, but an awakening for researchers who had become complacent to the consequences of the toxic elements of the research culture."
Questions raised by the replication crisis
Maxwell, S. E., Lau, M. Y., & Howard, G. S. (2015). Is psychology suffering from a replication crisis? What does “failure to replicate” really mean? American Psychologist, 70(6), 487–498.
"Questions then arise about whether the first study results were false positives, and whether the replication study correctly indicates that there is truly no effect after all."
Do we consider enough, within the first publication of a study, that we may get false positives? Even though we are taught to critically evaluate, does our confirmation bias (or something else) cloud our judgement?
Shrout, P. E., & Rodgers, J. L. (2018). Psychology, science, and knowledge construction: Broadening perspectives from the replication crisis. Annual Review of Psychology, 69, 487-510.
"We recommend that researchers adopt open science conventions of preregi-stration and full disclosure and that replication efforts be based on multiple studies rather than on a single replication attempt."
Should we put more effort into replicating multiple studies instead of just one?
Miłkowski, M., Hensel, W. M., & Hohol, M. (2018). Replicability or reproducibility? On the replication crisis in computational neuroscience and sharing only relevant detail. Journal of computational neuroscience, 45(3), 163–172. https://doi.org/10.1007/s10827-018-0702-z
Hutson, M. (2018). Artificial intelligence faces reproducibility crisis. Science, 359(6377), 725-726. doi: 10.1126/science.359.6377.725
Colling, L., & Szűcs, D. (2018). Statistical inference and the replication crisis. Review of Philosophy and Psychology. doi: 10.1007/s13164-018-0421-4
The first condition under which p values are able to provide information on which to base inferences is that if the null hypothesis is true then p values should be uniformly distributed. For instance, if one was to repeatedly draw samples from a standard normal distribution centred on 0, and after each sample test the null hypothesis that μ = 0 (for example, by using a one sample t-test) one would obtain a distribution of p values approximately like the one shown in Fig. 1(a). This fact appears to contradict at least one common misinterpretation of p values, specifically the expectation that routinely obtaining high p values should be common when the null hypothesis is true—for instance the belief that obtaining p > .90 should be common when the null is true and p < .10 should be rare, when in fact they will occur with equal frequency (see Nickerson 2000 for common misinterpretations of p values). Herein lies the concept of the significance threshold. While, for instance, p ≈ .87, and p ≈ .02 will occur with equal frequency if the null is true, p values less than the threshold (defined as α) will only occur with the frequency defined by that threshold. Provided this condition is met, this sets an upper bound on how often one will incorrectly infer the presence of an effect when in fact the null is true.
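A minimal simulation sketch of the quoted claim (parameters are illustrative): drawing repeated samples under a true null and testing μ = 0 with a one-sample t-test yields p values that are approximately uniform, so p > .90 is no more common than p < .10, and p < α occurs at rate α.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n = 20_000, 20

# Repeatedly sample from a standard normal centred on 0 and test mu = 0.
pvals = np.array([
    stats.ttest_1samp(rng.normal(0, 1, n), 0).pvalue
    for _ in range(n_sims)
])

print(f"P(p > .90) = {np.mean(pvals > 0.90):.3f}")   # approx. 0.10
print(f"P(p < .10) = {np.mean(pvals < 0.10):.3f}")   # approx. 0.10
print(f"P(p < .05) = {np.mean(pvals < 0.05):.3f}")   # alpha bounds false positives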
How often do direct and conceptual replications occur in psychology? Screening 100 of the most-cited psychology journals since 1900, Makel, Plucker & Hegarty [40] found that approximately 1.6 % of all psychology articles used the term replication in the text. A further more detailed analysis of 500 randomly selected articles revealed that only 68 % using the term replication were actual replications. They calculated an overall replication rate of 1.07 % and Makel et al. [40] found that only 18 % of those were direct rather than conceptual replications.
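As a quick arithmetic check on those figures: 1.6 % of articles mentioning replication, of which 68 % were actual replications, gives 0.016 × 0.68 ≈ 0.011, i.e. roughly 1.1 %, consistent with the reported overall replication rate of 1.07 %.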
Laws, K. R. (2016). Psychology, replication & beyond. BMC Psychology, 4(1), 30. https://doi.org/10.1186/s40359-016-0135-2