1 Automated Syllabus of Metascience Papers

Built by Rex W. Douglass (@RexDouglass); GitHub; LinkedIn

Papers curated by hand, summaries and taxonomy written by LLMs.

Submit a paper to be added for review

2 Metascience

2.1 Publication Bias

  • Consider adopting Registered Reports (RRs) as a publication format, where peer review and the decision to publish occur before results are known, in order to reduce publication bias and increase the credibility of findings. (Scheel, Schijen, and Lakens 2021)

  • Carefully consider and address the risk of selective reporting of nonsignificant results and reverse p-hacking, where researchers manipulate data or analyses to achieve nonsignificant results, as these practices can distort the scientific literature and lead to biased estimates of effect sizes. (Chuard et al. 2019)

  • Carefully consider the limitations of publication bias detection methods, such as low statistical power and the assumption of a homogeneous true effect size, and choose methods suited to the specific context and characteristics of your meta-analysis (see the Egger's-test sketch at the end of this list). (Aert, Wicherts, and Assen 2019)

  • Carefully consider the potential impact of including unpublished and grey literature study data in meta-analyses, as these sources may introduce biases and affect the pooled effect estimates and overall interpretation of the results. (C. M. Schmucker et al. 2017)

  • Interpret mixed results (i.e., a combination of significant and non-significant findings) as potentially providing strong evidence for the alternative hypothesis, especially when statistical power is high and Type I error rates are controlled (see the binomial sketch at the end of this list). (Lakens and Etz 2017)

  • Exercise caution when interpreting the results of meta-analyses using p-uniform and p-curve methods, particularly when dealing with heterogeneous data sets, as these methods may produce erratic behavior, implausible estimates, or overestimate effect sizes. (Aert, Wicherts, and Assen 2016)

  • Consider using pre-analysis plans (PAPs) to specify hypotheses and analyses prior to data collection in order to minimize issues of data and specification mining and provide a record of the full set of planned analyses. (“Editorial Statement on Negative Findings” 2015)

  • Prioritize sharing negative results, even if they are not as highly valued in the current scientific culture, because they contribute to filling gaps in knowledge and moving towards unabridged science. (Matosin et al. 2014)

  • Base your power calculations on realistic external estimates of effect sizes, rather than on inflated estimates derived from preliminary data or arbitrary thresholds of minimal substantive importance (see the design-analysis sketch at the end of this list). (Gelman and Carlin 2014)

  • Publish your protocols and statistical analysis plans before conducting clinical trials, and strictly adhere to them during analysis to avoid selective reporting biases caused by discrepancies in analyses between publications and other study documentation. (Dwan et al. 2014)

  • Consider the impact of dissemination bias when conducting systematic reviews, as a substantial portion of studies approved by research ethics committees (RECs) or included in trial registries remain unpublished, leading to selective reporting of results and potentially skewed conclusions. (C. Schmucker et al. 2014)

  • Prioritize the timely submission of your studies to peer-reviewed journals to prevent non-publication, which is often attributed to lack of time or low priority. (Fujian Song, Loke, and Hooper 2014)

  • Compare study protocols to final publications to detect and mitigate outcome reporting bias; in the cited review, statistically significant outcomes were more likely to be fully reported than nonsignificant ones, with odds ratios of 2.2 to 4.7. (Dwan et al. 2013)

  • Carefully consider and address potential publication bias when interpreting the results of meta-analyses, particularly if statistically significant outcomes are overrepresented in the sample of studies included. (Kicinski 2013)

  • Register your studies prospectively, search thoroughly for unpublished studies, and consider using statistical methods to assess and mitigate publication bias in your analyses. (F. Song et al. 2010)

  • Compare your final trial publications to your initial protocols to ensure accurate and comprehensive reporting of outcomes, as evidence indicates that statistically significant outcomes have higher odds of being fully reported compared to non-significant ones, leading to biased estimates of treatment effects. (Dwan et al. 2008)

  • Employ rigorous and transparent methods to identify, select, extract, and analyze data in order to minimize various forms of bias in systematic reviews, such as publication bias, language bias, and time-lag bias. (Tricco et al. 2008)

  • Be aware of and attempt to correct for potential biases leading to an excess of statistically significant findings in a body of evidence, such as publication bias, selective analyses, and selective outcome reporting. (J. P. Ioannidis and Trikalinos 2007)

  • Critically appraise the control of bias in individual trials, as the influence of different components like adequate randomization, blinding, and follow-up cannot be predicted. (Gluud 2006)

  • Report a comprehensive set of results rather than cherry-picking statistically significant or otherwise favorable findings, in order to minimize the risk of introducing publication bias in situ (PBIS) and ensure the validity of the overall body of literature. (Phillips 2004)

  • Be aware of the potential for the “winner’s curse” in scientific publication, whereby the most extreme and spectacular results may be preferentially published, potentially leading to overestimation and distortion of the true relationship being studied. (n.d.)

  • Carefully compare the characteristics of clinical trials reported in regulatory submissions to the FDA with those reported in published journal articles, as discrepancies in primary outcomes, statistical significance, and conclusions may indicate selective reporting or publication bias. (n.d.)
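
A minimal, illustrative sketch of one common detection tool mentioned above, Egger's regression test for funnel-plot asymmetry (Aert, Wicherts, and Assen 2019). The number of studies, effect sizes, and standard errors below are simulated assumptions, not values from the cited paper.

```python
# Illustrative sketch of Egger's regression test for funnel-plot asymmetry,
# one common publication-bias check. Effect sizes and standard errors are
# simulated with no bias built in, so the intercept should be close to zero.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
k = 30                                  # number of studies (assumed)
se = rng.uniform(0.05, 0.5, size=k)     # per-study standard errors (assumed)
theta = rng.normal(0.2, se)             # observed effects around a true effect of 0.2

# Regress the standardized effect (theta / se) on precision (1 / se);
# a non-zero intercept suggests small-study effects consistent with bias.
z = theta / se
X = sm.add_constant(1.0 / se)
fit = sm.OLS(z, X).fit()
print(f"Egger intercept = {fit.params[0]:.3f} (p = {fit.pvalues[0]:.3f})")
```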
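
The "mixed results" point (Lakens and Etz 2017) rests on a simple binomial argument: when power is high, a mixture of significant and nonsignificant results is far more probable under the alternative than under the null. A minimal sketch, assuming three studies, alpha = .05, and 80% power (illustrative values, not taken from the paper):

```python
# Illustrative binomial sketch of the "mixed results" argument: with three
# studies, exactly two significant results are far more likely if a true
# effect exists (power = .80) than if it does not (alpha = .05).
from scipy.stats import binom

n_studies, alpha, power = 3, 0.05, 0.80
p_under_h0 = binom.pmf(2, n_studies, alpha)
p_under_h1 = binom.pmf(2, n_studies, power)
print(f"P(2 of 3 significant | no effect)   = {p_under_h0:.4f}")
print(f"P(2 of 3 significant | true effect) = {p_under_h1:.4f}")
print(f"Likelihood ratio favoring H1        ≈ {p_under_h1 / p_under_h0:.0f}")
```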
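
The power-calculation point (Gelman and Carlin 2014) can be made concrete with a small design-analysis simulation: when the assumed true effect is small relative to the standard error, the estimates that happen to reach significance exaggerate it. The true effect (0.1) and standard error (0.2) below are assumed values, not figures from the paper.

```python
# Illustrative design-analysis simulation in the spirit of Gelman and Carlin
# (2014): with a small true effect relative to the standard error, the study
# is underpowered and the estimates that reach significance exaggerate the
# effect (Type M error).
import numpy as np

rng = np.random.default_rng(1)
true_effect, se, n_sims = 0.1, 0.2, 100_000
estimates = rng.normal(true_effect, se, size=n_sims)
significant = np.abs(estimates) > 1.96 * se      # two-sided test at alpha = .05

print(f"Power ≈ {significant.mean():.2f}")
print(f"Average exaggeration among significant estimates ≈ "
      f"{np.abs(estimates[significant]).mean() / true_effect:.1f}x")
```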

2.2 Replication Crisis

  • Carefully consider the trade-offs involved in choosing between pursuing novel but risky hypotheses (low prestudy probabilities) versus more reliable but less exciting ones (high prestudy probabilities), as well as the impact of statistical power and sample size on the reliability of published research. (Campbell 2022)

  • Be aware of the potential impact of researcher degrees of freedom, or “forking paths”: the many ways that data can be analyzed and presented, which can produce spurious results. Pre-register analysis plans and avoid cherry-picking results from post-hoc data exploration. (Gelman 2022)

  • Ensure strict separation between training and test data in predictive modeling; practices such as imputing missing values using information from both datasets, reusing imputed datasets for training and testing, or using proxy variables for the target variable introduce data leakage and lead to overly optimistic performance claims (see the pipeline sketch at the end of this list). (Pineau et al. 2020)

  • Embrace variation and uncertainty, avoid the temptation to seek statistical significance as a definitive proof of an effect, and recognize the limitations of peer review and statistical significance testing in ensuring the accuracy of scientific findings. (Gelman 2018)

  • Ensure adequate statistical power to reduce the risk of false positives and effect size exaggeration, particularly in cognitive neuroscience where power tends to be lower than in psychology. (Szucs and Ioannidis 2017)

  • Prioritize collecting high-quality data through larger sample sizes, reducing measurement error, and employing within-person designs, while avoiding the pitfalls of null hypothesis significance testing and selectively reporting statistically significant results. (Gelman 2017)

  • Prioritize minimizing the rate of false positives and increasing the base rate of true hypotheses to enhance the reliability of scientific discovery, particularly through replication efforts. (McElreath and Smaldino 2015)

  • Consider implementing practices such as large-scale collaborative investigation, a replication culture, registration, exchange, reproducibility practices, appropriate statistical methods, standardized definitions and analysis techniques, stricter thresholds for claiming discoveries or successes, improved study design standards, better communication and dissemination systems, and increased training of the scientific workforce in methodology and statistics, in order to improve the reliability and effectiveness of your research. (J. P. A. Ioannidis and Khoury 2014)

  • Move beyond relying solely on p-values and instead incorporate multiple factors such as effect sizes, plausible mechanisms, and replication efforts to ensure robust and reliable conclusions. (Gaudart et al. 2014)

  • Avoid relying solely on statistical significance as a measure of the validity of your findings, as it provides limited information about whether an effect is real. Instead, consider other factors, such as prior evidence and biological plausibility, to support your conclusions. (Sullivan 2007)

  • Acknowledge and discuss the limitations of your work in a dedicated section, as this helps readers understand the validity and applicability of the findings, and ultimately contributes to the integrity and transparency of the scientific literature. (J. P. A. Ioannidis 2007)

  • Aim for multiple replications of statistically significant findings to improve the positive predictive value (PPV) of true relationships, particularly when the pre-study odds of a true relationship are low (see the PPV sketch at the end of this list). (Moonesinghe, Khoury, and Janssens 2007)

  • Strive to make your work reproducible by creating and sharing a replication dataset containing all necessary information to reproduce your results, including raw data, codebooks, software, and analytic scripts. (King 1995)

  • Prioritize high-quality, well-powered studies with appropriate controls and careful consideration of potential sources of bias, as initial findings may be subject to the Proteus Phenomenon, where subsequent studies reveal smaller or even contradictory effects. (n.d.)

  • Prioritize conducting rigorous, unbiased studies that address important questions and utilize appropriate methods, including representative samples, valid measures, and robust analytic strategies, to minimize the likelihood of producing misleading or incorrect findings. (NA?)

  • Prioritize collaboration and transparent communication during the replication process, recognizing that transparency alone may not be sufficient to ensure reproducible results due to potential issues related to study design, data quality, and other contextual factors. (NA?)
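
The train/test-separation point (Pineau et al. 2020) is easiest to see in code: every preprocessing step that learns from the data (imputation, scaling) should be fit on the training split only. A minimal scikit-learn sketch on synthetic data; the dataset, missingness rate, and model choice are placeholders, not details from the cited report.

```python
# Illustrative sketch of leakage-free preprocessing: the imputer and scaler
# are fit on the training split only and then applied, unchanged, to the test
# split. Data are synthetic noise, so accuracy will sit near chance; the point
# is the structure, not the score.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
X[rng.random(X.shape) < 0.1] = np.nan     # inject some missing values
y = (rng.random(500) < 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])
model.fit(X_train, y_train)               # all preprocessing fit on training data only
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```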
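
The replication point (Moonesinghe, Khoury, and Janssens 2007) follows from the positive predictive value formula PPV = R(1 − β) / (R(1 − β) + α), where R is the pre-study odds of a true relationship, 1 − β is power, and α is the significance level. A minimal sketch of how requiring several significant studies raises PPV, under the simplifying assumption that the studies are independent; the R, α, and power values are illustrative, not figures from the paper.

```python
# Illustrative positive predictive value (PPV) calculation:
# PPV = R * (1 - beta) / (R * (1 - beta) + alpha), with R the pre-study odds
# of a true relationship. Requiring k independent significant studies
# (a simplifying assumption) raises PPV sharply.
def ppv(prior_odds, alpha, power, n_studies=1):
    hit = prior_odds * power ** n_studies     # true effect, significant every time
    false_alarm = alpha ** n_studies          # no effect, significant every time
    return hit / (hit + false_alarm)

for k in (1, 2, 3):
    print(f"{k} significant study/studies: PPV = {ppv(0.10, 0.05, 0.80, k):.2f}")
```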

2.3 Open Science

  • Prioritize open science practices such as open access, open data, preregistration, reproducible analyses, replications, and teaching open science to enhance the transparency, reproducibility, and credibility of your work. (Crüwell et al. 2019)

  • Consider using natural experiments, like the unexpected NIH-DuPont agreements that suddenly provided low-cost access to hundreds of genetically engineered mice, to identify causal relationships in observational data. (Murray et al. 2016)

2.4 Research Misconduct

  • Ensure transparency and accountability in your work by promptly addressing and disclosing any instances of noncompliance or misconduct discovered by regulatory bodies like the FDA, rather than allowing them to go unreported in the peer-reviewed literature. (Seife 2015)

  • Prioritize integrity at every stage of the scientific process, as misconduct ranging from redundant publication and undisclosed conflicts of interest to outright fabrication of data can compromise the validity and reliability of findings. (Williams 1997)

2.5 Peer Review

  • Consider the tradeoff between the benefits of expertise and the risks of bias when selecting evaluators: evaluators with expertise in a specific field may have an informational advantage in separating good projects from bad, but they may also hold personal preferences that compromise their objectivity, so that the benefits of expertise only weakly dominate the costs of bias. (Li 2017)

2.6 Questionable Research Practices

  • Consider the impact of publication bias, average power, and the ratio of true to false positives in the literature when interpreting the distribution of p-values, rather than jumping to conclusions about inflated Type 1 error rates due to questionable research practices. (Lakens 2015)

2.7 Reporting Guidelines

  • Seek formal training in scientific writing, the use of reporting guidelines, and authorship issues, so that you can produce high-quality, transparent, and complete accounts of your research that enable replication and use of the results. (Moher and Altman 2015)

2.8 Scientific Skepticism

  • Avoid misinterpreting p-values as the probability of the null hypothesis being true and recognize that the choice of statistical significance threshold is arbitrary and subject to debate. (Woolston 2015)

2.9 Systematic Review

  • Conduct a thorough systematic review before embarking on new studies, ensuring that your work builds upon existing knowledge rather than duplicating efforts or missing crucial context provided by previous research. (Clarke 2004)

2.10 Transparency And Openness Promotion

  • Utilize a consensus-based, comprehensive transparency checklist to ensure openness and accountability throughout the entire research process, including preregistration, methods, results and discussion, and data, code and materials availability. (Aczel et al. 2019)

References

n.d. https://doi.org/10.1371/journal.pmed.0050201.t001.
Aczel, Balazs, Barnabas Szaszi, Alexandra Sarafoglou, Zoltan Kekecs, Šimon Kucharský, Daniel Benjamin, Christopher D. Chambers, et al. 2019. “A Consensus-Based Transparency Checklist.” Nature Human Behaviour 4 (December). https://doi.org/10.1038/s41562-019-0772-6.
Aert, Robbie C. M. van, Jelte M. Wicherts, and Marcel A. L. M. van Assen. 2016. “Conducting Meta-Analyses Based on p Values.” Perspectives on Psychological Science 11 (September). https://doi.org/10.1177/1745691616650874.
———. 2019. “Publication Bias Examined in Meta-Analyses from Psychology and Medicine: A Meta-Meta-Analysis.” PLOS ONE 14 (April). https://doi.org/10.1371/journal.pone.0215052.
Campbell, Harlan. 2022. “The World of Research Has Gone Berserk.” Open Science Framework, August. https://doi.org/10.17605/OSF.IO/YQCVA.
Chuard, Pierre J. C., Milan Vrtílek, Megan L. Head, and Michael D. Jennions. 2019. “Evidence That Nonsignificant Results Are Sometimes Preferred: Reverse p-Hacking or Selective Reporting?” PLOS Biology 17 (January). https://doi.org/10.1371/journal.pbio.3000127.
Clarke, Mike. 2004. “Doing New Research? Don’t Forget the Old.” PLoS Medicine 1 (November). https://doi.org/10.1371/journal.pmed.0010035.
Crüwell, Sophia, Johnny van Doorn, Alexander Etz, Matthew C. Makel, Hannah Moshontz, Jesse C. Niebaum, Amy Orben, Sam Parsons, and Michael Schulte-Mecklenbeck. 2019. “Seven Easy Steps to Open Science.” Zeitschrift Für Psychologie 227 (October). https://doi.org/10.1027/2151-2604/a000387.
Dwan, Kerry, Douglas G. Altman, Juan A. Arnaiz, Jill Bloom, An-Wen Chan, Eugenia Cronin, Evelyne Decullier, et al. 2008. “Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias.” PLoS ONE 3 (August). https://doi.org/10.1371/journal.pone.0003081.
Dwan, Kerry, Douglas G. Altman, Mike Clarke, Carrol Gamble, Julian P. T. Higgins, Jonathan A. C. Sterne, Paula R. Williamson, and Jamie J. Kirkham. 2014. “Evidence for the Selective Reporting of Analyses and Discrepancies in Clinical Trials: A Systematic Review of Cohort Studies of Clinical Trials.” PLoS Medicine 11 (June). https://doi.org/10.1371/journal.pmed.1001666.
Dwan, Kerry, Carrol Gamble, Paula R. Williamson, and Jamie J. Kirkham. 2013. “Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias — an Updated Review.” PLoS ONE 8 (July). https://doi.org/10.1371/journal.pone.0066844.
“Editorial Statement on Negative Findings.” 2015. Health Economics 24 (March). https://doi.org/10.1002/hec.3172.
Gaudart, Jean, Laetitia Huiart, Paul J. Milligan, Rodolphe Thiebaut, and Roch Giorgi. 2014. “Reproducibility Issues in Science, Is p Value Really the Only Answer?” Proceedings of the National Academy of Sciences 111 (April). https://doi.org/10.1073/pnas.1323051111.
Gelman, Andrew. 2017. “The Failure of Null Hypothesis Significance Testing When Studying Incremental Changes, and What to Do about It.” Personality and Social Psychology Bulletin 44 (September). https://doi.org/10.1177/0146167217729162.
———. 2018. “Ethics in Statistical Practice and Communication: Five Recommendations.” Significance 15 (October). https://doi.org/10.1111/j.1740-9713.2018.01193.x.
———. 2022. “Criticism as Asynchronous Collaboration: An Example from Social Science Research.” Stat 11 (June). https://doi.org/10.1002/sta4.464.
Gelman, Andrew, and John Carlin. 2014. “Beyond Power Calculations.” Perspectives on Psychological Science 9 (November). https://doi.org/10.1177/1745691614551642.
Gluud, Lise Lotte. 2006. “Bias in Clinical Intervention Research.” American Journal of Epidemiology 163 (January). https://doi.org/10.1093/aje/kwj069.
Ioannidis, John P. A. 2007. “Limitations Are Not Properly Acknowledged in the Scientific Literature.” Journal of Clinical Epidemiology 60 (April). https://doi.org/10.1016/j.jclinepi.2006.09.011.
Ioannidis, John P. A., and Muin J. Khoury. 2014. “Assessing Value in Biomedical Research.” JAMA 312 (August). https://doi.org/10.1001/jama.2014.6932.
Ioannidis, John PA, and Thomas A Trikalinos. 2007. “An Exploratory Test for an Excess of Significant Findings.” Clinical Trials 4 (June). https://doi.org/10.1177/1740774507079441.
Kicinski, Michal. 2013. “Publication Bias in Recent Meta-Analyses.” PLoS ONE 8 (November). https://doi.org/10.1371/journal.pone.0081823.
King, Gary. 1995. “Replication, Replication.” PS: Political Science & Politics 28 (September). https://doi.org/10.2307/420301.
Lakens, Daniël. 2015. “On the Challenges of Drawing Conclusions from p-Values Just Below 0.05.” PeerJ 3 (July). https://doi.org/10.7717/peerj.1142.
Lakens, Daniël, and Alexander J. Etz. 2017. “Too True to Be Bad.” Social Psychological and Personality Science 8 (May). https://doi.org/10.1177/1948550617693058.
Li, Danielle. 2017. “Expertise Versus Bias in Evaluation: Evidence from the NIH.” American Economic Journal: Applied Economics 9 (April). https://doi.org/10.1257/app.20150421.
Matosin, Natalie, Elisabeth Frank, Martin Engel, Jeremy S. Lum, and Kelly A. Newell. 2014. “Negativity Towards Negative Results: A Discussion of the Disconnect Between Scientific Worth and Scientific Culture.” Disease Models & Mechanisms 7 (February). https://doi.org/10.1242/dmm.015123.
McElreath, Richard, and Paul E. Smaldino. 2015. “Replication, Communication, and the Population Dynamics of Scientific Discovery.” PLOS ONE 10 (August). https://doi.org/10.1371/journal.pone.0136088.
Moher, David, and Douglas G. Altman. 2015. “Four Proposals to Help Improve the Medical Research Literature.” PLOS Medicine 12 (September). https://doi.org/10.1371/journal.pmed.1001864.
Moonesinghe, Ramal, Muin J Khoury, and A. Cecile J. W Janssens. 2007. “Most Published Research Findings Are False—but a Little Replication Goes a Long Way.” PLoS Medicine 4 (February). https://doi.org/10.1371/journal.pmed.0040028.
Murray, Fiona, Philippe Aghion, Mathias Dewatripont, Julian Kolev, and Scott Stern. 2016. “Of Mice and Academics: Examining the Effect of Openness on Innovation.” American Economic Journal: Economic Policy 8 (February). https://doi.org/10.1257/pol.20140062.
Phillips, Carl V. 2004. “Publication Bias in Situ.” BMC Medical Research Methodology 4 (August). https://doi.org/10.1186/1471-2288-4-20.
Pineau, Joelle, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Larivière, Alina Beygelzimer, Florence d’Alché-Buc, Emily Fox, and Hugo Larochelle. 2020. “Improving Reproducibility in Machine Learning Research (a Report from the NeurIPS 2019 Reproducibility Program).” arXiv. https://doi.org/10.48550/ARXIV.2003.12206.
Scheel, Anne M., Mitchell R. M. J. Schijen, and Daniël Lakens. 2021. “An Excess of Positive Results: Comparing the Standard Psychology Literature with Registered Reports.” Advances in Methods and Practices in Psychological Science 4 (April). https://doi.org/10.1177/25152459211007467.
Schmucker, Christine M., Anette Blümle, Lisa K. Schell, Guido Schwarzer, Patrick Oeller, Laura Cabrera, Erik von Elm, Matthias Briel, and Joerg J. Meerpohl. 2017. “Systematic Review Finds That Study Data Not Published in Full Text Articles Have Unclear Impact on Meta-Analyses Results in Medical Research.” PLOS ONE 12 (April). https://doi.org/10.1371/journal.pone.0176210.
Schmucker, Christine, Lisa K. Schell, Susan Portalupi, Patrick Oeller, Laura Cabrera, Dirk Bassler, Guido Schwarzer, et al. 2014. “Extent of Non-Publication in Cohorts of Studies Approved by Research Ethics Committees or Included in Trial Registries.” PLoS ONE 9 (December). https://doi.org/10.1371/journal.pone.0114023.
Seife, Charles. 2015. “Research Misconduct Identified by the US Food and Drug Administration.” JAMA Internal Medicine 175 (April). https://doi.org/10.1001/jamainternmed.2014.7774.
Song, F., S. Parekh, L. Hooper, Y. K. Loke, J. Ryder, A. J. Sutton, C. Hing, C. S. Kwok, C. Pang, and I. Harvey. 2010. “Dissemination and Publication of Research Findings: An Updated Review of Related Biases.” Health Technology Assessment 14 (February). https://doi.org/10.3310/hta14080.
Song, Fujian, Yoon Loke, and Lee Hooper. 2014. “Why Are Medical and Health-Related Studies Not Being Published? A Systematic Review of Reasons Given by Investigators.” PLoS ONE 9 (October). https://doi.org/10.1371/journal.pone.0110418.
Sullivan, Patrick F. 2007. “Spurious Genetic Associations.” Biological Psychiatry 61 (May). https://doi.org/10.1016/j.biopsych.2006.11.010.
Szucs, Denes, and John P. A. Ioannidis. 2017. “Empirical Assessment of Published Effect Sizes and Power in the Recent Cognitive Neuroscience and Psychology Literature.” PLOS Biology 15 (March). https://doi.org/10.1371/journal.pbio.2000797.
Tricco, Andrea C., Jennifer Tetzlaff, Margaret Sampson, Dean Fergusson, Elise Cogo, Tanya Horsley, and David Moher. 2008. “Few Systematic Reviews Exist Documenting the Extent of Bias: A Systematic Review.” Journal of Clinical Epidemiology 61 (May). https://doi.org/10.1016/j.jclinepi.2007.10.017.
Williams, Nigel. 1997. “Editors Seek Ways to Cope with Fraud.” Science 278 (November). https://doi.org/10.1126/science.278.5341.1221.
Woolston, Chris. 2015. “Psychology Journal Bans p Values.” Nature 519 (February). https://doi.org/10.1038/519009f.