Automated Syllabus of Frequentist Statistics Papers

Built by Rex W. Douglass (@RexDouglass); GitHub; LinkedIn

Papers curated by hand, summaries and taxonomy written by LLMs.

Submit a paper to add for review

Statistical Clarity, Validity, and Interpretation

> Addressing Limitations, Uncertainty, and Transparency in Statistics

>> Meeting Criteria for Correct Probability Statements
  • Ensure your statistical analyses meet three criteria for a correct statement of probability: (1) there is a measurable reference set, (2) the subject belongs to the set, and (3) no relevant sub-set can be recognized. (“Science,” n.d.)
>> Addressing Assumptions and Enhancing Estimation Techniques
  • Carefully consider whether individuals' decision-making processes align with the assumptions of the Savage axioms, particularly in situations characterized by ambiguous uncertainty, as deviations from these axioms can limit the applicability of probabilistic models and necessitate alternative decision rules. (Ellsberg 1961)

  • Prioritize using confidence intervals over simply testing hypotheses, as confidence intervals offer valuable information about the precision and reliability of estimates, while hypothesis testing alone provides limited insight into the magnitude and direction of effects. (NA?)

>> Improving Inference through Alternative Metrics and Techniques
  • Consider using randomization inference instead of classic inferential tools, as it provides a more robust and flexible approach to hypothesis testing in experiments by enabling direct estimation of uncertainty about internal validity without assuming random sampling or parametric distributions (a minimal sketch follows this list). (Keele, McConnaughy, and White 2012)

  • Avoid relying solely on statistical significance as a measure of the validity of your findings, as it provides limited information about the likelihood of an effect being real or not. Instead, consider other factors such as prior evidence and biological plausibility to support your conclusions. (P. F. Sullivan 2007)
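
To make the randomization-inference bullet above (Keele, McConnaughy, and White 2012) concrete, here is a minimal permutation test for a two-group experiment: shuffle treatment labels under the sharp null of no effect and locate the observed mean difference within the resulting null distribution. The simulated data, group sizes, and permutation count are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative experiment: 50 treated and 50 control units.
treated = rng.normal(loc=0.3, scale=1.0, size=50)
control = rng.normal(loc=0.0, scale=1.0, size=50)

outcomes = np.concatenate([treated, control])
labels = np.array([1] * 50 + [0] * 50)
observed = outcomes[labels == 1].mean() - outcomes[labels == 0].mean()

# Null distribution: re-randomize labels while holding outcomes fixed,
# as implied by the sharp null of no effect for any unit.
n_perm = 10_000
null_diffs = np.empty(n_perm)
for i in range(n_perm):
    perm = rng.permutation(labels)
    null_diffs[i] = outcomes[perm == 1].mean() - outcomes[perm == 0].mean()

# Two-sided randomization p-value.
p_value = np.mean(np.abs(null_diffs) >= abs(observed))
print(f"observed diff = {observed:.3f}, randomization p = {p_value:.4f}")
```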

>> Robustness Checks via Sensitivity Analyses and Assumptions Discussion
  • Acknowledge the inherent limitations of your studies, conduct sensitivity analyses to examine the robustness of your findings, and engage in open discussions about the assumptions underlying your statistical models. (Leamer 2010)
>> Embracing Context, Causality, and Evidence Integrity
  • Carefully distinguish between descriptive and causal inferences, utilizing counterfactual frameworks and potential outcome models to accurately estimate causal effects while controlling for confounding factors. (Gelman and Vehtari 2020)

  • Carefully consider the compatibility of your chosen statistical methods with the underlying philosophy of science, specifically emphasizing the importance of error-statistical severe testing in promoting scientific progress. (Gelman et al. 2019)

  • Carefully consider the appropriateness of combining evidence across different studies or data sources, taking into account factors such as commonness, bias, precision, and validity, to avoid destroying or wasting valuable evidence. (Smaldino and McElreath 2016)

  • Embrace transparency, consensus, impartiality, and correspondence to observable reality while acknowledging multiple perspectives and context dependence, rather than focusing solely on achieving objectivity or avoiding subjectivity. (Gelman and Hennig 2015)

>> Probabilistic Nature of p-Values and Their Limitations
  • Avoid conflating the statistical position of data with a decision regarding a hypothesis, and instead recognize the inherently probabilistic nature of p-values and their limitations in drawing definitive conclusions. (J. P. A. Ioannidis 2005)
>> Multiversal Methods for Model Specification Uncertainty
  • Employ multiversal methods to capture the uncertainty associated with various model specifications and avoid cherry-picking statistically significant results, thereby addressing the replication crisis and promoting transparency in scientific research. (Korbmacher et al. 2023)
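
A minimal multiverse sketch in the spirit of the bullet above: enumerate a small grid of researcher decisions and report the estimate from every specification rather than one cherry-picked result. The data, the outlier rule, and the covariate choice are invented for illustration.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)                      # predictor of interest
z = rng.normal(size=n)                      # optional covariate
y = 0.2 * x + 0.5 * z + rng.normal(size=n)  # illustrative outcome

def ols_slope(X, y):
    """OLS slope on the first column of X (intercept appended here)."""
    X = np.column_stack([X, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[0]

# Two researcher decisions -> a 2 x 2 multiverse of specifications.
outlier_rules = {"keep_all": np.ones(n, dtype=bool),
                 "trim_3sd": np.abs(y - y.mean()) < 3 * y.std()}
covariate_sets = {"no_z": False, "with_z": True}

for (o_name, keep), (c_name, use_z) in itertools.product(
        outlier_rules.items(), covariate_sets.items()):
    X = np.column_stack([x[keep], z[keep]]) if use_z else x[keep][:, None]
    print(f"{o_name:>8} | {c_name:>6} | slope on x = {ols_slope(X, y[keep]):+.3f}")
```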

> Optimizing Data Analysis and Education Approaches

>> Redesigning Undergraduate Statistics Curriculum for Accessibility
  • Prioritize making fundamental concepts accessible and minimizing prerequisites to research when redesigning undergraduate statistics curricula, recognizing the converging trends in the roles of mathematics, computation, and context within statistics education. (Cobb 2015)
>> Promoting Simplicity and Robustness in Empirical Studies
  • Strive for simplicity and clarity in your theoretical models, ensuring that they provide valuable insights applicable to a wide range of problems, while maintaining meticulousness and transparency in your empirical studies to avoid errors and ensure robustness against potential criticisms. (Bowmaker 2012)
>> Bayesian Inference for Reliability and Risk Assessment
  • Adopt a Bayesian perspective when analyzing reliability and risk data, incorporating prior knowledge and updating beliefs based on observed evidence using Bayes' theorem. (Singpurwalla 2006)
>> Generalized Likelihood Ratio Tests for Improved Inference
  • Consider the use of generalized likelihood ratio tests when constructing statistical tests, as they possess desirable finite sample and asymptotic properties. (Mittelhammer 2013)

> Best practices for robust and interpretable statistical analyses

>> Theory-based vs. data-driven suppressor variables
  • Carefully distinguish between intentionally introducing a variable as a suppressor to improve the relationship between the variable of interest and the criterion, and observing a statistical effect where a variable acts as a suppressor without a theoretical basis, as the former allows for meaningful interpretation while the latter may only offer a superficial explanation. (Burns and Ludlow 2005)
>> Avoiding pitfalls in causal estimation and interpretation
  • Carefully justify your choice of functional form for the counterfactual trend in difference-in-differences analyses, recognizing that different functional forms can lead to different estimates of treatment effects. (Imbens and Angrist 1994)
>> Bayesian Inference for Covariation Assessment
  • Consider adopting a Bayesian inferential approach instead of a traditional descriptive approach when studying covariation assessment, as this perspective better explains why participants are influenced by prior beliefs and why they perceive the four cells of a 2x2 contingency matrix as differentially informative. (NA?)
>> Balancing Model Flexibility, Predictive Accuracy, and Replicability
  • Extend the multiverse analysis beyond just data cleaning and analytic decisions to incorporate variations in data-collection methods, thereby enabling a comprehensive examination of the impact of researcher decisions on study results. (Harder 2020)

  • Specify your statistical models a priori, avoiding univariate prescreening of predictors or excessive model tinkering, as predetermined models with sufficient data are more likely to yield replicable results. (Babyak 2004)

  • Prioritize predictive accuracy over adherence to traditional data modeling assumptions, recognizing that multiple models may fit the data equally well while yielding different predictions. (Breiman 2001)

>> Long-Term Self-Experimentation: Addressing Bias Concerns
  • Consider using long-term self-experimentation as a valuable tool for generating novel ideas, particularly in areas where traditional methods may be limited or lacking, despite concerns about potential bias due to expectations. (Roberts 2004)
>> Meta-Analysis Techniques Correcting Artifacts Across Multiple Studies
  • Use meta-analysis to integrate findings across multiple studies while correcting for various artifacts such as sampling error, measurement error, range restriction, dichotomization, imperfect construct validity, attrition, extraneous factors, computational errors, and biased samples. (Schmidt and Hunter 2015)
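
As a small worked example of one artifact correction from the Hunter-Schmidt toolbox referenced above, the sketch below disattenuates observed correlations for measurement error and pools them with sample-size weights; all study-level inputs are hypothetical.

```python
import numpy as np

# Illustrative study-level inputs: observed correlations, sample
# sizes, and reliabilities of the two measures (all invented).
r_obs = np.array([0.22, 0.31, 0.18, 0.27])
n     = np.array([120, 85, 200, 60])
r_xx  = np.array([0.80, 0.75, 0.85, 0.70])   # predictor reliability
r_yy  = np.array([0.70, 0.72, 0.78, 0.65])   # criterion reliability

# Disattenuate each correlation for measurement error (the classic
# Spearman correction used in Hunter-Schmidt style meta-analysis).
r_corrected = r_obs / np.sqrt(r_xx * r_yy)

# Sample-size-weighted mean, so larger studies (less sampling error)
# count for more.
r_bar = np.average(r_corrected, weights=n)
print(f"corrected correlations: {np.round(r_corrected, 3)}")
print(f"weighted mean corrected r = {r_bar:.3f}")
```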

> Causal Inference, Data Analysis, and Model Selection

>> Hierarchical Models for Enhanced Causal Inference
  • Consider collecting and analyzing hierarchical data, as it can enable causal identification even when it would be impossible with non-hierarchical data alone, due to the potential for natural experiments created by holding higher-level confounders constant. (Weinstein and Blei 2024)

  • Utilize hierarchical models to accurately capture the complexity inherent in real-world data structures, particularly when dealing with causal effects, as they provide a flexible framework for accounting for data collection processes, adjusting for unmeasured covariates, and modeling variation in treatment effects. (Dehejia 2005)

>> Secondary Data Analysis with Proper Documentation Review
  • Thoroughly understand the original study and data before conducting secondary data analysis, including reading all relevant documentation, determining the appropriate analysis weights and methods for computing adjusted standard errors, and understanding how missing data was handled. (Schlomer, Bauman, and Card 2010)
>> Addressing Assumptions, Heterogeneity, and Probabilistic Reduction
  • Account for heteroscedasticity when performing statistical analyses by employing robust standard errors such as the Eicker-Huber-White (EHW) estimator, which produces consistent estimates even in the presence of heteroscedasticity (a minimal sketch follows this list). (Ding 2024)

  • Carefully distinguish between population and sample when conducting statistical analyses, and choose appropriate statistical methods based on the type of data collected (i.e., categorical vs. quantitative). (Ekstrom and Sørensen 2014)

  • Carefully consider the assumptions underlying your chosen causal inference approach (e.g., randomized experiments, regression on treatment variable, observational studies), ensure that these assumptions are met, and interpret results accordingly to avoid biased estimates. (C. E. Ross 1996)

  • Adopt a “probabilistic reduction” approach to modeling observational data, which involves building statistical models solely from statistical information about observable random variables rather than attributing probabilistic structure to unobserved error terms. (Spanos 1984)
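
The sketch promised in the heteroscedasticity bullet above: classical versus Eicker-Huber-White (HC0) standard errors for a simple regression, on a simulated data-generating process whose error variance grows with the regressor (the DGP and sample size are invented).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(0, 2, size=n)
# Heteroscedastic errors: the noise scale grows with x.
y = 1.0 + 0.5 * x + rng.normal(scale=0.5 + x, size=n)

X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

# Classical variance estimate (assumes constant error variance).
sigma2 = resid @ resid / (n - X.shape[1])
se_classical = np.sqrt(np.diag(sigma2 * XtX_inv))

# Eicker-Huber-White (HC0) sandwich estimator: consistent under
# heteroscedasticity of unknown form.
meat = X.T @ (X * resid[:, None] ** 2)
se_ehw = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

print("classical SEs:", np.round(se_classical, 4))
print("EHW (HC0) SEs:", np.round(se_ehw, 4))
```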

>> Ignorability Principle for Unbiased Treatment Effect Estimation
  • Use the principle of ignorable treatment assignment to ensure valid causal inferences, which assumes that treatment assignments are independent of potential outcomes given observed covariates, allowing for unbiased estimation of average treatment effects through various statistical techniques such as regression, matching, or inverse probability weighting. (Imbens and Rubin 2015)
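
A minimal sketch of the inverse-probability-weighting estimator named above, under ignorability given a single observed confounder. The propensity score is known here by construction; in practice it would be estimated (e.g., by logistic regression), and the simulated data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)                          # observed confounder
p = 1 / (1 + np.exp(-x))                        # true propensity score
t = rng.binomial(1, p)                          # treatment assignment
y = 1.0 * t + 0.8 * x + rng.normal(size=n)      # outcome; true ATE = 1

# Weight each unit by the inverse propensity of the treatment it
# actually received, then contrast weighted group means (Hajek form).
w = np.where(t == 1, 1 / p, 1 / (1 - p))
ate_ipw = (np.average(y[t == 1], weights=w[t == 1])
           - np.average(y[t == 0], weights=w[t == 0]))

naive = y[t == 1].mean() - y[t == 0].mean()     # confounded contrast
print(f"naive diff = {naive:.3f}, IPW ATE = {ate_ipw:.3f} (truth = 1.0)")
```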
>> Preemptive Consideration for Unbiased Estimates and Sound Inferences
  • Carefully consider and interpret each potential outcome in advance of collecting data, and ensure that your study design minimizes selection biases and provides valid and reliable estimates to support informed decisions based on sound statistical inferences. (NA?)

> Balancing Simplicity, Complexity, and Nuance in Causal Inference

>> Balanced Approaches to Maximizing Leverage, Fit, and Reliability
  • Clearly define your estimand - the specific quantity you aim to estimate - outside of any statistical model, as a unit-specific quantity (such as an average causal effect or population mean) and a target population, allowing for greater flexibility in choosing estimation procedures and asking more interesting theoretical questions. (Lundberg, Johnson, and Stewart 2020)

  • Adhere to established rules of inference in order to ensure the reliability of your empirical findings, rather than applying the “rules” of persuasion and advocacy, which can lead to biased and potentially misleading results. (Imai, King, and Rivera 2020)

  • Aim for a high degree of theoretical fit, ensuring that your research design aligns closely with the theory being tested, providing a severe and partitioned test that eliminates rival hypotheses, while considering practical constraints and the need for cumulative research. (Gerring 2010)

  • Strive to maximize leverage - explaining as much as possible with as little as possible - while minimizing bias and reporting estimates of uncertainty in your conclusions. (G. King, Keohane, and Verba 1995)

>> Embracing Simplicity over Unnecessary Complexity in Theories
  • Avoid falling into “nuance traps” by prioritizing simplicity and constraint in your theoretical frameworks, rather than embracing unnecessary complexity or constantly seeking to add new dimensions. (Healy 2017)
>> Case Study Design and Philosophy of Science Considerations
  • Recognize the case study method as an intensive study of a single unit for the purpose of understanding a larger class of units, using covariational evidence within and across units to make inferences about causality. (Gerring 2004)
>> Triangulation Assumptions Evaluation for Robust Causal Inferences
  • Carefully evaluate the assumptions underpinning your choice of triangulation method, recognizing that different forms of triangulation serve distinct purposes and make varying philosophical or methodological assumptions. (Herbert 2018)
>> Balancing Determinism vs Probabilism in Small N Analysis
  • Carefully consider the underlying assumptions and truth conditions of your chosen theory of causation when selecting a method for causal inference, as mismatches between these elements can lead to confusion and inconsistencies in findings. (Rohlfing and Zuber 2019)

  • Integrate qualitative and quantitative data using a Bayesian framework to improve causal inferences, allowing you to update prior beliefs about causal effects, assignment propensities, and the informativeness of different types of evidence, and to optimize your research designs by determining the optimal combinations of qualitative and quantitative data collection under various research conditions. (Humphreys and Jacobs 2015)

  • Carefully consider the choice of causal inference strategy in small-N analysis, as the use of nominal, ordinal, or within-case analysis leads to fundamentally different logics of causal inference, with nominal comparison being deterministic and focusing on necessary and sufficient conditions, ordinal comparison allowing for probabilistic causation, and within-case analysis providing opportunities for fine-grained process tracing and causal narratives. (Mahoney 2000)

>> Integrating Qualitative and Quantitative Approaches
  • Aim to conduct rigorous, well-designed studies incorporating both qualitative and quantitative data, recognizing that neither type of data alone is sufficient for robust causal inferences. (Beck 2010)

  • Appreciate the fundamental differences in the goals and assumptions of qualitative and quantitative research traditions, particularly regarding approaches to explanation, conceptions of causation, and case selection practices, to facilitate better cross-traditional communication and collaboration. (Mahoney and Goertz 2006)

>> Set Relationships and Artificial Data Avoidance in QCA
  • Be aware that the conservative (QCA-CS) and intermediate (QCA-IS) solution types of Qualitative Comparative Analysis (QCA) may introduce artificial data, leading to incorrect causal inferences, while the parsimonious solution type (QCA-PS) does not suffer from this issue. (Thiem 2019)

  • Evaluate set-theoretic relationships using measures of consistency and coverage, which respectively assess the degree to which a subset relation has been approximated and the empirical relevance of a consistent subset, rather than transforming set relations into correlational hypotheses and using standard correlational techniques. (Ragin 2006)

>> Race as Attribute, Not Cause: Limitations & Alternatives
  • Avoid making the assumption of temporal stability in regression models when analyzing racial disparities, as this risks reifying race as a fixed trait rather than recognizing it as a dynamic and relational process that interacts with other systems of social stratification over time. (Graetz, Boen, and Esposito 2022)

  • Recognize the limitations of using race as a causal variable in statistical models, as it is an unalterable characteristic of individuals and therefore inappropriate for inferential analysis. Instead, race should be treated as an attribute in associational models, allowing for more accurate and nuanced conclusions about its relationship with other variables. (Zuberi 2000)

> Avoiding Pitfalls and Enhancing Robustness in Data Analysis

>> Improving Causal Inferences through Better Model Selection
  • Be aware of the potential impact of researcher degrees of freedom or forking paths, which refers to the numerous ways that data can be analyzed and presented, leading to potentially spurious results. Therefore, it is recommended to pre-register analysis plans and avoid cherry-picking results based on post-hoc data exploration. (Gelman 2022)

  • Explicitly specify your theoretical concepts and relationships using causal diagrams, and then carefully consider the potential sources of error and contamination in your chosen proxies, as this can significantly impact the validity of your inferences. (Duarte et al. 2021)

  • Employ Inference to the Best Explanation (IBE) as a framework for evaluating evidence, recognizing that while causal identification is important, it alone does not produce generalizable knowledge without considering how it connects to theory, where it is valid, and which part of the treatment produces the effect. (Egami et al. 2018)

  • Avoid “garbage can” models with numerous correlated independent variables, as they can lead to collinearity issues, misleading results, and difficulty in interpreting the effects of individual variables. (Schrodt 2013)

  • Evaluate models based on their usefulness for a specific purpose, rather than solely on their predictive accuracy. (K. A. Clarke and Primo 2007)

  • Ensure the temporal duration of treatment effects aligns with the real-world scenarios you aim to investigate, as failing to do so risks drawing misleading conclusions about the political significance of your findings. (Gaines, Kuklinski, and Quirk 2007)

  • Expand your understanding of causality beyond solely focusing on changes in the mean of a dependent variable distribution, and consider variance-altering causation as a valuable alternative perspective for identifying and interpreting causal effects. (Braumoeller 2006)

  • Consider using experiments, specifically randomized trials, to establish causality and inform theoretical development, as they provide transparent and controlled procedures that allow for precise estimation of the effects of institutional rules, preference configurations, and other contextual factors. (Druckman et al. 2006)

  • Exercise caution when using linear link functions to model conditional monotonic relationships, as even minor departures from linearity can result in biased and inconsistent estimates, potentially leading to erroneous conclusions. (Achen 2005)

>> Addressing Confounding Variables and Model Limitations
  • Carefully consider potential confounding variables when making causal inferences about politically motivated reasoning, as common experimental designs such as Outcome Switching and Party Cues often violate the excludability assumption by altering variables beyond political motivation that impact reasoning outcomes. (Tappin, Pennycook, and Rand 2020)

Publication Bias, P-hacking, & Replication Challenges

> Addressing Selective Reporting & Power Issues

>> Improving Reliability through Higher Power & Experiment Design
  • Aim for high statistical power (preferably above 80%) and utilize experimental research designs whenever feasible to minimize selective reporting and enhance the reliability of published research in the environmental sciences. (Askarov et al. 2022)

  • Prioritize conducting studies with adequate statistical power to minimize the risk of both false negative and false positive results, particularly in policy-important contexts (a power-calculation sketch follows this list). (J. P. A. Ioannidis, Stanley, and Doucouliagos 2017)

  • Be aware of and mitigate against the risk of inflated significance caused by selective reporting, p-hacking, and publication bias, especially in fields with low statistical power and non-experimental designs. (“American Economic Journal: Applied Economics,” n.d.)
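
The power-calculation sketch promised above: a textbook normal approximation for the power of a two-sided, two-sample comparison of means. The standardized effect size and per-group sample sizes are illustrative.

```python
from scipy.stats import norm

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z test for
    standardized effect size d with n_per_group units per arm."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5   # noncentrality of the z statistic
    # Two-sided rejection region; the far-tail term is usually negligible.
    return norm.sf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

for n in (25, 50, 100, 200):
    print(f"n per group = {n:>3}: power = {two_sample_power(0.4, n):.2f}")
```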

>> Improving Replication Success Rates & Mitigating Biases
  • Avoid using the replication rate as a measure of selective publication, as it is insensitive to the degree of selective publication on insignificant results for a fixed latent distribution of studies and is bounded above by its nominal target due to issues with common power calculations in replication studies. (Vu 2022)

  • Be aware of the potential for inflated estimates of replication power when using the common power rule, particularly when power in original studies is low, as this can lead to unrealistic expectations for replication success rates. (Armitage, McPherson, and Rowe 1969)

>> Correcting Publication Bias via Systematic Replications & Meta-Studies
  • Consider the potential for publication bias due to selective reporting of statistically significant results, and use methods for identifying and correcting for this bias based on either systematic replication studies or meta-studies. (Fithian, Sun, and Taylor 2014)
>> Improving Robustness through Better Design & Transparency
  • Carefully consider the trade-offs involved in choosing between pursuing novel but risky hypotheses (low prestudy probabilities) versus more reliable but less exciting ones (high prestudy probabilities), as well as the impact of statistical power and sample size on the reliability of published research. (Campbell 2022)

  • Consider adopting Registered Reports (RRs) as a publication format, where peer review and the decision to publish occur before results are known, in order to reduce publication bias and increase the credibility of findings. (Scheel, Schijen, and Lakens 2021)

  • Interpret mixed results (i.e., a combination of both significant and non-significant findings) as potentially providing strong evidence for the alternative hypothesis, especially when statistical power is high and Type I error rates are controlled. (Lakens and Etz 2017)

  • Ensure adequate statistical power to reduce the risk of false positives and effect size exaggeration, particularly in cognitive neuroscience where power tends to be lower than in psychology. (Szucs and Ioannidis 2017)

  • Consider the impact of publication bias, average power, and the ratio of true to false positives in the literature when interpreting the distribution of p-values, rather than jumping to conclusions about inflated Type I error rates due to questionable research practices. (Lakens 2015)

> Detecting & Preventing P-Hacking & Publication Bias

>> Detecting P-hacking using Non-Increasing Property & Alternative Approaches
  • Be aware of the limitations of traditional tests for detecting p-hacking, especially when dealing with complex empirical scenarios involving multiple testing situations, and consider alternative approaches such as bound tests and discontinuity tests to increase detection power. (Elliott, Kudrin, and Wüthrich 2022)
>> Detecting & Mitigating Biases via P-Curve Analysis
  • Carefully consider and address the risk of selective reporting of nonsignificant results and reverse P-hacking, where researchers manipulate data or analyses to achieve nonsignificant results, as these practices can distort the scientific literature and lead to biased estimates of effect sizes. (Chuard et al. 2019)

  • Be cautious about making inferences regarding true effects or p-hacking based solely on the shape of p-curves in observational research, as right-skewed p-curves can arise from both true effects and null effects with omitted-variable biases, leading to potential false inferences. (Bruns and Ioannidis 2016)

  • Utilize p-curve analysis to correct for publication bias when estimating effect sizes, as it offers superior performance compared to traditional methods such as Trim and Fill. (Simonsohn, Nelson, and Simmons 2014)
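
A minimal sketch of the core p-curve computation from the bullets above: rescale statistically significant p-values to pp-values and apply a Stouffer-style test for right skew (one common variant of the test; the p-values below are invented).

```python
import numpy as np
from scipy.stats import norm

# Significant p-values harvested from a set of studies (illustrative).
p_values = np.array([0.003, 0.012, 0.001, 0.024, 0.041, 0.008])

# Under the null, p-values that cleared the .05 bar are uniform on
# (0, .05); rescale them to "pp-values" uniform on (0, 1).
pp = p_values / 0.05

# Stouffer test for right skew: a strongly negative Z means the
# significant p-values bunch near zero, as expected under a true effect.
z = norm.ppf(pp)
z_stouffer = z.sum() / np.sqrt(len(z))
p_right_skew = norm.cdf(z_stouffer)
print(f"Stouffer Z = {z_stouffer:.2f}, right-skew p = {p_right_skew:.4f}")
```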

>> Addressing P-hacking through transparency and informed hypothesis testing
  • Be aware of and avoid the practice of p-hacking, which involves manipulating data or analyses to achieve statistically significant results, as it can lead to biased or misreported effect sizes, inflated Type I errors, and distorted meta-analytic summaries. (Gupta and Bosco 2023)

  • Avoid engaging in p-hacking, which involves manipulating data analysis methods to produce statistically significant results, as it leads to inflated false-positive rates and undermines the credibility of scientific findings. (Stefan and Schönbrodt 2023)

  • Consider your prior beliefs in your hypotheses and adjust your statistical tests accordingly to avoid inflated Type I error rates and ensure robust and reliable findings. (Golubnitschaja et al. 2016)

> Publication Bias Detection & Mitigation Strategies in Meta-Analysis

>> Publication Bias Adjustment Techniques & Limitations
  • Carefully consider the limitations of various publication bias detection methods, such as low statistical power and assumptions of homogeneous true effect size, and choose appropriate methods based on the specific context and characteristics of your meta-analysis. (Robbie C. M. van Aert, Wicherts, and Assen 2019)

  • Exercise caution when interpreting the results of meta-analyses using p-uniform and p-curve methods, particularly when dealing with heterogeneous data sets, as these methods may produce erratic behavior, implausible estimates, or overestimate effect sizes. (Robbie C. M. van Aert, Wicherts, and Assen 2016)

  • Be aware of potential publication biases when conducting meta-analyses, particularly when the probability of publication is a function of the observed p-value or effect size, as this can result in biased estimation of the true effect size. (Liu et al. 2016)

  • Consider using sensitivity analysis with a priori weight functions to account for potential publication bias in meta-analysis, particularly when data is sparse and traditional weight-function techniques are not feasible. (Vevea and Woods 2005)

  • Employ the trim and fill method, a simple funnel-plot-based approach, to identify and adjust for potential publication bias in meta-analysis, thereby improving the accuracy of effect size estimation and increasing the reliability of confidence interval coverage (a related funnel-asymmetry check is sketched after this list). (Duval and Tweedie 2000)

  • Carefully distinguish between p-hacking (selective reporting within studies) and publication bias (selective publication of entire studies), as the former appears to be 20-30% more prevalent and contributes significantly to selection bias in the economic literature, potentially compromising the perceived reliability of published findings. (NA?)
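
Trim and fill itself is an iterative rank-based algorithm; the sketch promised above instead shows a simpler diagnostic in the same funnel-plot spirit, an Egger-style asymmetry regression (a deliberate swap, not the Duval-Tweedie procedure). The meta-analytic inputs are invented.

```python
import numpy as np
from scipy import stats

# Illustrative meta-analytic data: effect estimates and standard errors.
effects = np.array([0.42, 0.31, 0.50, 0.18, 0.61, 0.12, 0.55, 0.09])
se = np.array([0.20, 0.15, 0.25, 0.08, 0.30, 0.07, 0.28, 0.05])

# Egger's test: regress the standardized effect (effect / se) on
# precision (1 / se). A nonzero intercept signals small-study
# (funnel-plot) asymmetry, one symptom of publication bias.
z_scores = effects / se
precision = 1 / se
slope, intercept, *_ = stats.linregress(precision, z_scores)

# t-test for the intercept, computed from the usual OLS formulas.
n = len(effects)
resid = z_scores - (intercept + slope * precision)
s2 = resid @ resid / (n - 2)
sxx = ((precision - precision.mean()) ** 2).sum()
se_intercept = np.sqrt(s2 * (1 / n + precision.mean() ** 2 / sxx))
t_int = intercept / se_intercept
p_int = 2 * stats.t.sf(abs(t_int), df=n - 2)
print(f"Egger intercept = {intercept:.2f}, p = {p_int:.3f}")
```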

>> Dependent Effect Sizes Considerations for Selective Reporting
  • Carefully consider and appropriately address the issue of dependent effect sizes when investigating selective reporting in meta-analyses, as failure to do so can lead to biased results and incorrect conclusions. (NA?)
>> Publication Bias Prevention & Correction Techniques in Meta-Analysis
  • Carefully consider and address potential publication bias when interpreting the results of meta-analyses, particularly if statistically significant outcomes are overrepresented in the sample of studies included. (Kicinski 2013)

  • Be aware of and attempt to correct for potential biases leading to an excess of statistically significant findings in a body of evidence, such as publication bias, selective analyses, and selective outcome reporting. (J. P. Ioannidis and Trikalinos 2007)

>> Internal Meta-Analysis Risks Due To Selective Reporting
  • Avoid using internal meta-analysis unless you can ensure that absolutely no results were selectively reported, which requires rigorous preregistration and adherence to prespecified analytic plans, since even minimal levels of selective reporting due to p-hacking or file-drawering can dramatically inflate the false-positive rate in an internal meta-analysis. (Vosgerau et al. 2019)

> Improving Reliability & Transparency in Scientific Reporting

>> Addressing Misconceptions & Promoting Transparency in Data Analysis
  • Maximize transparency in reporting reaction time data pre-processing steps, including specifying the order of operations and rationale behind each choice, to ensure reproducibility and avoid misleading conclusions due to undocumented analytical flexibility. (Loenneker et al. 2024)

  • Carefully consider the implications of using p-values as measures of evidence, as they are often misinterpreted and lack explicit alternative hypotheses, leading to potential issues in the scientific literature. (Bahadur and Savage 1956)

>> Collaborative Analysis Approaches for Error Minimization
  • Adopt a “co-pilot” approach to statistical analysis, involving at least two individuals independently executing and reviewing the analyses, to improve the accuracy of reported results and minimize errors. (Veldkamp et al. 2014)
>> Preventing False Positives & Promoting Open Science
  • Prioritize open science practices such as open access, open data, preregistration, reproducible analyses, replications, and teaching open science to enhance the transparency, reproducibility, and credibility of your work. (Crüwell et al. 2019)

  • Avoid confusing exploratory and confirmatory data analysis, as doing so increases the risk of false positive findings due to practices such as HARKing and p-hacking. (NA?)

>> Enhancing Robustness in Replication Studies with Alternative Metrics
  • Incorporate Bayesian methods to account for publication bias and ensure sufficient statistical power in your studies, particularly when attempting to replicate previous work. (Etz and Vandekerckhove 2016)

  • Be cautious about interpreting too many successful replications as evidence of a true effect, as it could potentially indicate the presence of publication bias or questionable research practices, which can be detected through statistical consistency tests. (Aad et al. 2012)

  • Avoid relying solely on p-values for statistical inference due to their inherent unreliability and lack of precision in predicting future replications, and instead utilize confidence intervals and meta-analytic thinking to provide more robust and accurate insights into the likelihood of replicating experimental results. (Cumming 2008)

  • Prioritize reporting effect sizes and confidence intervals alongside p-values, and consider p < .05 as insufficient grounds for claiming replicability of an isolated non-null finding. (NA?)

  • Consider using a Bayesian replication test to quantify the degree of similarity between the effect size estimated in an original study and that estimated in a replication attempt, rather than relying solely on traditional frequentist approaches such as comparing p-values or effect size estimates. (NA?)

> Promoting Rigor, Transparency, and Accuracy in Scientific Discovery

>> Promoting Unbiased Studies Reflecting Complexity & Importance
  • Prioritize conducting rigorous, unbiased studies that accurately reflect the complexity of human health, rather than focusing solely on producing novel or exciting results. (Milunsky 2003)

  • Prioritize conducting rigorous, unbiased studies that address important questions and utilize appropriate methods, including representative samples, valid measures, and robust analytic strategies, to minimize the likelihood of producing misleading or incorrect findings. (NA?)

>> Addressing Limitations for Validity and Applicability
  • Acknowledge and discuss the limitations of your work in a dedicated section, as this helps readers understand the validity and applicability of the findings, and ultimately contributes to the integrity and transparency of the scientific literature. (J. P. A. Ioannidis 2007)
>> Improving Validity Through Replication, Negative Results, and Study Quality
  • Prioritize minimizing the rate of false positives and increasing the base rate of true hypotheses to enhance the reliability of scientific discovery, particularly through replication efforts. (McElreath and Smaldino 2015)

  • Prioritize sharing negative results, even if they are not as highly valued in the current scientific culture, because they contribute to filling gaps in knowledge and moving towards unabridged science. (Matosin et al. 2014)

  • Aim for multiple replications of statistically significant findings to improve the positive predictive value (PPV) of true relationships, particularly when the pre-study odds of a true relationship are low. (Moonesinghe, Khoury, and Janssens 2007)

  • Prioritize high-quality, well-powered studies with appropriate controls and careful consideration of potential sources of bias, as initial findings may be subject to the Proteus Phenomenon, where subsequent studies reveal smaller or even contradictory effects. (n.d.)

  • Consider the prior odds of your hypothesis when interpreting the results of your study, as the traditional hierarchy of evidence may be influenced by differences in prior probabilities rather than solely reflecting the inherent strengths and weaknesses of various study designs. (n.d.)

> Publication Bias Correction Techniques in Meta-Analysis

>> Publication Bias & Heterogeneity in College Wage Premiums
  • Be aware of and correct for potential publication bias and heterogeneity in your analyses, particularly when conducting meta-analyses, as demonstrated by the finding that the college wage premium varies significantly depending on factors such as gender, unemployment rate, and field of study, and may be overestimated due to publication bias. (Horie and Iwasaki 2022)
>> Publication Bias Mitigation with Modern Meta-Analytic Techniques
  • Prioritize transparency and reproducibility in your meta-analysis by having a clear and comprehensive literature search strategy, including multiple sources and databases, and by involving multiple coders to minimize errors in data extraction. (Irsova et al. 2023)

  • Avoid relying solely on inverse-variance weighting in meta-analysis, especially when analyzing observational studies, as it can lead to spurious precision and biased estimates. Instead, consider the Meta-Analysis Instrumental Variable Estimator (MAIVE), which employs inverse sample size as an instrument for reported variance, to mitigate this issue. (Robbie C. M. van Aert and Assen 2018)

  • Carefully consider and address publication bias and model uncertainty in meta-analyses, using modern techniques such as Bayesian model averaging and publication bias correction methods like MAIVE, to ensure accurate and unbiased estimates of the true effect size. (Ehrenberg et al. 2001)

>> Meta-analytic Approaches for Estimating Average Effect Sizes
  • Utilize meta-analysis techniques to aggregate findings from multiple randomized controlled trials, taking into account potential heterogeneity in treatment effects across studies, in order to accurately estimate the average effects of financial education programs on financial knowledge and behavior. (Borenstein et al. 2009)

  • Utilize meta-analysis techniques to aggregate findings from numerous studies, particularly when dealing with a vast body of evidence, in order to accurately estimate the average effects of a particular program and examine the heterogeneity in reported findings. (NA?)
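
A minimal sketch of the standard random-effects aggregation described above, using the DerSimonian-Laird estimator of between-study variance; the per-study effect estimates and variances are invented.

```python
import numpy as np

# Illustrative per-study effect estimates and within-study variances.
y = np.array([0.30, 0.10, 0.45, 0.22, 0.05])
v = np.array([0.04, 0.02, 0.09, 0.03, 0.01])

# Fixed-effect weights and Cochran's Q heterogeneity statistic.
w = 1 / v
theta_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - theta_fe) ** 2)
df = len(y) - 1

# DerSimonian-Laird estimate of between-study variance tau^2.
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)

# Random-effects pooled estimate and its standard error.
w_re = 1 / (v + tau2)
theta_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"tau^2 = {tau2:.4f}, pooled effect = {theta_re:.3f} (SE {se_re:.3f})")
```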

>> Meta-analysis of RCTs on Nudging Tax Compliance
  • Consider conducting meta-analyses of randomized controlled trials (RCTs) to assess the effectiveness of nudges in improving tax compliance, as this approach allows for the collection of a large number of treatment effect estimates and provides insights into which types of nudges are most effective and under what conditions. (Antinyan and Asatryan 2019)
>> Addressing Low Power Issues in Political Science Studies
  • Prioritize increasing statistical power in your studies, as the authors find that the median analysis in political science research has only about 10% power, leading to a high likelihood of false negatives and difficulty in replicating findings. (Stanley and Doucouliagos 2022)

>> Publication Bias Impact on Elasticity Estimations
  • Carefully consider and correct for both publication and attenuation biases when interpreting estimates of the elasticity of substitution between skilled and unskilled labor, as these biases can lead to overestimation of the true elasticity. (Havranek et al. 2022)

  • Carefully consider and correct for publication bias, as it can significantly affect the estimated elasticity of substitution between capital and labor, potentially leading to an overestimate of the true value. (Gechert et al. 2022)

  • Carefully consider the potential impact of publication bias and study quality on your estimates, particularly when attempting to quantify the Armington elasticity, as the paper demonstrates that these factors can lead to biased estimates and affect the validity of subsequent analyses. (Bajzik et al. 2020)

> Mitigating P-Hacking & Publication Bias

>> Detecting Anomalous Patterns in Test Statistics Distribution
  • Be aware of and account for the potential influence of publication bias and p-hacking when interpreting statistical significance, particularly around traditional thresholds such as 5% and 1%, by employing techniques like the caliper test to detect anomalous patterns in the distribution of test statistics (a sketch follows this list). (Brodeur et al. 2023)

  • Be aware of and account for the potential influence of editorial decisions, such as desk rejections, on the distribution of test statistics in your analyses, particularly around conventional significance thresholds. (Brodeur et al. 2023)
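
The caliper-test sketch promised above: within a narrow window around |z| = 1.96, count how many test statistics land just above versus just below the threshold, where roughly even counts are expected absent selection. The z-statistics and caliper width are illustrative.

```python
import numpy as np
from scipy.stats import binomtest

# z-statistics collected from published tables (illustrative values).
z_stats = np.array([1.72, 1.98, 2.03, 1.97, 2.10, 1.99, 1.91, 2.01,
                    2.30, 1.94, 2.05, 1.96, 2.22, 2.00, 1.88, 2.04])

# Caliper test around the 5% threshold: absent selection, statistics
# should fall just above and just below |z| = 1.96 about equally often.
caliper = 0.10
window = z_stats[np.abs(z_stats - 1.96) <= caliper]
above = int(np.sum(window > 1.96))
result = binomtest(above, n=len(window), p=0.5, alternative="greater")
print(f"{above}/{len(window)} in caliper above 1.96; p = {result.pvalue:.3f}")
```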

>> Alternative Approaches: RCT & Regression Discontinuity Design
  • Be aware of the potential for p-hacking and publication bias, particularly when using certain inferential methods such as instrumental variables (IV) and difference-in-differences (DID), and consider using alternative methods such as randomized controlled trials (RCT) or regression discontinuity design (RDD) to mitigate these issues. (Brodeur, Cook, and Heyes 2020)
>> Pre-Analysis Plans (PAPs) for Reducing P-Hacking
  • Use pre-analysis plans (PAPs) instead of basic pre-registration to effectively reduce p-hacking and publication bias in Randomized Controlled Trials (RCTs). (Brodeur et al. 2022)

  • Always include a pre-analysis plan (PAP) in your pre-registration efforts to effectively reduce p-hacking and publication bias. (Karlan et al. 2016)

>> Data Sharing Policies & Their Impact on P-hacking
  • Consider the role of data-sharing policies and data type in reducing p-hacking and publication bias, although the paper found no evidence that requiring authors to share their data at the time of publication or using harder-to-access data types reduces p-hacking. (Brodeur, Cook, and Neisser 2024)
>> Instrumental Variables & DID Susceptibility to P-Hacking
  • Carefully consider the impact of p-hacking when using instrumental variables and difference-in-difference designs, as these methods appear to be more susceptible to such practices than randomized controlled trials and regression discontinuity designs. (Brodeur, Cook, and Heyes 2022b)

> Improving Reliability through Transparent Research Design

>> Addressing Common Pitfalls in Statistical Inference
  • Avoid selective reporting of placebo tests, particularly those with statistically significant results in the same direction as the main hypothesis, as it biases the evidence in favor of the research design's validity. (Dreber, Johannesson, and Yang 2023)

  • Be wary of relying solely on p-values for statistical inference due to their inherent limitations and potential misinterpretations, and instead consider incorporating Bayesian approaches, such as the minimum Bayes factor, to improve the robustness and transparency of your findings. (Harvey 2017)

  • Be aware of the potential for “inflation” in the distribution of published test statistics, where a large residual exists that cannot be explained solely by selection processes, and that this inflation may be due to researchers manipulating their results to obtain marginally significant results. (Brodeur et al. 2016)

  • Exercise caution when interpreting statistically significant results, especially those with small effect sizes, due to the potential for high false discovery rates caused by low power and the possibility of multiple testing. (Benjamini and Hochberg 1995)
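
Since the bullet above cites Benjamini and Hochberg (1995), here is a minimal sketch of their step-up procedure for controlling the false discovery rate; the p-values are invented.

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Boolean mask of discoveries controlling the FDR at level q."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    discoveries = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()   # largest i with p_(i) <= q*i/m
        discoveries[order[: k + 1]] = True
    return discoveries

p_vals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.900]
print(benjamini_hochberg(p_vals, q=0.05))   # only the two smallest survive here
```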

>> Pre-registration, Documentation, & Mitigating Data Mining
  • Avoid engaging in p-hacking and selective publication; the pronounced peak in the distribution of z-statistics around the 1.96 threshold for statistical significance indicates a higher frequency of statistically significant results than would be expected in the absence of such behaviors. (Brodeur, Cook, and Heyes 2022a)

  • Carefully document and justify your hidden decisions during data collection, preparation, and analysis, as these choices can significantly affect the results and conclusions drawn from empirical studies. (Huntington‐Klein et al. 2021)

  • Consider using pre-analysis plans (PAPs) to specify hypotheses and analyses prior to data collection in order to minimize issues of data and specification mining and provide a record of the full set of planned analyses. (“Editorial Statement on Negative Findings” 2015)

>> Addressing Misconceptions & Limitations in Traditional Statistical Approaches
  • Embrace variation and uncertainty, avoid the temptation to seek statistical significance as a definitive proof of an effect, and recognize the limitations of peer review and statistical significance testing in ensuring the accuracy of scientific findings. (Gelman 2018)

  • Avoid over-relying on Null Hypothesis Significance Testing (NHST) and instead report Confidence Intervals (CIs) to provide a more comprehensive understanding of your findings. (Fidler et al. 2004)

  • Be aware of the potential for the “winner’s curse” in scientific publication, whereby the most extreme and spectacular results may be preferentially published, potentially leading to overestimation and distortion of the true relationship being studied. (n.d.)

>> Pre-Analysis Plans: Benefits vs Limitations
  • Consider using pre-analysis plans (PAPs) to reduce the risk of false discoveries caused by data dredging, p-hacking, and HARKing, while allowing for transparency and flexibility in updating the plan as new information arises, thus balancing rigorous research standards with the need for exploration and adaptation. (Magnan 2017)

  • Carefully weigh the benefits of pre-analysis plans against their potential limitations, particularly in situations where replications are feasible, as pre-analysis plans may not significantly improve the reliability of results when multiple hypotheses are tested, null results go unreported, or novel research designs are required. (Coffman and Niederle 2015)

>> Pre-registered Studies for Enhanced Transparency & Reduced Flexibility
  • Prioritize pre-registration of your studies to increase transparency, reduce researcher degrees of freedom, and avoid altering analytical approaches or variable combinations based on initial results. (Kagan, Leider, and Lovejoy 2018)

Misuse and Misinterpretation of p-Values

> Moving Beyond Traditional p-Value Reliance

>> Interpreting and Contextualizing p-Values Correctly
  • Avoid conflating p-values with the probability of a type I error, as p-values indicate the strength of evidence against the null hypothesis, not the likelihood of making a mistaken rejection of the null hypothesis. (Gao 2020)

  • Distinguish between Fisherian significance tests, which yield continuous P-values interpreted as evidence against the null hypothesis, and Neyman-Pearsonian hypothesis tests, which produce binary decisions based on pre-determined Type I and Type II error rates. (Lew 2019)

  • Recognize the limitations of statistical tests, particularly the misleading nature of binary classifications like “statistically significant” or “non-significant”, and instead focus on estimating the size of effects and the uncertainty around those estimates. (Greenland et al. 2016)

  • Avoid making scientific conclusions solely based on whether or not a p-value crosses the 0.05 threshold, and instead take a more holistic view of the evidence that includes the design of a study, the quality of the measurements, the external evidence for the phenomenon under study, and the validity of assumptions that underlie the data analysis. (Wasserstein and Lazar 2016)

  • Avoid misinterpreting p-values as the probability of the null hypothesis being true and recognize that the choice of statistical significance threshold is arbitrary and subject to debate. (Woolston 2015)

  • Recognize the intimate connection between P-values and likelihood functions, and interpret P-values as indices of experimental evidence that quantify the support for hypotheses through the likelihood functions they index. (Lew 2013)

>> Incorporating Prior Knowledge & Alternative Measures
  • Be aware of the limitations of P values as they are only estimates of the probability of obtaining the observed results given a true null hypothesis, and not the probability of the null hypothesis being true given the observed results. Furthermore, P values rely on numerous assumptions and estimates, including the randomness of the sample, absence of bias, and meeting of statistical model assumptions, which can affect their accuracy. Therefore, consider alternative measures of uncertainty, such as confidence intervals, and incorporate prior evidence. (Cohen 2011)

  • Avoid misinterpreting p-values as the probability of the null hypothesis being true, as they do not represent this probability and instead reflect the probability of observing the data or more extreme data given that the null hypothesis is true. (S. Goodman 2008)

  • Consider incorporating Bayesian methods alongside frequentist methods to improve the credibility assessment of new research findings, particularly in situations where prior knowledge is available and can inform the analysis. (Gill, Sabin, and Schmid 2005)

  • Avoid conflating the p-value, which measures the compatibility of data with a null hypothesis, with the posterior probability of the null hypothesis being true, as the former does not directly inform the latter and can lead to misleading interpretations of statistical results. (S. N. Goodman 1999)

  • Interpret statistical tests in light of prior knowledge and the specific context of the study, rather than relying solely on p-values or other statistical measures without considering their limitations and assumptions. (Browner 1987)

>> Improving Confidence Interval Understanding & Utilization
  • Accurately interpret confidence intervals (CIs) as providing information about the reliability of an estimation procedure, rather than as probabilistic statements about the estimated parameter itself. (Hoekstra et al. 2014)

  • Avoid over-reliance on p-values and instead consider a range of inter-related findings, along with careful consideration of the limitations and variability of p-values, to accurately interpret trial results and improve the validity and repeatability of scientific findings. (Mudholkar and Chaubey 2009)

  • Prioritize presenting results with narrow confidence intervals over those with low p-values, as the former are more robust to random error and therefore more reliable indicators of statistical stability. (Poole 2001)

>> Advocating for Bayes Factors over p-Values
  • Consider using Bayes factors instead of p-values to evaluate evidence in your studies, as Bayes factors take into account prior knowledge and the specific alternative hypothesis being tested, while p-values do not and can therefore be misleading. (Katki 2008)

  • Consider adopting the Bayes factor as an inferential tool, which measures the strength of evidence in favor of a hypothesis compared to an alternative hypothesis, rather than relying solely on P-values, which can lead to misinterpretations due to their dependence on sample size and lack of consideration of prior evidence. (Sober 2004)
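
To connect the Bayes-factor bullets above to reported p-values, the sketch below computes two well-known lower bounds on the Bayes factor in favor of the null for a two-sided p-value: the Edwards/Goodman exp(-z^2/2) bound and the Sellke-Bayarri-Berger -e*p*ln(p) bound.

```python
import numpy as np
from scipy.stats import norm

def min_bayes_factor_z(p):
    """Edwards/Goodman minimum Bayes factor exp(-z^2/2): the strongest
    possible evidence against H0 over normal alternatives."""
    z = norm.isf(p / 2)            # two-sided p -> |z|
    return np.exp(-z ** 2 / 2)

def min_bayes_factor_sellke(p):
    """Sellke-Bayarri-Berger bound -e*p*ln(p), valid for p < 1/e."""
    return -np.e * p * np.log(p)

for p in (0.05, 0.01, 0.001):
    print(f"p = {p}: min BF = {min_bayes_factor_z(p):.3f} (z-based), "
          f"{min_bayes_factor_sellke(p):.3f} (Sellke bound)")
```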

>> Alternatives and Improvements to Null Hypothesis Significance Testing
  • Consider the sample size when interpreting p-values, as the probability of rejecting the null hypothesis increases with larger sample sizes, leading to potential false discoveries. To mitigate this issue, the authors suggest using an exponential function to model the relationship between p-values and sample size, allowing for better detection of meaningful differences in large datasets. (Gómez-de-Mariscal et al. 2021)

  • Prioritize using estimation plots over traditional null-hypothesis significance testing (NHST) because estimation plots provide a more transparent, precise, and comprehensive representation of the data, allowing for more informed and nuanced conclusions. (Ho et al. 2018)

>> Alternative Approaches to Evaluate Evidence beyond p-Values
  • Explicitly define a threshold of relevance for your statistical analyses, and base your inferences on the confidence interval for the effect relative to this threshold, rather than relying solely on p-values and null hypothesis significance testing. (Stahel 2021)

  • Utilize the A Priori Procedure (APP) to determine the required sample size before conducting a study, allowing you to confidently estimate population parameters based on sample statistics with specified probabilities of being within acceptable distances of those parameters, eliminating the need for Null Hypothesis Significance Testing, p-values, or Confidence Intervals. (Trafimow 2019)

>> Alternatives and Improvements to Null Hypothesis Significance Testing
  • Carefully distinguish between statistical significance and economic importance, avoid p-hacking and HARKing, and consider using alternative statistical approaches such as Bayesian methods to reduce misinterpretations and improve the robustness of your findings. (Tibugari et al. 2022)

  • Consider using null-hypothesis significance testing (NHST) to make ordinal claims about the direction of an effect, provided that the null hypothesis is plausible and the researcher wants to control error rates in the long run, rather than seeking to determine the probability of a hypothesis being true. (Lakens 2021)

  • Emphasize point estimates and measures of uncertainty, such as confidence intervals, over statistical significance testing and p-values when communicating results to decision-makers. (Imbens 2021)

  • Avoid drawing conclusions solely based on the p-value without taking into account its inherent variability and potential for misinterpretation, especially in the presence of multiple testing and publication bias. (Hirschauer et al. 2018)

  • Prioritize collecting high-quality data through larger sample sizes, reducing measurement error, and employing within-person designs, while avoiding the pitfalls of null hypothesis significance testing and selectively reporting statistically significant results. (Gelman 2017)

  • Avoid seeking certainty in your findings and instead embrace uncertainty by presenting statistical conclusions with appropriate levels of uncertainty rather than as binary outcomes. (Wasserstein and Lazar 2016)

>> Preemptive Considerations for Data Handling to Avoid Spurious Findings
  • Carefully consider all potential analytic decisions before collecting data, including those involving data coding, exclusion, and analysis, since these choices can significantly impact the validity of p-values and lead to spurious findings. (Wasserstein and Lazar 2016)

> Avoiding Common Pitfalls in Statistical Inference

>> Alternatives to Null Hypothesis Significance Testing (NHST)
  • Consider using equivalence or reverse tests instead of traditional significance tests to avoid incorrectly interpreting the absence of evidence as evidence of absence, particularly when the goal is to compare the similarity of treatment effects rather than merely testing for a difference. (Rahnenführer et al. 2023)

  • Carefully choose your statistical tests depending on the type of variables you are working with, and interpret the results cautiously while considering factors like sample size, statistical power, and clinical significance. (Concato and Hartigan 2016)

  • Consider using equivalence tests instead of traditional significance tests to assess whether two treatments produce similar enough effects for practical purposes, rather than focusing solely on whether they are different. (Anderson, Burnham, and Thompson 2000)

  • Focus on identifying plausible alternative hypotheses before interpreting the results of statistical tests, rather than solely seeking to reject the null hypothesis. (NA?)

  • Avoid making decisions based solely on null hypothesis testing, as it can lead to misleading interpretations due to the conflation of statistical and practical significance, and instead focus on estimating the magnitude of effects and their uncertainty through confidence intervals or standard errors. (NA?)

>> Clarifying Differences and Proper Use of Significance Testing
  • Recognize and differentiate between the distinct types of applications of statistical significance tests, particularly between their use in routine decision-making and in communicating uncertainty in specific conclusions, and understand the limitations and proper interpretation of p-values. (D. R. Cox 2020)

  • Report p-values instead of simply stating whether your findings are statistically significant at a predetermined level, as p-values provide more nuanced information and allow readers to make their own decisions regarding acceptable Type I error rates. (Dahiru 2011)

  • Avoid using statistical significance testing alone to evaluate your results, as it can lead to misleading conclusions due to its limitations and potential for misinterpretation. Instead, use confidence intervals to provide a more comprehensive understanding of the estimated effect size and its precision. (Stang, Poole, and Kuss 2010)

  • Avoid using p-values derived from null-hypothesis significance testing (NHST) due to their dependency on both unobserved data and potentially unknown subjective intentions of the researcher, which can lead to misleading interpretations of statistical evidence. (Wagenmakers 2007)

  • Understand the fundamental differences between Fisher’s significance testing and inductive inference, which focuses on the strength of evidence against the null hypothesis using p-values, and Neyman-Pearson’s hypothesis testing and inductive behavior, which uses alpha and beta error rates to make decisions between two hypotheses, and avoid confusing the two approaches. (Hubbard and Bayarri 2003)

>> Distinguishing between statistical and economic significance
  • Avoid conflating statistical significance with economic significance, as the latter requires a nuanced understanding of the scientific context and the practical relevance of the findings. (Ziliak and McCloskey 2004)

  • Do not conflate statistical significance with economic significance, as statistical significance only indicates the likelihood of observing a given effect due to chance, while economic significance depends on the context and relevance of the research question. (NA?)

>> Addressing Publication Bias, Data Fishing, Multiple Comparisons, and Meta-Analysis
  • Supplement traditional meta-analysis with additional analyses to address the file drawer problem, reduce vulnerability to criticism of individual studies, and draw more informative conclusions when using probability poolers to combine independent p-values. (McCauley and Christiansen 2019)

  • Carefully account for potential sources of variability in your data analysis methods, especially when these methods depend on the observed data, to avoid inflation of Type I error rates due to multiple comparisons. (Gelman and Loken 2014)
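
A minimal simulation of this point, under illustrative assumptions (a true null, three arbitrary data-dependent specifications): an analyst who tries several reasonable analyses and reports the best-looking one rejects far more often than the nominal rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_sims, alpha = 100, 2000, 0.05
rejections = 0
for _ in range(n_sims):
    x = rng.normal(size=n)                      # predictor
    y = rng.normal(size=n)                      # outcome, truly independent of x
    covariate = rng.normal(size=n)
    pvals = [
        stats.pearsonr(x, y)[1],                                 # raw analysis
        stats.pearsonr(x[covariate > 0], y[covariate > 0])[1],   # post-hoc subgroup
        stats.pearsonr(x, y - 0.5 * covariate)[1],               # post-hoc "adjustment"
    ]
    rejections += min(pvals) < alpha            # report the most favorable test
print(f"Realized Type I error: {rejections / n_sims:.3f} (nominal {alpha})")
```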

  • Consider adopting comprehensive but non-binding registration of your analysis plans to reduce the risk of fishing for statistically significant results, thereby improving the reliability and credibility of published research. (Humphreys, Sierra, and Windt 2013)

  • Report a comprehensive set of results rather than cherry-picking statistically significant or otherwise favorable findings, in order to minimize the risk of introducing publication bias in situ (PBIS) and ensure the validity of the overall body of literature. (Phillips 2004)

>> Exploratory vs Confirmatory Studies & Correct Interpretations
  • Carefully consider the appropriateness of using hypothesis tests in epidemiology, especially when they are used informally; if you do use them, recognize the limitations of p-values and consider alternative methods such as Bayesian analysis or confidence intervals. (Gralinski and Menachery 2020)

  • Carefully define the familywise error rate in terms of different tests of the same hypothesis, rather than tests of multiple hypotheses, to avoid losing the meaning of p-values in exploratory analyses. (Rubin 2017)

  • Differentiate between exploratory and confirmatory studies, recognizing that statistical inferences drawn from exploratory analyses lack evidential impact due to the inherently flexible nature of data collection and analysis procedures employed in such designs. (Groot 2014)

>> Improving Interpretations Beyond Traditional Null Hypothesis Significance Testing
  • Carefully define the smallest substantively meaningful effect size, denoted as m, and then use two one-sided tests (TOST) or confidence intervals to determine whether the observed effect falls within the negligible range (-m, m), rather than relying solely on the absence of statistical significance to infer a negligible effect. (Rainey 2014)
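
A minimal TOST sketch under illustrative assumptions (Welch two-sample t-statistics, simulated data, an arbitrary choice of m); statsmodels also ships a ready-made version (statsmodels.stats.weightstats.ttost_ind).

```python
import numpy as np
from scipy import stats

def tost_welch(x, y, m):
    """Two one-sided Welch tests of H0: |mean(x) - mean(y)| >= m."""
    nx, ny = len(x), len(y)
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    diff = np.mean(x) - np.mean(y)
    se = np.sqrt(vx / nx + vy / ny)
    df = se**4 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    p_lower = 1 - stats.t.cdf((diff + m) / se, df)   # H0: diff <= -m
    p_upper = stats.t.cdf((diff - m) / se, df)       # H0: diff >= +m
    return max(p_lower, p_upper)                     # reject both to conclude negligibility

rng = np.random.default_rng(1)
x, y = rng.normal(0.0, 1, 200), rng.normal(0.05, 1, 200)
print(f"TOST p-value with m = 0.3: {tost_welch(x, y, 0.3):.4f}")
```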

  • Avoid interpreting \(P\) values as definitive evidence for or against a hypothesis, as they can be misleading due to their sensitivity to sample size, prior information, and multiple comparisons. Instead, researchers should consider using more robust statistical measures, such as Bayesian methods that incorporate prior information, to make more informed conclusions. (Gelman 2013)

  • Always visually inspect your data through plots and graphs before interpreting the p-value, as different relationships between variables can yield similar p-values, leading to potentially misleading conclusions. (Hewitt, Mitchell, and Torgerson 2008)

>> Assessing Validity through Placebos and Initial Hypotheses
  • Ensure your placebo tests are informative by checking whether a failing result is more likely if the research design's assumptions are violated than if they hold, which requires additional assumptions beyond those used in the original research design. (Eggers, Tuñón, and Dafoe 2023)

  • Structure your tests of design so that they positively demonstrate that the data are consistent with your identification assumptions or theory, starting from the initial hypothesis that the data are inconsistent with a valid research design and rejecting this hypothesis only if there is sufficient statistical evidence that the data are consistent with a valid design. (Hartman and Hidalgo 2018)

> Emphasis on Practical Significance Over Statistical Significance

>> Confidence Intervals for Demonstrating Practical Significance
  • Prioritize interpreting the practical significance of your results over merely focusing on statistical significance, as demonstrated through the use of confidence intervals and other relevant statistical measures. (Aarts, Winkens, and Den Akker 2011)
>> Beyond p-values: prioritizing practical significance and effect size
  • Primarily focus on estimating the magnitude and precision of treatment effects, rather than relying solely on P-values and hypothesis testing, which can provide limited and potentially misleading information. (Mark, Lee, and Harrell 2016)

  • Prioritize careful experimental design and consider statistical tests as only one tool among many for drawing conclusions, recognizing that statistical tests alone cannot establish causality or clinical significance. (Shao and Feng 2007)

> Effect Size Interpretation and Error Mitigation Strategies

>> Promoting Effect Sizes Over P-Values for Practical Significance
  • Report effect sizes alongside p-values because effect sizes convey the practical significance and magnitude of the results, while p-values alone only indicate statistical significance without providing information about the size or direction of the effect. (G. M. Sullivan and Feinn 2012)

  • Prioritize interpreting effect sizes over relying solely on p-values, especially in studies with large sample sizes where even minuscule effects can become statistically significant while remaining practically negligible. (Sainani 2012)
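
A small illustration of the large-sample point, on simulated data: the p-value is tiny while a hand-computed Cohen's d shows the effect is practically negligible.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(0.00, 1, 10_000)
b = rng.normal(0.05, 1, 10_000)          # true difference is trivially small

t, p = stats.ttest_ind(a, b)
pooled_sd = np.sqrt((np.var(a, ddof=1) + np.var(b, ddof=1)) / 2)  # equal-n pooling
d = (np.mean(b) - np.mean(a)) / pooled_sd
print(f"p = {p:.4f}, Cohen's d = {d:.3f}")  # "significant" p, negligible d
```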

> Beyond p-values: holistic interpretation of statistical evidence

>> Holistic Interpretation of Evidence beyond p-values
  • Avoid basing scientific conclusions or policy decisions solely on whether a p-value passes a specific threshold, and instead consider a broader range of factors including study design, data quality, related prior evidence, plausibility of mechanism, real-world costs and benefits, and other measures of evidence such as effect size estimates, confidence and prediction intervals, likelihood ratios, or graphical representations. (Bonovas and Piovani 2023)
>> Statistical vs Clinical Significance: Interpreting Study Results
  • Carefully distinguish between statistical and clinical significance when interpreting study results, as statistically significant findings may not always translate to meaningful or beneficial outcomes for patients. (Kul 2014)

> Contextualizing Significance Levels and Interpreting Results

>> Statistically significant results do not always imply practical importance
  • Avoid drawing strong conclusions solely based on differences in statistical significance levels, as these differences may not reflect meaningful distinctions in the underlying population parameters. (Gelman and Stern 2006)
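
A worked sketch of this point with illustrative numbers: study A is "significant," study B is not, yet the difference between them is itself far from significant.

```python
import numpy as np
from scipy import stats

est_a, se_a = 25.0, 10.0                 # study A: z = 2.5
est_b, se_b = 10.0, 10.0                 # study B: z = 1.0
for name, est, se in [("A", est_a, se_a), ("B", est_b, se_b)]:
    z = est / se
    print(f"study {name}: z = {z:.2f}, p = {2 * stats.norm.sf(abs(z)):.3f}")

# SE of the difference = sqrt(se_a^2 + se_b^2)
z_diff = (est_a - est_b) / np.hypot(se_a, se_b)
print(f"difference: z = {z_diff:.2f}, p = {2 * stats.norm.sf(abs(z_diff)):.3f}")
```
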
>> Comparing RCTs and Observational Studies: Limitations and Biases
  • Do not automatically assume that randomized controlled trials are superior to observational studies, as discrepancies between the two designs may be due to differences in study populations or other sources of bias, rather than inherent flaws in the observational design. (Rothman 2014)
>> Beyond p-values: Effect Sizes, Mechanisms, Replication
  • Avoid relying solely on p-values for decision-making and instead report effect sizes and justify your chosen a priori significance levels to ensure accurate interpretation of findings and prevent both Type I and Type II errors. (Aguinis, Vassar, and Wayant 2019)

  • Utilize the unique advantages of properly designed and executed clinical trials, such as protocolization, scientific review, prospective data collection, and randomization, to ensure accurate statistical inference and avoid common pitfalls associated with observational data and poor statistical training. (Cook et al. 2018)

  • Move beyond relying solely on p-values and instead incorporate multiple factors such as effect sizes, plausible mechanisms, and replication efforts to ensure robust and reliable conclusions. (Gaudart et al. 2014)

>> Customizing Alpha Levels Based on Cost-Benefit Analysis
  • Carefully consider the context-specific costs and benefits of false positives and false negatives when choosing the alpha level for your statistical tests, rather than relying blindly on the traditional threshold of 0.05. (Palesch 2014)
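
A hedged sketch of one way to operationalize this: choose alpha to minimize expected loss for a one-sided z-test. The costs, prior probability, effect size, and sample size below are illustrative assumptions, not values from the cited paper.

```python
import numpy as np
from scipy import stats

cost_fp, cost_fn = 10.0, 1.0      # false positive assumed 10x as costly as false negative
prior_alt = 0.3                   # assumed prior probability that the effect is real
effect, n = 0.3, 50               # standardized effect size and sample size

alphas = np.linspace(0.001, 0.2, 400)
# Power of a one-sided z-test: P(Z > z_{1-alpha}) under the alternative
power = stats.norm.sf(stats.norm.isf(alphas) - effect * np.sqrt(n))
loss = cost_fp * alphas * (1 - prior_alt) + cost_fn * (1 - power) * prior_alt
print(f"loss-minimizing alpha: {alphas[np.argmin(loss)]:.3f}")
```
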
>> Lower P-Value Thresholds for Improved Study Design
  • Consider adopting much lower P-value thresholds (e.g., P < 10^-6), which can lead to better-designed studies with higher power but may also increase bias and reduce clinical relevance of study endpoints. (J. P. A. Ioannidis 2018)
>> Beyond P-values: Importance of Practical Significance
  • Do not rely solely on P-values when interpreting statistical significance, as they cannot convey the practical importance of findings and may lead to incorrect conclusions. (Ni 2017)

> Difference-in-Differences Approach in Health Policy Evaluations

> Addressing Common Pitfalls in Causal Inference and Analysis

>> Alternative Approaches for Improving Accuracy and Minimizing Multiple Comparisons
  • Consider utilizing hierarchical Bayesian or empirical-Bayes regression methods, particularly in situations involving multiple exposures or fishing expeditions, as these methods can improve estimation accuracy by incorporating prior information and reducing the impact of multiple comparisons. (Celentano, Platz, and Mehta 2019)
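
A minimal empirical-Bayes sketch (normal-normal shrinkage with a method-of-moments prior variance, simulated data), showing how shrinking many noisy estimates toward the grand mean reduces overall error.

```python
import numpy as np

rng = np.random.default_rng(3)
true_effects = rng.normal(0.0, 0.2, 50)             # many small true effects
se = 0.3
estimates = true_effects + rng.normal(0.0, se, 50)  # noisy per-exposure estimates

grand_mean = estimates.mean()
tau2 = max(estimates.var(ddof=1) - se**2, 0.0)      # method-of-moments prior variance
shrink = tau2 / (tau2 + se**2)                      # posterior weight on the data
posterior = grand_mean + shrink * (estimates - grand_mean)

print(f"RMSE raw:    {np.sqrt(np.mean((estimates - true_effects) ** 2)):.3f}")
print(f"RMSE shrunk: {np.sqrt(np.mean((posterior - true_effects) ** 2)):.3f}")
```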

  • Prioritize quantifying associations instead of performing statistical hypothesis tests, especially in observational epidemiology where formal statistical inference is challenging; furthermore, if statistical tests are conducted, researchers should avoid cherry-picking significant findings and consider alternative approaches like empirical Bayes and semi-Bayesian methods to improve accuracy while minimizing the impact of multiple comparisons. (Savitz and Olshan 1998)

>> Addressing Measurement Error and Selection Bias
  • Be aware that selection bias can occur even without collider stratification, particularly when conditioning on a variable that is not a collider and has an unmeasured common cause with the outcome, leading to potential overestimation of the true causal effect. (Hernán 2017)

  • Carefully consider and account for potential sources of measurement error in your causal diagrams, including dependent and differential measurement errors, as failure to do so can lead to biased estimates of the relationship between exposures and outcomes. (Hernan and Cole 2009)

Advanced Regression Techniques for Improved Accuracy

> Variable Selection Strategies for High Dimensional Data

>> Multi-Split Methods for Robust and Reproducible Results
  • Consider using multi-sample splitting methods for conducting hypothesis tests in high-dimensional settings, as they provide approximately reproducible and robust p-values compared to traditional single sample splitting approaches. (Dezeure et al. 2015)

  • Use a multi-split method when conducting high-dimensional regression analysis to ensure robustness and reproducibility of results, as it provides asymptotic control over the inclusion of noise variables while improving power and reducing the number of falsely selected variables compared to traditional methods. (Meinshausen, Meier, and Bühlmann 2008)
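
A simplified multi-split sketch in the spirit of the cited procedure: select variables on one half with scikit-learn's LassoCV, compute Bonferroni-adjusted OLS p-values on the other half with statsmodels, and aggregate across splits with the twice-the-median rule (a special case of the paper's quantile aggregation). Data and the number of splits are illustrative.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(4)
n, p = 200, 50
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(size=n)

B = 50
pvals = np.ones((B, p))
for b in range(B):
    idx = rng.permutation(n)
    half1, half2 = idx[: n // 2], idx[n // 2:]
    sel = np.flatnonzero(LassoCV(cv=5).fit(X[half1], y[half1]).coef_ != 0)
    if sel.size == 0:
        continue
    ols = sm.OLS(y[half2], sm.add_constant(X[half2][:, sel])).fit()
    pvals[b, sel] = np.minimum(ols.pvalues[1:] * sel.size, 1.0)  # Bonferroni within split

p_final = np.minimum(2 * np.median(pvals, axis=0), 1.0)          # gamma = 0.5 aggregation
print("variables with aggregated p < 0.05:", np.flatnonzero(p_final < 0.05))
```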

>> Innovative Penalties & Algorithms for Robust Variable Selection
  • Consider using the Thresholded EEBoost (ThrEEBoost) algorithm for variable selection in high-dimensional data analysis because it enables exploration of a greater diversity of variable selection paths, potentially leading to models with lower prediction error compared to traditional methods. (Speiser et al. 2019)

  • Consider using the Orthogonalizing Expectation Maximization (OEM) algorithm for penalized regression analysis in situations involving “tall” data sets, where the number of observations far exceeds the number of variables, as it offers significant computational advantages compared to other methods. (Huling and Qian 2018)

  • Consider using a Bayesian approach to implement the elastic net regularization method, as it provides a natural way to incorporate uncertainty into estimates, allows for simultaneous selection of penalty parameters, and addresses the issue of double shrinkage present in traditional implementations. (Q. Li and Lin 2010)

  • Consider using an adaptive false discovery rate (FDR) controlling procedure for multiple testing in model selection, which can address both high and low proportions of true hypotheses among the tested ones and improve upon existing methods that have limitations in non-sparse models. (Benjamini and Gavrilov 2009)
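
The cited paper develops a particular adaptive penalty-based procedure; as a readily available relative, statsmodels implements the two-stage adaptive Benjamini-Krieger-Yekutieli procedure ("fdr_tsbh"), contrasted here with plain Benjamini-Hochberg on simulated p-values.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(5)
z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])  # 10% true signals
pvals = 2 * stats.norm.sf(np.abs(z))

for method in ("fdr_bh", "fdr_tsbh"):       # plain BH vs. two-stage adaptive BH
    reject, _, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(f"{method}: {reject.sum()} discoveries")
```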

  • Consider using penalized likelihood methods for variable selection in statistical analysis, as these methods offer advantages over traditional stepwise selection procedures by being computationally efficient, accounting for stochastic errors in the variable selection process, and providing a unified framework for simultaneous variable selection and parameter estimation. (Fan and Li 2001)

  • Consider adopting a Bayesian framework for penalized regression techniques, such as the lasso, due to its ability to provide valid standard errors and overcome issues associated with frequentist approaches, while maintaining comparable or superior prediction performance. (NA?)

>> Regularization Methods for Consistent Variable Selection
  • Consider using the adaptive lasso instead of the standard lasso for variable selection, as the adaptive lasso provides both consistent variable selection and optimal prediction while avoiding the pitfalls of overfitting caused by including too many noise features. (Zou 2006)
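
A minimal adaptive-lasso sketch using the standard rescaling trick, here with a ridge pilot estimate and Zou's gamma = 1 weights; the data and tuning choices are illustrative.

```python
import numpy as np
from sklearn.linear_model import LassoCV, Ridge

rng = np.random.default_rng(6)
n, p = 200, 30
X = rng.normal(size=(n, p))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=n)

beta_init = Ridge(alpha=1.0).fit(X, y).coef_   # pilot estimate
w = np.abs(beta_init)                          # adaptive weights (gamma = 1)
X_scaled = X * w                               # column j scaled by w_j, so its
lasso = LassoCV(cv=5).fit(X_scaled, y)         # effective penalty is |beta_j| / w_j
beta_adaptive = lasso.coef_ * w                # map back to the original scale
print("selected variables:", np.flatnonzero(beta_adaptive != 0))
```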

>> Risk Inflation Approach for Model Parsimony vs Predictive Power
  • Evaluate your variable selection procedures using the risk inflation criterion, which compares the maximum possible increase in risk of the chosen estimator versus the ideal estimator that knows the correct predictors, as this approach encourages parsimonious models while still allowing for predictive improvement over simply including all predictors. (Kliemann 1987)
>> Sparse Modeling via Penalized Contrast Estimation
  • Use a specific form of penalty function when performing model selection through the minimization of a penalized least-squares type criterion within a Gaussian framework. Specifically, the authors recommend a penalty of the form \(\mathrm{pen}(m) = K\sigma^2 D_m\bigl(1+\sqrt{2L_m}\bigr)^2\), where the \(L_m\) are nonnegative weights that satisfy certain conditions, in order to obtain risk bounds of the form (3) in the paper with an appropriate value of \(C\). (Birgé and Massart 2006)

  • Employ a minimum penalized empirical contrast estimation method for model selection, which involves choosing the model that minimizes the sum of the empirical contrast function and a penalty term proportional to the ratio of the number of parameters needed to describe the model to the number of observations. This approach ensures that the estimator's quadratic risk is bounded by an index of the accuracy of the sieve, which balances the trade-off between approximation error and parameter dimension relative to sample size. (Barron, Birgé, and Massart 1999)

> Nonlinear Estimation & Adaptive Thresholding for Function Estimation

>> Nonlinear Estimation & SureShrink Algorithm for Besov Spaces
  • Employ nonlinear estimation methods instead of traditional linear methods when dealing with Besov and Triebel scales, particularly when the parameter \(p\) is less than 2, as these scales exhibit spatial variability that cannot be adequately captured by linear estimators. (D. L. Donoho and Johnstone 1998)

  • Consider using the SureShrink method for estimating functions from noisy data, as it employs an adaptive thresholding technique based on the Stein Unbiased Estimate of Risk (SURE) principle, allowing for simultaneous near-minimax optimization over a wide range of Besov spaces, while being computationally efficient with an order N log(N) complexity. (D. L. Donoho and Johnstone 1995)
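
A wavelet-thresholding sketch with PyWavelets on simulated data. For brevity it applies the universal (VisuShrink-style) threshold; SureShrink instead chooses each level's threshold by minimizing the SURE criterion, which adapts better to sparse detail levels.

```python
import numpy as np
import pywt

rng = np.random.default_rng(7)
n = 1024
t = np.linspace(0, 1, n)
signal = np.sin(8 * np.pi * t) * (t > 0.5)       # spatially inhomogeneous signal
noisy = signal + 0.2 * rng.normal(size=n)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale from finest details
thresh = sigma * np.sqrt(2 * np.log(n))          # universal threshold
denoised_coeffs = [coeffs[0]] + [
    pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]
]
denoised = pywt.waverec(denoised_coeffs, "db4")
print(f"MSE noisy: {np.mean((noisy - signal) ** 2):.4f}, "
      f"denoised: {np.mean((denoised[:n] - signal) ** 2):.4f}")
```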

>> Wavelets vs COSSO for Density Estimation & Model Selection
  • Consider using wavelet threshold estimators for density estimation because they offer nearly optimal performance across various global error measures and function spaces, including Besov spaces, and can effectively handle both dense and sparse data. (NA?)

> Robust Estimation Techniques for Handling Outliers

>> Residual Adjustment Function Selection for Efficiency & Robustness
  • Pay close attention to the choice of your residual adjustment function (RAF) when conducting statistical analysis, as the shape of the RAF determines the efficiency and robustness properties of the corresponding estimator, and can lead to significant differences in performance, particularly in the presence of outliers or contamination. (Kliemann 1987)

>> Alternative Scale Measures & High Breakdown Point Regressions
  • Consider alternative measures of scale beyond the traditional median absolute deviation (MAD), specifically the estimators Sn and Qn, which offer higher efficiency and do not rely on assumptions of symmetry. (Rousseeuw and Croux 1993)
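
A naive O(n^2) sketch of the Qn estimator (a scaled order statistic of the pairwise distances, without small-sample corrections); the cited paper gives an O(n log n) algorithm for production use.

```python
import numpy as np

def qn_scale(x):
    """Naive Qn: 2.2219 times the k-th order statistic of |x_i - x_j|, i < j."""
    x = np.asarray(x)
    n = len(x)
    h = n // 2 + 1
    k = h * (h - 1) // 2                  # k = C(h, 2)
    i, j = np.triu_indices(n, k=1)
    pairwise = np.sort(np.abs(x[i] - x[j]))
    return 2.2219 * pairwise[k - 1]       # consistency factor for the normal

rng = np.random.default_rng(8)
clean = rng.normal(0, 1, 200)
contaminated = np.concatenate([clean, rng.normal(0, 20, 20)])  # ~10% outliers
print(f"sd (contaminated): {contaminated.std(ddof=1):.2f}")
print(f"Qn (contaminated): {qn_scale(contaminated):.2f}")      # stays near 1
```
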
>> Robust Statistics for Small Sample Sizes & Outlier Mitigation
  • Carefully consider the potential impact of outliers on your statistical analyses, particularly when working with small sample sizes, and employ robust methods such as the median and trimmed mean to mitigate the effects of extreme values. (Dixon 1953)

> Robust Distance Measures for Outlier Detection in Multivariate Data

>> Robust Distances for Low Observations-to-Dimensions Ratio
  • Use robust distance measures instead of classical Mahalanobis distances to effectively detect outliers in multivariate datasets, especially when the ratio of observations to dimensions is low. (Rousseeuw and Zomeren 1990)
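
A minimal sketch contrasting classical and robust (MCD-based) squared Mahalanobis distances with scikit-learn; the data, contamination, and chi-squared cutoff are illustrative.

```python
import numpy as np
from scipy import stats
from sklearn.covariance import EmpiricalCovariance, MinCovDet

rng = np.random.default_rng(9)
X = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=195)
X = np.vstack([X, rng.uniform(4, 6, size=(5, 2))])        # planted outliers

cutoff = stats.chi2.ppf(0.975, df=X.shape[1])
d2_classic = EmpiricalCovariance().fit(X).mahalanobis(X)  # squared distances
d2_robust = MinCovDet(random_state=0).fit(X).mahalanobis(X)
print("flagged (classical):", np.flatnonzero(d2_classic > cutoff).size)
print("flagged (robust):  ", np.flatnonzero(d2_robust > cutoff).size)
```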

> Nonparametric Regression Models: Optimization & Robust Estimation

>> Local Linear Smoothers for Enhanced Data Variation Capture
  • Consider using a two-step estimation procedure when dealing with varying coefficient models, especially when some coefficients may be smoother than others. This approach can improve the accuracy of estimates compared to a one-step procedure, even when the optimal bandwidth is unknown, while remaining robust to the choice of initial bandwidth. (Fan and Zhang 1999)

  • Consider using local linear least squares kernel estimators instead of traditional kernel estimators because they offer similar simplicity and consistency, but provide more accurate estimates due to their ability to capture local variations in the data through the use of a bandwidth matrix. (Kliemann 1987)

>> Optimal Bandwidth Selection & Robust Rank Correlation
  • Use the observable window \(\widehat{h}_{e}\) instead of attempting to estimate the unobservable optimal window \(h_{0}\), when selecting the bandwidth parameter for kernel density estimation, because \(\widehat{h}_{e}\) performs just as well as \(h_{0}\) to both first and second order, while being attainable. (Hall and Marron 1987)

  • Consider using a robust and simple estimator of the regression coefficient based on Kendall's rank correlation tau, especially when dealing with data containing outliers or heavy tails, as it offers improved performance over traditional least squares methods. (Sen 1968)
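
Sen's slope estimator (the median of pairwise slopes) is available directly in SciPy as theilslopes; a small illustration with planted outliers:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
x = np.linspace(0, 10, 100)
y = 2.0 * x + rng.normal(0, 1, 100)
y[::10] += 30                                  # 10% gross outliers

ols = stats.linregress(x, y)
sen = stats.theilslopes(y, x, 0.95)            # slope, intercept, CI bounds
print(f"OLS slope: {ols.slope:.2f}, Sen slope: {sen[0]:.2f} "
      f"(95% CI {sen[2]:.2f} to {sen[3]:.2f})")
```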

>> Optimal Bandwidth Selection
  • Correct for the bias in the least squares estimator and use a cross-sectional dependence robust variance estimator when working with interactive fixed effects models in the presence of cross-sectional dependence. (Alsan and Goldin 2019)

  • Consider using smoothed cross-validation (SCV) for selecting the bandwidth of a kernel density estimator, as it combines the intuitive appeal of traditional cross-validation with improved stability due to presmoothing of the data, leading to superior performance compared to alternative methods. (Strassen 1964)

> Variable Bandwidth Local Linear Smoothers for Regression

>> Optimized MISE Reduction via Adaptive Bandwidth Selection
  • Use a local linear smoother with a variable bandwidth to estimate regression functions, as it provides flexibility in smoothing different types of functions, reduces boundary effects, and optimizes performance through minimizing the mean integrated squared error (MISE). (Kliemann 1987)

> Nonparametric & Flexible Regression Approaches for Complex Data

>> Product Splines, Piecewise Polynomials, and Bayesian P-Splines
  • Consider using Bayesian P-splines for modeling nonlinear smooth effects of covariates within the generalized additive and varying coefficient models framework, as it allows for simultaneous estimation of smooth functions and smoothing parameters, extension to more complex formulations like mixed models with random effects, and local adaptation of smoothing parameters to handle changing curvature or highly oscillating functions. (Lang and Brezger 2004)

  • Consider using a Bayesian framework for curve fitting, specifically employing a piecewise polynomial model with an unknown number and position of knots, and utilize a reversible jump Markov chain Monte Carlo method to estimate the posterior distribution. (Denison, Mallick, and Smith 1998)

  • Consider using a flexible nonparametric regression modeling approach, specifically the proposed method of product spline basis functions, to accurately capture complex relationships in high-dimensional datasets while maintaining interpretability and computational efficiency. (Kliemann 1987)

  • Consider using a flexible regression modeling technique that employs product spline basis functions, allowing for automatic determination of the number of basis functions, product degree, and knot locations, resulting in a powerful and flexible tool for modeling relationships that are nearly additive or involve interactions in at most a few variables. (NA?)

>> Optimal Smoothing Parameter Selection in GAMs
  • Carefully consider the choice of smoothing parameter when using Generalized Additive Models (GAMs), as it impacts the tradeoff between model complexity and fit, and can be informed by techniques like generalized cross-validation (GCV). (Simon N. Wood 2003)

>> Robust Locally Weighted Regression for Scatterplot Smoothing
  • Consider using locally weighted regression (LOESS) as a flexible and powerful tool for estimating complex relationships in your data, especially when traditional parametric approaches may be too restrictive or miss important patterns. (Cleveland and Devlin 1988)

  • Consider using robust locally weighted regression as a method for smoothing scatterplots, which involves fitting a polynomial to the data using weighted least squares, where the weight for each observation is based on its proximity to the target point, and guarding against deviant points distorting the smoothed points through the use of a robust fitting procedure. (Cleveland 1979)
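
A minimal robust-LOWESS call via statsmodels on simulated data; the `it` argument sets the number of Cleveland-style robustness iterations that downweight deviant points.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(11)
x = np.sort(rng.uniform(0, 10, 300))
y = np.sin(x) + rng.normal(0, 0.3, 300)
y[::25] += 5                                       # gross outliers

smooth = lowess(y, x, frac=0.2, it=3)              # columns: x, fitted y
print("first few smoothed values:", np.round(smooth[:3, 1], 3))
```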

>> Nonlinear Dimensionality Reduction via Principal Curves
  • Consider using principal curves as a nonlinear extension of traditional principal components analysis, which provide a smooth one-dimensional representation of high-dimensional data sets while preserving the inherent structure of the data. (Hastie and Stuetzle 1989)

> Statistical Tests & Estimation Strategies for Complex Data

>> Subsampling Distribution Estimation for Stationary Time Series
  • Consider using subsample values to estimate the sampling distribution of your statistic when the underlying population distribution is unknown or complex, particularly in the context of stationary time series or homogeneous random fields, as long as certain assumptions regarding the behavior of the statistic and the subsampling procedure are met. (Kliemann 1987)
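
A minimal subsampling sketch for the mean of a simulated AR(1) series, assuming the usual root-n convergence rate; the block length b is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(12)
n, phi = 2000, 0.6
eps = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):                    # stationary AR(1): x_t = phi*x_{t-1} + eps_t
    x[t] = phi * x[t - 1] + eps[t]

b = 100                                  # subsample (block) length
theta_hat = x.mean()
block_means = np.array([x[s: s + b].mean() for s in range(n - b + 1)])
dist = np.sqrt(b) * (block_means - theta_hat)   # approximates law of sqrt(n)*(mean - mu)
q_lo, q_hi = np.quantile(dist, [0.025, 0.975])
ci = (theta_hat - q_hi / np.sqrt(n), theta_hat - q_lo / np.sqrt(n))
print(f"95% subsampling CI for the mean: ({ci[0]:.3f}, {ci[1]:.3f})")
```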

>> Bias Correction & Bootstrapping for Nonlinear Fixed Effect Models
  • Use bootstrapping methods to estimate the distribution of maximum-likelihood estimators in fixed-effect models, as these methods accurately capture the asymptotic bias and allow for the construction of valid confidence intervals without requiring explicit bias correction. (Higgins and Jochmans 2022)

  • Consider using second-order bias-correction techniques when dealing with incidental parameters problems in nonlinear panel models with fixed effects, as these techniques offer improved accuracy by reducing the bias of the log-likelihood estimate from O(T^-1) to O(T^-3), leading to more reliable inferences. (Dhaene and Sun 2021)

>> Asymptotic Theory Application on Segmented Regression Models
  • Ensure the identification of the true regression under the null hypothesis before applying the asymptotic theory of Wilks and Chernoff to analyze the log likelihood ratio statistic in segmented regression models. (Kliemann 1987)
>> Detecting Multimodality with Dip Test
  • Consider using the dip test, which measures the maximum difference between the empirical distribution function and the best fitting unimodal distribution, as a way to detect multimodality in data. (Kliemann 1987)
>> Optimal and Robust Hypothesis Testing Approaches
  • Consider using a valid p-value, \(p_\beta\), defined as the supremum of a valid p-value function \(p(\theta)\) over a pre-specified confidence set \(C_\beta\). This approach avoids the need to calculate the supremum over the entire parameter space, reducing computational complexity while maintaining statistical rigor. (Elliott, Müller, and Watson 2015)

  • Carefully consider the concept of optimality when conducting hypothesis testing, taking into account factors such as sample size, type I and II errors, and the specific characteristics of your data and research question. (Romano, Shaikh, and Wolf 2010)

  • Consider using the Truncated Product Method (TPM) for combining P-values across multiple tests, particularly when the total number of tests is large, due to its desirable statistical properties and computational feasibility. (Zaykin et al. 2002)
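
A Monte Carlo sketch of the TPM (the cited paper also derives a closed-form null distribution): multiply only the p-values at or below tau and calibrate the product against independent uniform p-values. The observed p-values and tau are illustrative.

```python
import numpy as np

def tpm_stat(p, tau=0.05):
    """Product of the p-values at or below tau (1.0 if none qualify)."""
    return np.where(p <= tau, p, 1.0).prod(axis=-1)

rng = np.random.default_rng(13)
observed = np.array([0.012, 0.2, 0.03, 0.8, 0.04, 0.55, 0.9, 0.015])
w_obs = tpm_stat(observed)

null_w = tpm_stat(rng.uniform(size=(100_000, observed.size)))  # null: independent U(0,1)
p_combined = (null_w <= w_obs).mean()    # small W = strong combined evidence
print(f"TPM combined p-value: {p_combined:.4f}")
```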

  • Consider using generalized test variables that satisfy specific requirements (e.g., being free of nuisance parameters and stochastically increasing in the parameter of interest) to develop tests with computable \(p\) values that are robust to nuisance parameters. (Tsui and Weerahandi 1989)

>> Nonparametric Goodness-of-Fit Tests & Asymptotics for Structural Change
  • Utilize computationally efficient approximations to calculate asymptotic p-values for structural change tests, specifically by employing a weighted loss function over the p-value space to fit a polynomial model to the distribution of interest. (B. E. Hansen 1997)

  • Consider using Empirical Distribution Function (EDF) statistics for goodness-of-fit tests, especially when the underlying distribution is continuous and fully specified, as they offer greater statistical power compared to traditional chi-square tests while being relatively easy to compute. (NA?)

>> Nonparametric Hypothesis Testing & Noniterative Linear Model Fitting
  • Consider using the generalized likelihood ratio method for testing nonparametric hypotheses, as it offers a flexible and powerful approach that can be applied to a wide range of statistical models, and it inherits the desirable Wilks phenomenon, which ensures that the asymptotic null distribution is independent of nuisance parameters and nearly chi-squared distributed. (Fan, Zhang, and Zhang 2001)

> Improving Estimation & Prediction via Asymptotics, Variance, & Influence

>> Asymptotic Representation of Hypergeometric Functions with Latent Roots
  • Consider utilizing asymptotic representations for hypergeometric functions involving latent roots and matrix variates, as these representations provide insights into how sample and population latent roots interact and offer potential solutions to various statistical problems through the use of simpler functions or computable approximations. (Kliemann 1987)
>> Improving Two-Step GMM Estimates with Small Sample Corrections
  • Correct for the additional variability introduced by estimated parameters when calculating the variance of two-step GMM estimators in small samples, as this can lead to substantially improved accuracy in statistical inference. (Windmeijer 2000)

  • Correct for the additional variability introduced by using estimated parameters in the weight matrix when calculating the efficient two-step GMM estimator, especially in small samples, to obtain more accurate estimates of the variance and improve the reliability of statistical inferences. (NA?)

>> Robust Efficiency & Noise Mitigation in High Dimensions
  • Account for the presence of an “extra Gaussian noise” component in high-dimensional settings, which arises due to the interdependence between parameter estimates and affects the efficiency of M-estimators compared to classical maximum likelihood estimates. (D. Donoho and Montanari 2015)

  • Consider using the Heteroskedasticity Robust Fuller (HFUL) estimator when dealing with many instruments and heteroskedastic data, as it offers high asymptotic efficiency, ease of computation, and robustness to heteroskedasticity and many instruments, without suffering from the moments problem associated with Limited Information Maximum Likelihood (LIML) estimators. (Hausman et al. 2012)

>> Improving Product Variance Estimates with Exact Formulas
  • Use the exact formula for the variance of the product of two random variables instead of the commonly used approximate formula, especially when the coefficients of variation of the two variables are not small, because the approximate formula tends to underestimate the true variance. (L. A. Goodman 1960)
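
A worked check of the independent-variable case: the exact formula adds a \(\sigma_X^2\sigma_Y^2\) term that the first-order approximation drops, and the gap is visible when coefficients of variation are large (numbers illustrative).

```python
import numpy as np

mu_x, sd_x = 2.0, 1.5      # CV = 0.75, deliberately large
mu_y, sd_y = 3.0, 2.0

var_approx = mu_x**2 * sd_y**2 + mu_y**2 * sd_x**2   # common first-order formula
var_exact = var_approx + sd_x**2 * sd_y**2           # exact, for independent X and Y

rng = np.random.default_rng(14)
sim = (rng.normal(mu_x, sd_x, 1_000_000) * rng.normal(mu_y, sd_y, 1_000_000)).var()
print(f"approx: {var_approx:.1f}, exact: {var_exact:.1f}, simulated: {sim:.1f}")
```
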
>> Influential Subsets Impacting Predictions Using Divergence Measures
  • Consider the impact of influential subsets on the prediction of future observations, measured through Kullback-Leibler divergences between predictive densities, rather than solely focusing on estimating parameters in regression analysis. (“Modelling and Prediction Honoring Seymour Geisser” 1996)

> Improving Estimation & Inferences through Robust Methodologies

>> Robustifying Estimators under Model Misspecification and High Dimensionality
  • Consider using a heteroscedasticity-robust variance estimator, specifically the proposed alternative to the commonly used Eicker-White estimator, when working with high-dimensional linear models where the number of covariates is large relative to the sample size. (Cattaneo, Jansson, and Newey 2018)

  • Ensure the regularity condition (3) holds, meaning the sample covariance matrix converges to a nonnegative definite matrix, and use a penalization term \(\lambda_n\) that grows no faster than \(\sqrt{n}\), for \(\sqrt{n}\)-consistent estimation of the coefficients in a LASSO-type estimator. (Fu and Knight 2000)

  • Aim to construct estimators that achieve near-minimax risk while being stable against small errors in the model specification, specifically by ensuring that the estimator's risk does not increase significantly when the true parameter space is approximated by a nearby model space. (Birgé and Massart 1993)

>> Quasi-Likelihood Approaches for Complex Data Analysis
  • Use the proposed quasi-likelihood under the independence model criterion (QIC) for generalized estimating equations (GEE) to select the working correlation structure and covariates in your analyses, as it performs well in simulations and is simple to implement. (Pan 2001)
>> Optimizing Model Selection & Prior Information in Linear Regression
  • Carefully define your model and specify your prior information when analyzing time series data, as doing so allows for more accurate and interpretable inferences about the underlying signals. (“Maximum-Entropy and Bayesian Methods in Science and Engineering” 1988)

  • Understand that the F-test used in simple linear regression compares the fit of a model with an intercept and a slope to a model with only an intercept, and assesses whether the additional explanatory power provided by including the slope is significant enough to suggest that it is different from zero, given the observed data and assuming normal errors with constant variance. (Birnbaum 1973)

>> Correcting Chi-Squared Test Statistic for Clustered Data
  • Consider correcting the Pearson chi-squared test statistic for clustered data by accounting for the design effects (deffs) of individual cells and collapsed tables (marginals) to improve the accuracy of statistical inferences. (Kliemann 1987)

>> Optimizing Test Statistics for Enhanced Hypothesis Testing
  • Carefully select the rank of the pseudoinverse of the covariance matrix when constructing a Wald statistic for testing hypotheses about smooth components in an extended generalized additive model, as naive choices can lead to poor test performance. (S. N. Wood 2012)

  • Use a scaled Wald statistic along with an F-approximation to its sampling distribution, based on an adjusted estimator of the covariance matrix, to make accurate inferences about fixed effects in small sample settings. (Skene and Kenward 2010)

  • Consider combining dependent p-values using Fisher's statistic and its associated scaled chi-square approximation, particularly when dealing with normally distributed data with known or unknown variances, as this approach provides accurate results for a wide range of correlation structures and sample sizes. (Winkler 1981)

>> Robust Regression Models for Unique Data Characteristics
  • Consider using the Sparse Least Trimmed Squares (LTS) estimator when dealing with high-dimensional data containing outliers, as it combines the benefits of both the Lasso and LTS estimators by producing sparse models that are more robust to outliers compared to the Lasso alone. (Alfons, Croux, and Gelper 2013)
>> Optimal Transformations and Spatial Analysis for Enhanced Predictions
  • Consider using spatially indexed functional data analysis techniques when working with geophysical data, as this approach allows for the incorporation of both temporal and spatial dependencies in the data, leading to improved estimation and prediction accuracy compared to traditional methods. (Gromenko et al. 2012)

  • Consider using an iterative optimization procedure called Alternating Conditional Expectations (ACE) to estimate optimal transformations of your data, which can lead to improved statistical power and better understanding of relationships among variables. (Breiman and Friedman 1985)

>> Hierarchical Structures in Variable Selection & Coefficient Estimation
  • Consider imposing hierarchical structures among predictors in variable selection and coefficient estimation, known as structured variable selection and estimation, to improve the accuracy and interpretability of statistical models. (Yuan, Joseph, and Zou 2009)

> Addressing Challenges in Nonlinear Modeling and Estimation

>> Functional vs Structural Relationships in Data Analysis
  • Carefully consider whether your data represents a functional or structural relationship, as this choice impacts the mathematical treatment and inferences drawn about the population versus individual units being studied. (Kliemann 1987)
>> Additive Models for Balancing Flexibility and Interpretability
  • Carefully consider the use of additive models, which strike a balance between flexibility and interpretability, particularly when dealing with complex interactions in multi-variable settings. (Kliemann 1987)
>> Wild Bootstrap for Goodness-of-Fit Tests in Nonparametric Regression
  • Use the wild bootstrap method when conducting goodness-of-fit tests for nonparametric regression models, as it provides consistent estimates of the distribution of the test statistic under the null hypothesis and correctly mimics the conditional expectation of the response variable given the predictors. (Kliemann 1987)
>> Regularization of Reduced Form Estimators for Smooth Functions
  • Address the ill-posed inverse problem caused by noncontinuity of the estimator in the reduced form estimators by focusing on cases where the true structural function belongs to a compact set of sufficiently smooth functions and restricting the estimator to belong to this set. (Freund 1998)

>> Bayesian Approaches for Complex Models and Uncertainty Quantification
  • Carefully consider the grouping of parameters, latent variables, and missing observations in the Gibbs sampling algorithm, as highly correlated elements must be in the same updating group to achieve convergence in the MCMC sequence. (Asparouhov and Muthén 2014)

  • Consider using a flexible mixture prior model for estimating effect sizes and false discovery rates, which allows for both parametric and nonparametric modeling of the underlying distribution of effects, and can improve accuracy compared to traditional methods. (Muralidharan 2010)

  • Carefully consider invariance issues when conducting Bayesian linear regression analysis, particularly regarding the choice of prior distributions, as this can impact model selection and the shape of the predictive density. (Gardner, Royle, and Wegan 2009)

  • Consider using a Bayesian semi-parametric approach to the instrumental variable problem, specifically a Dirichlet process prior for the joint distribution of structural and instrumental variable equations errors, as it can improve efficiency compared to standard Bayesian or classical methods when errors are non-normal. (Conley et al. 2007)

  • Consider using Bayesian methods for density regression when dealing with complex relationships involving multiple predictors, specifically employing a nonparametric mixture of regression models with a weighted mixture of Dirichlet processes (WMDP) prior for the uncountable collection of mixture distributions. (Dunson, Pillai, and Park 2007)

  • Incorporate your uncertainty about the presence of outliers in your statistical models through a Bayesian approach, allowing for the possibility that each observation may come from either a “good” run or a “bad” run, and specifying prior probabilities accordingly. (Huelsenbeck and Rannala 2004)

  • Consider utilizing Bayesian approaches to modeling, specifically dynamic Bayesian extensions of the GLM, which offer advantages such as sequential analysis, closed-form updating and predictive distributions, separation of sampling model parameters from system model parameters, and computational simplicity compared to traditional GLM programs. (West, Harrison, and Migon 1985)

  • Consider using Stochastic Search Variable Selection (SSVS) to identify promising subsets of predictor variables in multiple regression models, as SSVS employs a hierarchical Bayes normal mixture model with latent variables to identify subsets with higher posterior probability, and utilizes Gibbs sampling to efficiently sample from the multinomial posterior distribution of possible subset choices. (NA?)

> Covariance Matrix Estimation and Model Selection

>> Covariance Matrix Visualization & Bias Correction
  • Exercise caution when using the inverse Wishart prior for covariance matrices, especially when the true variance is small relative to the prior mean, as it can lead to biased estimates of variance and correlation coefficients. (Alvarez, Niemi, and Simpson 2014)

  • Utilize a four-layered visualization method to explore the properties of covariance matrix distributions, which includes univariate histograms, bivariate scatterplots, three-dimensional scatterplots, and summary statistics like effective variance and dependence. (Banfield and Raftery 1993)

>> Nonparametric Autocovariance & Generalized SPDE Models
  • Consider using a generalized stochastic partial differential equation (SPDE) framework for modeling spatial data, as it allows for greater flexibility in specifying covariance structures compared to traditional approaches like the Matern model, while still providing computational efficiency and ease of extension to nonstationary settings. (Bolin and Lindgren 2011)

  • Consider using a nonparametric estimator of autocovariance that is itself an autocovariance, as this feature allows for simulation studies and reduces integrated squared error compared to other estimators that require stronger assumptions such as isotropy or monotonicity. (Hall and Patil 1994)

>> Non-Traditional Approaches for Smoothness and Variance Reduction
  • Consider using penalized regression techniques in selecting high-dimensional control variates to achieve significant variance reductions in your estimates, rather than relying solely on traditional least squares methods. (South et al. 2023)

  • Consider using Gaussian processes, which are collections of random variables fully specified by their mean and covariance functions, as a flexible tool for incorporating prior beliefs about the desired amount of smoothness in your statistical models. (Goldman and Sloan 1995)

>> Order-Invariant Bayesian Factor Analysis
  • Modify your prior specifications in Bayesian factor analysis to ensure order-invariance, allowing for consistent results regardless of variable ordering. (Leung and Drton 2014)
>> Model Selection
  • Use noninformative priors, specifically the Berger and Bernardo reference prior or the Jeffreys prior, when conducting objective Bayesian inference for the parameters of a multivariate random effects model generalized to elliptically contoured distributions. (Bodnar and Bodnar 2023)

  • Consider using the median probability model (MPM) for variable selection because it provides a fast and accurate approximation of the optimal model, especially in cases with correlated covariates, and can improve predictive accuracy compared to traditional methods. (Barbieri et al. 2021)

>> Shrinkage Priors and Angular Parameterizations for Longitudinal Data
  • Consider using a computationally feasible nonparametric prior to achieve posterior consistency of the factor dimensionality in high-dimensional sparse factor models; specifically, a spike-and-slab prior with the Indian buffet process (IBP) achieves the optimal posterior contraction rate for the covariance matrix when the factor dimensionality of the true covariance matrix is bounded. (Ohn and Kim 2022)

  • Consider using a continuous matrix shrinkage prior, specifically the matrix spike-and-slab LASSO prior, to achieve optimal posterior contraction rates for estimating both the entire covariance matrix and the principal subspace in sparse spiked covariance models under various loss functions. (Xie et al. 2022)

  • Consider using the angular parameterization of correlation matrices for longitudinal data, as it allows for direct interpretation of the angles as the inverse cosine of semi-partial correlations, leading to improved estimation and easier implementation of selection and shrinkage priors. (Ghosh, Mallick, and Pourahmadi 2021)

Time Series Analysis & Econometrics

> Fixed Effects Estimation Techniques for High Dimensional Data

>> High Dimensional K-way Fixed Effects Estimation Algorithms
  • Consider using the ppmlhdfe command for fast estimation of Poisson regression models with high-dimensional fixed effects (HDFE), especially when dealing with nonnegative data with potentially many zeros, as it enables robust estimation even in the presence of heteroskedasticity and can be implemented with similar ease as linear regression with HDFE. (Correia, Guimarães, and Zylkin 2020)
>> Fixed Effects Models for Hierarchical Longitudinal Data
  • Use an efficient algorithm based on the Frisch-Waugh-Lovell theorem to estimate two-way fixed effect models, particularly when dealing with large datasets containing high-dimensional fixed effects. This algorithm provides accurate estimates while conserving memory and computational resources, allowing for the estimation of multiple specifications and the consistent estimation of asymptotic variances using standard routines. (Somaini and Wolak 2016)
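
A minimal sketch of the Frisch-Waugh-Lovell idea behind such algorithms: sweep out the two sets of fixed effects by alternating demeaning, then run one short regression on the residualized variables. The balanced panel and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(15)
n_i, n_t = 500, 10
i = np.repeat(np.arange(n_i), n_t)             # worker ids (balanced panel)
t = np.tile(np.arange(n_t), n_i)               # time ids
alpha, gamma = rng.normal(size=n_i), rng.normal(size=n_t)
x = rng.normal(size=i.size) + 0.5 * alpha[i]   # regressor correlated with worker FE
y = 1.0 * x + alpha[i] + gamma[t] + rng.normal(size=i.size)

def demean_two_way(v, i, t, iters=50):
    v = v.copy()
    for _ in range(iters):                     # alternating projections on the FE spaces
        v -= (np.bincount(i, v) / np.bincount(i))[i]
        v -= (np.bincount(t, v) / np.bincount(t))[t]
    return v

y_dm, x_dm = demean_two_way(y, i, t), demean_two_way(x, i, t)
beta = (x_dm @ y_dm) / (x_dm @ x_dm)           # FWL: short OLS on residualized variables
print(f"two-way FE estimate of beta (true value 1.0): {beta:.3f}")
```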

  • Correct for limited mobility bias when estimating correlations in two-way fixed-effects linear regression models with large dummy encoded factors, as failing to do so can lead to substantial positive biases in variance estimates and negative biases in covariance estimates, potentially changing the sign of the correlation estimate. (Gaure 2014)

  • Consider using fixed-effects estimation methods for the three-way error-components model when dealing with longitudinal data involving multiple levels of hierarchy, such as worker-firm panels, to account for unobserved heterogeneity and avoid biased estimates caused by omitted variable bias. (Andrews et al. 2006)

  • Utilize a memory-efficient decomposition of the design matrix to reduce computational burden when estimating the three-way error component model with high numbers of observations and groups. (NA?)

> Fixed Effects Models in Panel Data Analysis

>> Fixed Effects Considerations in Three-Dimensional, Unbalanced Spatial Panels
  • Carefully distinguish between unbalanced spatial panel data (USPD) caused by genuine unbalancedness (GU) versus missing observations (MORO) in incomplete spatial panel data (ISPD), as failing to do so can lead to biased estimates and incorrect conclusions. (Meng and Yang 2022)

  • Carefully consider the choice between fixed effects and random effects models when analyzing panel data, taking into account the assumptions about the error components and your potential impact on the interpretation of the results. (Hoechle 2007)

  • Carefully consider the choice of fixed effects when working with three-dimensional panel data, as there are many possible combinations of individual and time effects, and the appropriate choice depends on the specific research question and data structure. (Laird and Ware 1982)

>> Dynamic Heterogeneous Panels with Sequential Convergence t-Bar Test
  • Consider using a \(t\)-bar test statistic when analyzing dynamic heterogeneous panels, as it is shown to converge in probability to a standard normal variate sequentially with \(T\) (time series dimension) approaching infinity, followed by \(N\) (cross sectional dimension) approaching infinity, and performs well in small samples compared to existing tests. (Omay and Ucar 2023)
>> Addressing Incidental Parameter Bias in Fixed Effects Estimation
  • Consider using nonlinear two-way fixed effects panel models with individual-specific slopes and nonparametrically specified link functions to better account for unobserved heterogeneity in causal relationships; these models can be estimated efficiently through a novel iterative Gauss-Seidel procedure. (D’Haultfœuille et al. 2022)

  • Carefully consider the potential endogeneity of your predictor variables and address it through the use of appropriate statistical methods, such as instrumental variables, when working with short panel data and time-varying linear transformation models with fixed effects. (Honoré, Muris, and Weidner 2021)

  • Use a bias-reduced fixed effects estimator to accurately predict individual effects in fixed effects panel probit models, particularly in short panels where traditional maximum likelihood estimators may produce poor estimates due to significant finite sample bias. (Schupp et al. 2017)

  • Correct for the incidental parameter problem in fixed-effects panel-data models with individual and time unobserved effects using analytical or jackknife bias corrections, especially when the panel dimensions are moderately large, to improve the accuracy of coefficient estimates and avoid severe bias. (Cruz-Gonzalez, Fernandez-Val, and Weidner 2016)

  • Be aware of the incidental parameter bias in fixed effects estimation of panel data models with large T, which arises due to the estimation of many parameters relative to the sample size, and can be corrected using appropriate bias correction methods. (M. Chen, Fernández-Val, and Weidner 2014)

  • Consider using a transformation of the model before applying the Least Squares Interactive Fixed Effects (LS-IFE) estimator to obtain consistent and asymptotically unbiased estimates in panel data models with interactive fixed effects and relatively small T. (“Panel Data Models with Interactive Fixed Effects” 2009)

  • Apply bias corrections to your two-step fixed effects panel data estimators to address the incidental parameters problem caused by the presence of time-invariant and time-varying heterogeneity. (Fernandez-Val and Vella 2007)

  • Do not avoid nonlinear fixed effects models out of concern about computational feasibility, as modern computing power allows for efficient estimation of these models even with large numbers of fixed effects. (NA?)

>> Addressing Incidental Parameter Problem in Fixed Effects Models
  • Consider using moment conditions free of fixed effects for inference in multiplicative error models, as these avoid incidental parameter bias and lead to more accurate standard errors compared to traditional pseudo-Poisson approaches. (Jochmans and Verardi 2020)

  • Carefully consider the implications of including multiple fixed effects in binary response panel data models, as traditional approaches may fail to adequately address the incidental parameter problem, leading to biased estimates. (Charbonneau 2014)

  • Consider using a Markov Chain Monte Carlo Conditional Maximum Likelihood (MCMC-CML) estimator for two-way fixed-effects logit models for dyadic data, as it addresses computational issues, is more efficient than existing pairwise CML estimators, and performs well in simulations and real-world applications. (Bartolucci and Nigro 2009)

>> Regularization Techniques for Improved Estimation in Panel Data
  • Combine the Landweber-Fridman regularization technique with the local-within two-way fixed effects estimator to address the ill-posed inverse problem in nonparametric instrumental regression while controlling for additive two-way fixed effects, leading to improved accuracy and flexibility in handling different panel model specifications. (Monte 2023)

  • Consider using a three-stage estimation procedure for generalized panel data transformation models with fixed effects and additive structures, which involves a regularized sieve method for initial estimation, followed by local polynomial estimation of a one-dimensional smooth function, and finally local linear estimation of the structural functions, to achieve optimal convergence rates, asymptotic normality, and oracle efficiency while avoiding the curse of dimensionality. (Jiang et al. 2021)

> Time Series Modeling Challenges & Improved Estimation Techniques

>> Autocorrelation, Unit Roots, & Truncation Lag Selection
  • Be aware of the non-standard distribution of the estimator and the corresponding t-test when working with autoregressive time series with a unit root, which has implications for statistical inference and hypothesis testing. (Dickey and Fuller 1979)

  • Be cautious when interpreting the autocorrelation functions of residuals from fitted autoregressive integrated moving average (ARIMA) models because they do not follow the expected distribution under the null hypothesis of no autocorrelation, and instead have a complex dependence structure influenced by the model parameters and estimation method. (Box and Pierce 1970)

  • Prioritize methods based on sequential tests over those based on information criteria when selecting the truncation lag in ARMA models for the Said-Dickey test, because the former demonstrate fewer size distortions and comparable power. (NA?)

>> Common Trend Tests via Roots of OLS Coefficient Matrix
  • Use statistical tests to determine the presence of common trends among multiple time series data, which involves analyzing the roots of the OLS coefficient matrix obtained by regressing the series onto its first lag. These tests can help identify the number of shared stochastic trends, autoregressive unit roots, or linearly independent cointegrating vectors in the data. (NA?)
>> Transformations & Advanced Estimators in Time Series Models
  • Carefully consider the implications of transforming multiplicative models into additive ones when introducing disturbances, as this can lead to inconsistencies in the estimation of expectations and potentially violate initial assumptions about the stochastic model. (Letnes and Kelly 2002)

  • Consider using symmetrically normalized GMM estimators instead of traditional GMM estimators when working with panel data models with sequential moment restrictions, as these estimators are asymptotically equivalent to standard GMM but have better finite sample performance, including lower bias and improved behavior when instruments are weak. (NA?)

>> Dimensionality Reduction & Functional Data Approaches in Time Series
  • Consider combining dynamic factor models (DFMs) with functional data analysis (FDA) to simultaneously estimate both functional and time series components in a natural way, while avoiding the intractability of high dimensionality associated with traditional vector autoregression models (VARs) and ensuring economic interpretability of the unobserved factors. (Hays, Shen, and Huang 2012)

  • Use principal components analysis to summarize a large number of predictors into a smaller number of factors when forecasting a single time series, especially when the data follow an approximate factor model, as this approach leads to asymptotically efficient forecasts. (Stock and Watson 2002)
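
A minimal diffusion-index sketch with scikit-learn: standardize the predictors, extract a few principal-component factors, and forecast with a short regression. The dimensions and factor structure are simulated.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(16)
T, N, k = 200, 100, 3
F = rng.normal(size=(T, k))                                  # latent factors
X = F @ rng.normal(size=(k, N)) + rng.normal(size=(T, N))    # many noisy predictors
y = F @ np.array([1.0, -0.5, 0.3]) + 0.5 * rng.normal(size=T)

X_std = (X - X.mean(0)) / X.std(0)
factors = PCA(n_components=k).fit_transform(X_std)           # estimated factors
model = LinearRegression().fit(factors[:-1], y[1:])          # one-step-ahead regression
forecast = model.predict(factors[-1:])
print(f"one-step-ahead forecast: {forecast[0]:.3f}")
```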

>> Nonparametric Approaches for Handling Incomplete Degradation Signals
  • Employ a nonparametric approach to modeling degradation processes when dealing with incomplete degradation signals, as it enables accurate estimation of the mean and covariance functions without making restrictive assumptions about their shapes. (Zhou, Serban, and Gebraeel 2011)
>> Regression Spectral Matching with Seasonality Consideration
  • Ensure that the spectra of both sides of your regression equations match, including accounting for potential seasonality effects, as failing to do so can lead to inconsistencies in model specifications. (An, Li, and Yu 2015)
>> Seasonality Accounting Methods Equivalence in IV Regression
  • Be aware that different methods of accounting for seasonality in time-series data, such as including seasonal dummy variables or using seasonally adjusted data, yield equivalent results when employing instrumental variable estimation in a linear model with stochastic regressors. (Dzhumashev and Tursunalieva 2019)
>> Seemingly Unrelated Regression Models for Correlated Error Terms
  • Consider using seemingly unrelated regression (SUR) models instead of ordinary least squares (OLS) estimates for separate regressions when you suspect that the error terms across your regression equations are correlated, as SUR models provide more efficient estimates in such cases. (Zhao et al. 2023)
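
A hand-rolled feasible-GLS sketch of a two-equation SUR system in NumPy/SciPy; packaged implementations exist (e.g. in the linearmodels library), and the data and error covariance here are simulated:

```python
# Stack the equations, estimate the cross-equation error covariance from
# equation-by-equation OLS residuals, then re-estimate the system by GLS.
import numpy as np
from scipy import linalg

rng = np.random.default_rng(2)
n = 300
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = np.column_stack([np.ones(n), rng.normal(size=n)])
E = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=n)
y1 = X1 @ [1.0, 2.0] + E[:, 0]
y2 = X2 @ [-1.0, 0.5] + E[:, 1]

X = linalg.block_diag(X1, X2)                    # stacked system
y = np.concatenate([y1, y2])
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)    # equation-by-equation OLS
U = (y - X @ b_ols).reshape(2, n).T              # residuals by equation
S = U.T @ U / n                                  # estimated error covariance
Om_inv = np.kron(np.linalg.inv(S), np.eye(n))    # inverse of S (x) I_n
b_sur = np.linalg.solve(X.T @ Om_inv @ X, X.T @ Om_inv @ y)
print("SUR estimates:", b_sur)
```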

> Unit Root Testing & Stationarity Assumptions in Econometrics

>> Improving Power & Accuracy in Unit Root Testing
  • Consider testing both the null hypothesis of a unit root and the null hypothesis of stationarity in econometric time series analysis, as traditional unit root tests may lack power against relevant alternatives and may lead to incorrect conclusions about the true nature of the data. (Xiao 2001)

> Bayesian Approaches for Improved Forecasting and Model Selection

>> Bayesian Prior Knowledge Integration for Macroeconomic Models
  • Utilize Bayesian methods to combine different sources of information, including both sample and nonsample information, in order to improve the precision of inferences in macroeconometric analyses. (“The Oxford Handbook of Bayesian Econometrics” 2011)

  • Consider adopting a Bayesian approach to the estimation of dynamic stochastic general equilibrium models, which allows for the incorporation of prior knowledge and the evaluation of model fit using Bayes factors that compare the weighted-average likelihoods of competing models. (DeJong, Ingram, and Whiteman 2000)

  • Incorporate prior knowledge into your forecasting models using Bayesian vector autoregression to improve accuracy and reduce overfitting. (Litterman 1986)
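
A minimal sketch of Minnesota-style shrinkage for a single autoregression, with the error variance held fixed for simplicity; the prior variances are illustrative:

```python
# A normal prior centers the first lag coefficient on 1 (a random walk) and
# shrinks higher lags toward 0; the conjugate posterior mean is ridge-like.
import numpy as np

rng = np.random.default_rng(3)
y = np.cumsum(rng.normal(size=300))              # a simulated persistent series

Y = y[2:]
X = np.column_stack([y[1:-1], y[:-2]])           # lags 1 and 2
b0 = np.array([1.0, 0.0])                        # prior mean: random walk
V0 = np.diag([0.2**2, 0.1**2])                   # tighter prior on longer lags
sig2 = 1.0                                       # error variance, fixed here

prec = X.T @ X / sig2 + np.linalg.inv(V0)
b_post = np.linalg.solve(prec, X.T @ Y / sig2 + np.linalg.inv(V0) @ b0)
print("posterior mean of AR coefficients:", b_post)
```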

  • Consider using Bayesian methods for estimating complex multivariate time series models in macroeconomics, as they provide a principled approach to handling over-parameterization through shrinkage and prior information, while also allowing for flexibility in model specification and efficient computation via Markov chain Monte Carlo methods. (NA?)

>> Bayesian Prior Choice for Accuracy and Computational Efficiency
  • Carefully consider your choice of prior distribution when conducting Bayesian inference on a sharp null hypothesis such as the unit root hypothesis, as improper priors like the uniform and Jeffreys prior can lead to biased results. (Schotman and Dijk 1991)
>> Regime Switching Models
  • Consider using a Monte Carlo (MC) method to estimate posterior moments of structural and reduced form parameters in an equation system, as it allows for flexible prior distributions and reduces computational complexity compared to traditional methods. (M. Li et al. 2023)

  • Consider using regime-switching models to capture the heterogeneity of fiscal policy effects across different stages of the business cycle, as such models reveal significant differences in the size of spending multipliers in recessions compared to expansions. (Auerbach and Gorodnichenko 2012)
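
A sketch of the simplest regime-switching specification, a two-regime switching mean, assuming the statsmodels package; the simulated "recession" and "expansion" blocks are illustrative:

```python
import numpy as np
from statsmodels.tsa.regime_switching.markov_regression import MarkovRegression

rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(-1.0, 1.0, 150),  # low-mean block
                    rng.normal(1.0, 1.0, 150)])  # high-mean block

res = MarkovRegression(y, k_regimes=2).fit()
print(res.params)                                # regime means, variance, transition probs
print(res.smoothed_marginal_probabilities[:5])   # P(regime | all data), first obs
```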

>> Bayesian Forecasting Techniques with Customized Model Evaluation
  • Use the warpDLM framework for analyzing time series of counts, as it allows for exact, coherent, and recursive updates for filtering, smoothing, and forecasting distributions, unlike other state space models for multivariate count data. (B. King and Kowal 2023)

  • Incorporate specific forecasting goals into your model evaluation criteria, allowing you to make informed decisions regarding model comparison, combination, and selection. (Lavine, Lindon, and West 2021)

> Advanced Techniques for Model Selection and Validation

>> Stylometry for Author Identification
  • Consider using stylometric analysis, which involves statistical methods applied to literary styles, to identify the authorship of disputed texts by examining subtle differences in word usage and grammatical constructions. (Stock and Trebbi 2003)
>> Instrumental Variable Approaches for Dynamic Models
  • Utilize Generalized Instrumental Variables (GIV) methods to identify dynamic policy functions in dynamic models with serially correlated unobservables, extending traditional two-step methods to account for econometric endogeneity of state variables. (Berry and Compiani 2022)

  • Consider using a two-stage least squares estimation procedure with internal instruments derived from a control function when dealing with production functions that include both fixed effects and time-varying productivity shocks. (Abito 2020)

>> Considerations for Nonparametric vs Parametric Approaches
  • Carefully consider the underlying data generating process when conducting two-stage analyses, particularly when using nonparametric efficiency estimates in the first stage, as failure to do so can lead to biased results and incorrect inferences. (Simar and Wilson 2007)
>> Model Uncertainty & Robust Decision Making
  • Carefully consider the potential for model misspecification and utilize decision theory frameworks, such as the variational preference approach, to account for ambiguity aversion and make defensible decisions in the face of model uncertainty. (L. P. Hansen and Marinacci 2016)

  • Consider the tradeoff between the amount of information gained and the strength of assumptions made when conducting econometric analysis, as more assumptions can lead to stronger conclusions but decrease the credibility of those inferences. (Tamer 2010)

>> Nonparametric Quantile Tests for Order Restrictions
  • Utilize a two-step approach to conduct a likelihood-ratio test for order restrictions on the conditional quantiles of Y given X, first constructing nonparametric estimators for the conditional quantiles and then developing a test based on your asymptotic distributions, while accounting for the presence of numerous nuisance parameters due to unknown equilibrium selection probabilities. (“Structural Econometric Models” 2013)

  • Employ a novel likelihood-ratio test for order restrictions on the conditional quantiles of Y given X, rather than relying solely on traditional methods such as OLS regression, when working with multiple equilibrium models where the MCS property holds. (Echenique and Komunjer 2013)

> Instrumental Variables Estimation Challenges & Improvements

>> Instrumental Variable Selection & Validation Strategies
  • Consider using the method of principal components to estimate factors that can serve as instrumental variables in regression models with endogenous regressors, as these factors can be both valid and more efficient than observed variables. (Bai and Ng 2010)

  • Carefully consider the trade-offs between the efficiency and exogeneity of instrumental variables, as even minor misspecifications can lead to substantial errors in statistical inferences, particularly in large samples. (Bartels 1991)

>> Improving IV estimation through better practices & methodologies
  • Avoid pretesting on the first-stage F-statistic in instrumental variable (IV) analysis, as it exacerbates bias and distorts inference. Instead, screen on the sign of the estimated first-stage coefficient, which reduces bias while maintaining conventional confidence interval coverage. (Angrist and Kolesár 2021)

  • Adopt a higher threshold for instrument strength, use robust tests like the Anderson-Rubin test instead of the t-test, and be cautious of the potential for spuriously inflated power to find false positive effects when using 2SLS in the presence of weak instruments. (Lee et al. 2020)
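
A NumPy/SciPy sketch of the Anderson-Rubin approach for a single endogenous regressor; the data-generating process, grid, and instrument strength are illustrative:

```python
# For each candidate beta0, regress y - x*beta0 on the instruments and test
# that the instrument coefficients are jointly zero; the 95% confidence set
# collects every beta0 not rejected. Valid even when instruments are weak.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, k = 500, 3
Z = rng.normal(size=(n, k))
Z -= Z.mean(axis=0)                              # demeaned instruments
u = rng.normal(size=n)
x = Z @ np.array([0.2, 0.15, 0.1]) + 0.8 * u + rng.normal(size=n)
y = 1.5 * x + u                                  # true structural effect = 1.5

P = Z @ np.linalg.solve(Z.T @ Z, Z.T)            # projection onto instruments

def ar_pvalue(beta0):
    e = y - x * beta0
    e = e - e.mean()                             # partial out the constant
    F = (e @ P @ e) / k / ((e @ e - e @ P @ e) / (n - k - 1))
    return stats.f.sf(F, k, n - k - 1)

accepted = [b for b in np.linspace(0.0, 3.0, 301) if ar_pvalue(b) > 0.05]
print(f"AR 95% confidence set roughly [{min(accepted):.2f}, {max(accepted):.2f}]")
```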

  • Prioritize the use of sharp instruments over merely strong ones, as sharp instruments enable accurate complier predictions and tight bounds on effects in identifiable subgroups, leading to improved causal inference. (Kennedy, Balakrishnan, and G’Sell 2018)

>> Instrumental Variable Bias Mitigation Strategies
  • Be cautious when interpreting Two-Stage Least Squares (TSLS) estimates with weak instruments, as they may be biased towards Ordinary Least Squares (OLS) estimates, and Limited Information Maximum Likelihood (LIML) estimates may be more reliable in such situations. (NA?)

  • Consider using jackknife instrumental variables estimators (JIVE1 and JIVE2) instead of traditional two-stage-least-squares (2SLS) or limited-information-maximum-likelihood (LIML) estimators when dealing with models that have more instruments than endogenous regressors. These new estimators address the issue of bias towards OLS estimates inherent in 2SLS estimates and offer improved finite-sample properties compared to 2SLS. (NA?)
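
A NumPy sketch of JIVE1 for one endogenous regressor; the many-instrument simulation is illustrative:

```python
# Leave-one-out first-stage fitted values, computed from the hat matrix,
# serve as the instrument, removing the own-observation term that pulls
# 2SLS toward OLS.
import numpy as np

def jive1(y, x, Z):
    H = Z @ np.linalg.solve(Z.T @ Z, Z.T)        # first-stage hat matrix
    h = np.diag(H)
    xhat = (H @ x - h * x) / (1.0 - h)           # leave-one-out fitted values
    return (xhat @ y) / (xhat @ x)               # IV estimate using xhat

rng = np.random.default_rng(6)
n = 400
Z = rng.normal(size=(n, 30))                     # many instruments
u = rng.normal(size=n)
x = 0.2 * Z[:, :5].sum(axis=1) + 0.7 * u + rng.normal(size=n)
y = 1.0 * x + u
print(f"JIVE1 estimate of beta = 1: {jive1(y, x, Z):.3f}")
```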

  • Be aware of the potential for biases and poor performance of the instrumental variable estimator in small samples due to the possibility of a bimodal distribution with infinite moments and a mean that may be closer to the biased OLS estimator than the true parameter value. (NA?)

>> Instrumental Variable Limitations & GMM Solutions
  • Use instrumental variables estimation via the Generalized Method of Moments (GMM) to address endogeneity issues in non-linear models such as logistic regression, which cannot be addressed using traditional two-stage least squares (2SLS) techniques. (Koladjo, Escolano, and Tubert-Bitter 2018)

  • Exercise caution when using instrumental variables (IV) estimation, as the benefits of IV methods in addressing endogeneity may be limited if the chosen instruments are not strongly correlated with the endogenous variable or are themselves partially endogenous. (Larcker and Rusticus 2008)

  • Be aware of the potential for large inconsistencies in instrumental variable (IV) estimates when the instruments explain little variation in the endogenous explanatory variable, even if the correlation between the instruments and the error in the structural equation is weak. Additionally, IV estimates can exhibit significant finite-sample bias in the same direction as ordinary least squares (OLS) estimates, particularly when the R2 between the instruments and the endogenous explanatory variable is low. (Bound, Jaeger, and Baker 1995)
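
A sketch of the basic first-stage diagnostic this warning implies, assuming the statsmodels package; the deliberately weak instruments are simulated:

```python
# Report the first-stage F-statistic and R^2 before trusting any IV estimate
# built on these instruments.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1000
Z = rng.normal(size=(n, 2))
x = 0.05 * Z[:, 0] + rng.normal(size=n)          # weak first stage by design

first = sm.OLS(x, sm.add_constant(Z)).fit()
print(f"first-stage F = {first.fvalue:.2f}, R^2 = {first.rsquared:.4f}")
```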

> Instrumental Variables for Nonlinear Models & Endogeneity

>> Nonlinear Treatment Response Models with Covariates
  • Carefully choose instrumental variables (IVs) that meet the “core conditions” of being associated with the exposure, independent of the unobserved factors driving selection, and having a causal effect on the exposure, in order to draw valid causal conclusions from observational studies. (P. S. Clarke and Windmeijer 2012)

  • Leverage the availability of instruments to identify a broad class of nonclassical nonlinear errors-in-variables models with continuously distributed variables using the eigenvalue-eigenfunction decomposition of an integral operator associated with specific joint probability densities, under the assumption that some measure of location of the distribution of the measurement error is equal to zero conditional on the value of the true regressors. (Hu and Schennach 2008)

  • Consider using semi-parametric instrumental variable (IV) estimators for nonlinear treatment response models with covariates, as this approach allows for nonparametric identification and construction of estimators that approximate treatment response functions even under functional form misspecification. (Abadie 2000)

>> Instrumental Variable Quantile Regression for Structural Estimation
  • Utilize instrumental variable quantile regression (IVQR) to estimate structural quantile functions in situations where endogenous variables are present, as IVQR allows for consistent and asymptotically normal estimates while being robust to weak or partial identification. (Chernozhukov, Hansen, and Jansson 2007)
>> Instrumental Variable Modeling with Discrete Outcomes
  • Consider using generalized instrumental variable (GIV) models instead of traditional IV models because GIV models allow for greater flexibility in handling unobserved heterogeneity, including multivariate and non-separable effects, and can lead to sharper identification of causal effects. (Chesher and Rosen 2014)

  • Use instrumental variable models for discrete outcomes carefully, as they are generally set identifying rather than point identifying, meaning that multiple structural functions could explain the same observed data. (“Instrumental Variable Models for Discrete Outcomes” 2010)

>> Nonlinear IV Regression with Dual Formulations & Reproducing Kernel Hilbert Spaces
  • Consider using kernel instrumental variable regression (KIV) as a nonparametric generalization of traditional two-stage least squares (2SLS) for estimating causal effects in situations where the relationships among variables may be nonlinear, as KIV models relationships among variables as nonlinear functions in reproducing kernel Hilbert spaces (RKHSs) and has been proven to converge at the minimax optimal rate for unconfounded, single-stage RKHS regression under certain conditions. (Singh, Sahani, and Gretton 2019)
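
A heavily simplified two-stage analogue of this idea using scikit-learn's KernelRidge; this sketches the flavor of nonparametric IV only, not the full KIV estimator, which regularizes carefully in the RKHS at both stages:

```python
# Kernel-ridge regress x on the instrument z, then kernel-ridge regress y on
# the first-stage fitted values. Data-generating process is illustrative.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(8)
n = 1000
z = rng.uniform(-2, 2, size=(n, 1))              # instrument
u = rng.normal(size=n)                           # unobserved confounder
x = np.sin(z[:, 0]) + 0.5 * u + 0.3 * rng.normal(size=n)
y = np.cos(x) + u                                # nonlinear structural function

stage1 = KernelRidge(kernel="rbf", alpha=0.1).fit(z, x)
xhat = stage1.predict(z).reshape(-1, 1)          # first-stage fitted values
stage2 = KernelRidge(kernel="rbf", alpha=0.1).fit(xhat, y)
```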

  • Consider using a dual formulation for non-linear instrumental variable regression, which avoids explicit estimation of the conditional expectation or probability and instead frames the problem as a convex-concave saddle-point optimization problem. (Muandet et al. 2019)

Monotonicity and Comparative Statics in Economic Models

> Monotonicity and Comparative Statics Techniques Expansion

>> Monotonicity Assumptions and Alternatives in Complex Systems
  • Carefully distinguish between different types of monotonicity (i.e., type A and type B problems) and consider the appropriate single crossing conditions and quasisupermodularity assumptions required for each type to ensure valid inferences. (Kukushkin 2011)
>> Monotonicity Analysis via Stronger Single Crossing Property
  • Consider using directional monotone comparative statics in function spaces, which involves analyzing the behavior of a system based on the directional relationship between its components, rather than just the overall magnitude of change. This approach allows for more nuanced analysis and can provide insights into the underlying mechanisms driving the system. (Paul and Sabarwal 2023)

  • Consider combining the concepts of concavity and supermodularity when analyzing comparative statics problems in vector spaces, as this approach allows for stronger assumptions about the objective function and weaker constraints on the constraint sets compared to traditional lattice programming techniques. (Acemoglu and Jensen 2013)

  • Utilize ordinal conditions rather than relying solely on traditional assumptions like smoothness, linearity, or convexity when conducting comparative statics analyses, as demonstrated through the development of a theory and methods for comparative statics analysis based on ordinal conditions alone. (Mitsilegas 2012)

  • Look for conditions under which an additively separable objective function satisfies the Milgrom-Shannon single crossing property, specifically when one component allows a monotone concave transformation with increasing differences and is nondecreasing in the parameter variable, and the other component exhibits increasing differences and is nonincreasing in the choice variable. (“Monotone Comparative Statics with Separable Objective Functions” 2010)

  • Consider using the interval dominance order (IDO) instead of the single crossing property (SCP) when analyzing comparative statics, as IDO is a weaker and more flexible condition that still ensures monotonicity of arg max_{x∈X} f_s(x) in s, and applies in settings where SCP fails. (“Comparative Statics, Informativeness, and the Interval Dominance Order” 2009)

  • Utilize a stronger differential version of the single crossing property and argue from first-order conditions to establish strict monotonicity in comparative statics analyses. (Edlin and Shannon 1998)

>> Monotonicity and Comparative Statics: Extensions and Applications
  • Consider using supermodular games, which are based on monotone comparative statics and supermodular optimization, to analyze non-cooperative situations where an increase in one player's strategy leads to increases in other players' strategies. This approach can provide insights into the existence and stability of equilibria, and avoid assumptions associated with traditional comparative statics methods. (“Game Theory” 2010)

  • Consider using monotone comparative statics methods instead of the implicit-function theorem for comparative static analysis, as they offer greater flexibility and ease of use while requiring fewer assumptions, such as differentiability, concavity, and convexity. (Tremblay and Tremblay 2010)

  • Consider using lattice programming techniques and flexible set orders to study comparative statics problems in constrained optimization, particularly when dealing with non-smooth, non-interior, non-convex, or non-unique solutions. (Quah 2007)

  • Consider reparameterizing your optimization problems to achieve monotone comparative statics, which can be done by identifying a vector field indicating the direction of monotonicity in the parameter space and then transforming the problem using this vector field. (Strulovici and Weber 2007)

  • Utilize the concept of “monotone comparative statics” to analyze how changes in exogenous parameters impact endogenous outcomes in models, particularly when dealing with optimization problems. This approach provides ordinal answers regarding whether an increase in a parameter leads to an increase or decrease in the decision variable, while requiring fewer assumptions compared to alternative methods like the implicit function theorem. (NA?)

>> Monotone Comparative Statics vs Calculus-Based Approaches
  • Consider using the monotone comparative statics approach when studying optimization problems indexed by parameters, as it offers several advantages such as making weaker assumptions, providing more transparency, and yielding more concise arguments compared to calculus-based methods. (Ruscitti and Dubey 2015)

>> Monotonicity Results for Submodular Function Maximization
  • Consider using the proposed monotone comparative statics results for maximizers of submodular functions, rather than relying solely on the classical theory for supermodular functions, especially when studying complex systems involving substitutable goods or services. (Galichon, Hsieh, and Sylvestre 2023)

  • Consider using the concept of unified gross substitutes when analyzing correspondences, as it guarantees inverse isotonicity under certain conditions, which can lead to useful insights about the structure of solutions. (Galichon, Samuelson, and Vernet 2022)

>> Aggregation Games, Strategic Substitute Effects, Supermodular Optimization
  • Consider using the theory of supermodular optimization and games to analyze the comparative statics of strategic market games, particularly when the model involves only one good on each side of the market, as this allows for a straightforward analysis based on familiar notions from classical microeconomic theory such as normality and gross substitutes of goods. (Lahiri 2011)

  • Look for conditions under which the indirect strategic substitute effect does not dominate the direct parameter effect in games with strategic substitutes, as this ensures the existence of a larger equilibrium at a higher parameter value. (Roy and Sabarwal 2010)

  • Leverage the aggregative structure of games, where each player's payoff depends on her own actions and some aggregate of all players' actions, to derive robust and general comparative static results under considerably weaker conditions than traditional approaches. (Acemoglu and Jensen 2009)

>> Monotonicity Theorem & Indirect Effects Analysis in Complementarities
  • Carefully consider the potential for indirect effects arising from an endogenous competitive environment when analyzing firm-level complementarities, as these indirect effects can refine and potentially reverse the comparative statics obtained in traditional studies of firm-level complementarities in organizational economics. (Gershkov et al. 2021)

  • Consider leveraging the power of the monotonicity theorem of Topkis (1978) to analyze complex systems involving multiple interacting components, especially when those interactions exhibit complementarities and the system can be represented as a lattice. (Arkolakis 2010)
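
A small numerical illustration of Topkis's monotonicity theorem on a grid; the objective is a toy function with positive cross-partial, not one from the cited paper:

```python
# When f(x, t) has increasing differences in (x, t), the maximizer over x is
# nondecreasing in t.
import numpy as np

x = np.linspace(0.0, 1.0, 201)
f = lambda x, t: -(x - t) ** 2 + 0.5 * x * t     # cross-partial = 2.5 > 0

maximizers = [x[np.argmax(f(x, t))] for t in np.linspace(0.0, 1.0, 11)]
assert all(np.diff(maximizers) >= -1e-12)        # monotone comparative statics
print(maximizers)
```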

>> Strategic Complementarity Analysis with Novel Approaches
  • Consider using iterative fixed-point comparative statics to analyze the effects of parameter changes in economic systems, particularly those involving strategic complementarities, as this approach allows for unstable equilibria, divergent learning processes, and unordered perturbations. (Balbus et al. 2022)

  • Consider using the provided sufficient condition and algorithm to identify a minimum threshold parameter value at which every old equilibrium becomes strictly smaller than every new equilibrium, allowing for stronger conclusions about comparative statics in games of strategic complementarity. (Chambers, Echenique, and Saito 2016)

  • Incorporate the Correspondence Principle (CP) when conducting comparative statics analyses, as doing so allows them to obtain unambiguous conclusions even without assuming convexity or smoothness of the maps or spaces involved. (2015)

> Monotonicity and Supermodularity in Decision Theory

>> Monotonicity vs Non-monotonicity in Payoff Functions & Policy Analysis
  • Focus on the concavity or convexity of policy functions when studying distributional comparative statics, as long as you understand the conditions under which these policy functions will be concave or convex. (Jensen 2017)

  • Consider relaxing the common assumption of monotonic payoffs in risk analysis, as doing so allows for a broader range of plausible scenarios and leads to more robust conclusions about risk preferences and decision-making. (Hau 2001)

  • Consider using increasing decision rules when dealing with supermodular payoff functions, as these rules generate higher expected values compared to non-increasing rules. (Anscombe and Aumann 1963)

>> Monotonicity and Weak Assumptions in Stochastic Optimization
  • Consider using a novel approach for comparative statics of risk changes that relies on weaker assumptions and instead uses the ranking of simple lottery pairs to establish the comparative statics of risk changes. (Sousa et al. 2019)

  • Consider using log-supermodularity as a tool for deriving comparative statics predictions, particularly when dealing with uncertain environments, as it provides a strong condition for monotonicity that is preserved under integration. (Athey 2002)

  • Utilize the underlying monotonicity structure of the principal-agent problem to conduct comparative statics analysis on the set of optima, requiring only mild assumptions, rather than relying solely on the first-order approach or the linear contracts approach, which impose stronger restrictions. (Jans 1989)

  • Carefully consider the tradeoffs between assumptions about payoff functions and probability distributions when making comparative statics predictions in stochastic optimization problems, as single crossing properties and log-supermodularity are crucial concepts for ensuring the validity of such predictions. (NA?)

  • Consider using the Left-Side Monotone Likelihood Ratio (L-MLR) order instead of the commonly used Monotone Probability Ratio (MPR) order when analyzing non-linear decision models under uncertainty. This is because the L-MLR order provides a more fine-grained comparison of probability distributions, allowing for more precise inferences while still being compatible with the MPR order. (NA?)

>> Monotone Risk Preferences with TP2 Concept
  • Consider using the Totally Positive of Order 2 (TP2) concept to develop a bull and bear market measure and associated ordering between representative investors in markets based on their marginal rate of substitution between equilibrium consumption allocations among possible states, which combines and generalizes the likelihood-ratio-dominance relation between probability prospects of state occurrence and the Arrow-Pratt ordering of risk aversion in expected utility settings. (Horn 2011)
>> Robust Empirical Predictions via Monotone Comparative Statics
  • Utilize the monotone comparative statics approach to generate empirical predictions that are robust to model misspecification and can be tested using ordinal information and nonparametric methods, thereby avoiding reliance on potentially unfounded technical assumptions. (Ashworth and Mesquita 2005)
>> Persuasive Mechanisms & Informativeness in Signaling Games
  • Examine the “crater property” of interim payoffs to determine if coarse-convexity shifts lead to more informative signals being chosen by senders, regardless of the prior. (Curello and Sinander 2022)

  • Consider using persuasion mechanisms, specifically those that manipulate receivers' beliefs while allowing senders to fully commit to disclosing all they know and limiting their private information, to achieve desired outcomes in scenarios where sender and receiver preferences differ. (Kamenica and Gentzkow 2009)

> Monotonicity and Non-Standard Constraints in Comparative Statics

>> Relaxing Antisymmetry Assumption and Using Alternate Set Orders
  • Carefully consider the definitions of your fundamental concepts and ensure they align with established terminology, as well as thoroughly check your proofs to avoid missing critical steps. (Kotani et al. 2023)

  • Consider using the i-directional set order, which is a reformulation of the Ci-flexible set order, to study monotone comparative statics in situations where traditional lattice-based approaches are not applicable due to non-standard constraint sets. (Barthel and Sabarwal 2017)

  • Consider relaxing the assumption of antisymmetry in binary relations when conducting comparative statics analysis, allowing for a more flexible framework that can handle constrained optimization problems with nonlinear constraints. (Shirai 2010)

  • Consider relaxing the assumption of antisymmetry in binary relations when conducting comparative statics analysis, allowing for a more flexible framework that can handle constrained optimization problems with nonlinear constraints. (“Knowledge-Based Intelligent Information and Engineering Systems” 2004)

>> Monotonic Preferences with Argmaximum in Ordinal Statistics
  • Use the concept of Argmaximum, which refers to the set of points where a maximum is achieved, rather than focusing solely on the value of the maximum, because Argmaximum is invariant under strictly increasing monotonic transformations of the underlying preference relation, allowing for more robust and generalizable conclusions. (Neyapti 2010)

  • Use the concept of Argmaximum, which is the set of points where a preference relation is maximized over a subset of a universal lattice, to study monotonic behavior in ordinal comparative statistics, particularly focusing on quasi-supermodular preference relations and sublattices. (NA?)

Statistical Techniques Enhance Environmental Research Design

> Statistical Bias Correction & Variable Selection in Ecology

>> Baselining Heavy Metal Concentrations in Estuarine Systems
  • Establish a reliable baseline concentration for heavy metals in estuaries, accounting for various confounding factors like species composition, sampling conditions, season, age of organism, and body part sampled, to enable accurate assessments of pollution load and biological quality indices. (Tomlinson et al. 1980)
>> Correcting Transformation Bias in Log-Transformed Regression Models
  • Be aware of and correct for the bias introduced by the transformation-inversion process when using log-transformed regression models for hydrologic prediction, especially when applying the power function (or sediment rating curve method) to sediment prediction, as significant bias was observed in this context. (Koch and Smillie 1986)

  • Be aware of and correct for the prediction bias that arises when using least-squares, linear regression of log-transformed variables to derive power and exponential models in environmental chemistry and toxicology. (NA?)
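
A sketch of the standard corrections for this back-transformation bias, assuming the statsmodels package; the power-law data are simulated:

```python
# Fit the power model in logs, then multiply naive exp() predictions either
# by exp(s^2/2) (normal errors) or by Duan's nonparametric smearing factor.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
x = rng.uniform(1.0, 10.0, 500)
y = 2.0 * x ** 1.5 * np.exp(rng.normal(scale=0.5, size=500))  # power-law data

X = sm.add_constant(np.log(x))
fit = sm.OLS(np.log(y), X).fit()
naive = np.exp(fit.predict(X))                   # biased low for E[Y | x]
smear = np.mean(np.exp(fit.resid))               # Duan's smearing estimator
corrected = naive * smear
print(f"smearing factor = {smear:.3f} (parametric: {np.exp(fit.mse_resid / 2):.3f})")
```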

> Statistical Models Optimize Water Safety Assessment

>> GPS Data Facilitates Arsenic Exposure Reduction via Well Switching
  • Collect precise location data using GPS technology to enable accurate distance calculations between safe and unsafe wells, allowing for effective promotion of well-switching as a feasible short-term solution to reduce arsenic exposure in affected communities. (Y. Chen et al. 2007)

> Addressing Spatial, Temporal, Confounding, and Unobservable Factors

>> Spatio-Temporal Models for Particulate Matter Exposure Estimation
  • Incorporate both spatial and temporal dimensions in your models to accurately estimate exposure to particulate matter, especially when dealing with large datasets and complex spatial domains. (Paciorek et al. 2009)
>> Time-Varying Effects of Unobservables in Policy Evaluation
  • Carefully consider the potential for time-varying effects of unobservables when evaluating policy interventions, and recognize that the synthetic control method can be a useful tool for addressing this issue. (Bueno and Valente 2019)

> Machine Learning Mitigates Weather Confounds in Air Pollution Studies

>> Machine Learning Removes Weather Effects for Policy Analysis
  • Employ advanced statistical techniques like machine learning algorithms to remove the confounding effects of weather variations when analyzing the impact of policy interventions on air pollution levels. (Cole, Elliott, and Liu 2020)

> Bias from Log Transformation in Model Performance Metrics

>> Bias in KGE Calculation Due to Log Transformation
  • Avoid using log-transformed flows when calculating the Kling-Gupta Efficiency (KGE) or its modified version (KGE′), because doing so introduces numerical flaws and leads to biased evaluation of model performance. (Santos, Thirel, and Perrin 2018)
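
For reference, a minimal NumPy implementation of KGE on untransformed flows:

```python
# KGE = 1 - sqrt((r - 1)^2 + (alpha - 1)^2 + (beta - 1)^2), with r the
# correlation, alpha the ratio of standard deviations, beta the ratio of means.
import numpy as np

def kge(sim, obs):
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```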

Statistical Techniques Enhance Forecasting and Analysis

> Statistical Models Optimize Energy Demand & Supply Predictions

>> Statistical Approaches Boost Electric Load Forecasting
  • Incorporate causal inference techniques to explore the causal relationships between external factors and load, which increases the interpretability and robustness of the model, ultimately leading to more accurate load forecasting. (Yang and Shi 2023)
>> Statistical Modeling for Electricity Spot Prices and Market Design
  • Account for the unique features of electricity spot prices, specifically their dependence on the merit order curve, when developing statistical models for forecasting purposes. (Liebl 2013)

> Statistical Solutions for Policy, Energy Management, and Grid Optimization

>> Statistical Humility in Evidence-Based Policy
  • Adopt a more humble approach to evidence-based policy, avoiding excessive generalizations and instead focusing on improving specific interventions, due to the limitations of the credibility revolution toolkit in addressing complex real-world challenges. (Ankel-Peters and Schmidt 2023)

> Addressing Left Truncation & Right Censoring

>> Addressing Data Issues for Valid Failure Time Inferences
  • Incorporate heterogeneous operating conditions into your analyses of product failure times by treating them as a “frailty” - an unobservable random variable that modifies the baseline failure rate function of an individual - in order to accurately predict field failures and plan accelerated life tests. (Ye, Hong, and Xie 2013)

  • Carefully consider the impact of left truncation and right censoring on your data, particularly when working with failure time data, and utilize appropriate statistical methods to address these issues in order to obtain valid inferences. (Hong, Meeker, and McCalley 2009)

Statistical Analysis Tools & Techniques

> Log-Odds Transformation for Binary Outcome Modeling

>> Log-Odds vs Probabilities in Binary Outcome Models
  • Consider using log-odds instead of raw probabilities when modeling binary outcomes, as log-odds provide a more interpretable and mathematically convenient scale for analysis. (“Abstract,” n.d.)
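
A minimal sketch of the transform in question:

```python
# Log-odds map (0, 1) onto the whole real line, so additive models on this
# scale never produce impossible probabilities.
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))                 # probability -> log-odds

def inv_logit(z):
    return 1.0 / (1.0 + np.exp(-z))              # log-odds -> probability

print(logit(0.75), inv_logit(logit(0.75)))       # 1.0986..., 0.75
```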

> Statistical Software Packages & Best Practices

>> Stata Data Management
  • Carefully consider the functional form of your regression models, as nonlinear models require extensive post-estimation analysis to accurately interpret the relationships between variables. (Williams 2016)

  • Understand the importance of efficient data storage and handling when conducting statistical analysis using Stata, including knowledge of data types, compression commands, date and time handling, and navigational and organizational tools. (N. J. Cox 2002)

>> Interpretable Statistics via Simulation-based Inference & User-Friendly Tools
  • Consider using the Zelig software package for estimating and interpreting a wide range of statistical models due to its user-friendly interface, unified command syntax, and integration of various tools such as bootstrapping, nonparametric matching, and multiple imputation. (Soukho et al. 2019)

  • Focus on estimating quantities of direct scientific interest instead of merely reporting model parameters, which can be achieved by simulating parameters and computing simulations of the dependent variable based on the estimated model. (Imai, King, and Lau 2008)

  • Consider using the Clarify software, which employs Monte Carlo simulation to transform raw statistical outputs into meaningful results without changing underlying assumptions or requiring new statistical models. (Tomz, Wittenberg, and King 2003)
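
A sketch of the simulation logic behind Clarify, assuming the statsmodels package; the logit model and covariate profile are illustrative:

```python
# Draw parameters from their estimated sampling distribution, push each draw
# through the model, and summarize the quantity of interest directly.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
X = sm.add_constant(rng.normal(size=(500, 1)))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ np.array([0.5, 1.0]))))
fit = sm.Logit(y, X).fit(disp=0)

draws = rng.multivariate_normal(fit.params, fit.cov_params(), size=5000)
x0 = np.array([1.0, 1.0])                        # covariate profile of interest
p = 1.0 / (1.0 + np.exp(-draws @ x0))            # simulated predicted probability
lo, hi = np.percentile(p, [2.5, 97.5])
print(f"Pr(y = 1 | x = 1) = {p.mean():.3f} [{lo:.3f}, {hi:.3f}]")
```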

>> Bayesian Data Analysis with R2WinBUGS & BUGS
  • Consider utilizing the BUGS software package for implementing Gibbs sampling in your statistical analyses, as it enables automatic construction and sampling from full conditional distributions, making it easier to analyze complex datasets compared to traditional programming approaches. (Rue, Martino, and Chopin 2009)

  • Consider utilizing the R2WinBUGS package to facilitate seamless integration of WinBUGS and R for efficient and comprehensive Bayesian data analysis, allowing for automatic generation of data and scripts in a format readable by WinBUGS, batch processing, and easy importation of results back into R for further analysis and visualization. (Sturtz, Ligges, and Gelman 2005)

>> R Commander as User-Friendly Data Analysis Tool
  • Consider using R Commander, a free and user-friendly GUI for R, to facilitate your data analysis needs, particularly if you are a beginner or prefer a menu-driven approach over writing code directly, while still having access to advanced statistical methods and visualization tools through its various plugins. (Kilkenny et al. 2009)

> Statistical Methodologies for Complex Data Types

>> Determinantal Point Processes for Spatial Pattern Modeling
  • Consider using determinantal point processes (DPPs) for modeling complex spatial point patterns, as they offer flexibility in incorporating spatial trends, covariate dependencies, and intricate interpoint interactions. These models can be fit using various techniques such as Waagepetersen's two-step procedure, minimum contrast, composite likelihood, or Palm likelihood, and implemented easily in R using the spatstat package. (Baddeley and Turner 2005)
>> Intensive Longitudinal Data Analysis using dynr Package
  • Consider utilizing the dynr package when working with intensive longitudinal data that exhibit complex patterns, including regime switches, due to its ability to efficiently estimate a wide array of linear and nonlinear discrete- and continuous-time models under the assumption of linear Gaussian measurement functions. (Pritikin, Rappaport, and Neale 2017)
>> Detrending Method Selection for Tree-Ring Data Analysis
  • Carefully consider the choice of detrending method when analyzing tree-ring data, as different methods may lead to varying estimates of low-frequency variability and subsequent standardization of the data. (Bunn 2008)
>> Statistical Test Selection for Rare/Common Species Ecology
  • Carefully consider your choice of statistical tests and thresholds when analyzing ecological data, particularly when dealing with rare or common species, and ensure that your methods are appropriate for the data type and size. (Baker and King 2010)

> Bayesian Inference Optimization via Automatic Differentiation

>> Automated Variational Inference for Complex Models
  • Consider automating the process of reparameterizing probabilistic programs to improve the efficiency and accuracy of inference algorithms, as demonstrated by the authors' successful application of this technique to various models. (Gorinova, Moore, and Hoffman 2019)

  • Consider using a comprehensive compilation scheme to translate Stan programs into generative probabilistic programming languages (such as Pyro) in order to take advantage of the rich set of existing Stan models for testing, benchmarking, or experimenting with new features or inference techniques. (Cusumano-Towner et al. 2019)

  • Utilize automatic differentiation variational inference (ADVI) to enable rapid iteration and exploration of complex probabilistic models, allowing for efficient and accurate estimation of model parameters without requiring manual derivation of algorithms. (Abadi et al. 2016)

  • Consider using the Stan software package for Bayesian inference and optimization due to its flexibility, efficiency, and ability to handle complex models, although it may not be suitable for models involving discrete parameters. (Gelman, Lee, and Guo 2015)

  • Consider using automatic differentiation variational inference (ADVI) for scalable and accurate Bayesian inference, especially when dealing with complex models and large datasets, as ADVI enables automatic determination of an appropriate variational family and optimization of the corresponding variational objective without requiring manual model-specific calculations. (Kingma and Welling 2013)
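
A sketch of this workflow assuming the cmdstanpy interface and a working CmdStan installation; the toy normal model and data are illustrative:

```python
# Fit a model by ADVI for fast exploration; the same CmdStanModel object can
# later be fit by NUTS (model.sample) for final inference.
from cmdstanpy import CmdStanModel

stan_code = """
data { int<lower=0> N; vector[N] y; }
parameters { real mu; real<lower=0> sigma; }
model { y ~ normal(mu, sigma); }
"""
with open("normal.stan", "w") as f:
    f.write(stan_code)

model = CmdStanModel(stan_file="normal.stan")
vb = model.variational(data={"N": 3, "y": [0.1, 0.4, -0.2]})  # ADVI fit
print(vb.variational_params_dict)
```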

>> GPU Acceleration for Efficient Bayesian Computations
  • Consider leveraging GPU-accelerated computation for efficient and accurate Bayesian inference, specifically through the use of the OpenCL framework integrated with Stan, which enables significant speedups for complex models involving matrix algebra and likelihood functions. (Češnovar et al. 2019)
>> Shape Semantics & Complex Modeling in TensorFlow Probability
  • Utilize the flexibility of TensorFlow Probability JointDistributions to specify complex probabilistic models using either imperative or declarative styles, leveraging the shared interface for inference algorithms and the ability to easily switch between different model specifications. (Piponi, Moore, and Dillon 2020)

  • Carefully consider the shape semantics of your data when working with probability distributions, particularly distinguishing between sample, batch, and event shapes, to ensure efficient and accurate analysis. (Dillon et al. 2017)
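
A minimal illustration of the three shapes, assuming tensorflow and tensorflow_probability:

```python
# A batch of four 3-dimensional Gaussians: batch_shape [4], event_shape [3].
import tensorflow as tf
import tensorflow_probability as tfp

d = tfp.distributions.MultivariateNormalDiag(loc=tf.zeros([4, 3]))
print(d.batch_shape, d.event_shape)              # [4] and [3]
print(d.sample(2).shape)                         # sample x batch x event: [2, 4, 3]
```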

> Python libraries for specialized statistical analyses

>> Python-based Speech Analysis using Parselmouth Library
  • Consider leveraging the power of Python and the Parselmouth library to improve efficiency and expand capabilities when conducting speech analysis, while still acknowledging the importance of citing both Praat and Parselmouth in scientific publications. (NA?)

> Optimizing Data Management & Processing for Large Datasets

>> Optimizing Performance with Realistic Data & Domain-Specific Algorithms
  • Be aware of the significant discrepancies between synthetic benchmark data and real-world RDF datasets, and thus should strive to incorporate more realistic data in your evaluations. (Duan et al. 2011)
>> Leveraging SQL for Efficient Data Management & Analysis
  • Consider leveraging Structured Query Language (SQL) for efficient data management and analysis, particularly when dealing with large datasets, as it allows for easy creation, modification, deletion, and retrieval of data, regardless of its size or structure. (Ikromovna 2023)

> Geochemical Data Management & Analysis with GCDkit

>> Geochemical data analysis using GCDkit
  • Consider utilizing the Geochemical Data Toolkit (GCDkit) for managing and analyzing large geochemical datasets, as it provides a user-friendly interface, powerful statistical and graphical functions, and eliminates routine and tedious operations while ensuring accurate and reproducible results. (NA?)

> Cross Validation Strategy with LETOR 4.0 Datasets

>> 5-Fold Cross Validation Approach Using LETOR 4.0 Dataset
  • Adopt a 5-fold cross validation strategy when using the LETOR 4.0 datasets, which involves dividing the data into five equal parts, using three parts for training, one part for validation during model selection, and the remaining part as the held-out test set, rotating these roles across the five folds. (Qin and Liu 2013)
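
A sketch of the rotating partition this implies; the 1000 dummy indices and the validation-part assignment are illustrative:

```python
# Five equal parts: per fold, one part tests, one validates, three train.
import numpy as np

parts = np.array_split(np.arange(1000), 5)       # indices of the five parts
for k in range(5):
    test = parts[k]
    valid = parts[(k + 1) % 5]
    train = np.concatenate([parts[j] for j in range(5)
                            if j not in (k, (k + 1) % 5)])
    # fit on train, tune on valid, report the metric on test for this fold
```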

Statistical Methodologies for Improving Research Design

> Addressing p-hacking & promoting robust causal inferences

>> Preventing False Discoveries via Multiple Hypothesis Testing Adjustment
  • Be aware of and address the issue of p-hacking, which refers to the selective reporting of statistically significant results, often achieved through repeated data analysis until a desired outcome is obtained. The paper demonstrates that this practice can lead to false discoveries and offers solutions such as multiple hypothesis testing adjustments and full disclosure of all statistical tests performed. (W. Li, n.d.)
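
A sketch of one remedy named above, assuming the statsmodels package: adjust across all tests actually performed, here with Benjamini-Hochberg FDR control (the p-values are simulated stand-ins):

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(11)
pvals = rng.uniform(size=50)                     # stand-in for 50 reported tests
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} of {len(pvals)} tests survive the adjustment")
```
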
>> Randomized Controlled Trials for Establishing Causality
  • Consider using randomized controlled trials to establish causality, particularly when studying complex phenomena like the impact of e-filing on tax compliance, as it helps overcome endogeneity bias and provides robust estimates of the treatment effect. (Okunogbe and Pouliquen 2022)
>> Transformers for Zero-Shot Learning in Professional Domains
  • Consider using transformer-based language models, specifically GPT-3 or its derivatives, for zero-shot or few-shot learning tasks in professional domains such as finance, law, and accounting, as these models have demonstrated strong performance on various assessments, including the CPA Exam. (Bommarito et al. 2023)

> Finance Machine Learning Algorithms Performance Comparison

>> Large Language Models vs Traditional ML in Finance
  • Consider using large language models like FinBERT for sentiment analysis of financial texts, particularly when dealing with small training samples or infrequent financial words, as it significantly outperforms traditional methods and other machine learning algorithms in these scenarios. (Huang, Wang, and Yang 2023)

> Addressing Observation Error in High Frequency Data Analysis

>> Addressing Measurement Error in Volatility Estimates Using High Frequency Data
  • Account for observation error when working with noisy high-frequency data, as failing to do so can lead to biased estimates of integrated volatility and an overestimation of the variance of the contamination noise. (L. Zhang, Mykland, and Aït-Sahalia 2005)
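
A NumPy sketch of the two-scales idea from the bullet above; prices are simulated, and the tick frequency, subsampling scale, and noise level are illustrative:

```python
# All-ticks realized variance is dominated by microstructure noise; averaging
# subsampled realized variances and subtracting a bias term recovers the
# integrated variance.
import numpy as np

rng = np.random.default_rng(12)
n = 23400                                        # one tick per second
p_true = np.cumsum(rng.normal(scale=2e-4, size=n))   # efficient log-price
p = p_true + rng.normal(scale=5e-4, size=n)      # observed price with noise

rv_all = np.sum(np.diff(p) ** 2)                 # badly biased upward
K = 300                                          # sparse-sampling scale
rv_sub = np.mean([np.sum(np.diff(p[k::K]) ** 2) for k in range(K)])
tsrv = rv_sub - ((n - K + 1) / (K * n)) * rv_all # two-scales estimator
print(f"all-ticks RV = {rv_all:.5f}, TSRV = {tsrv:.5f}, "
      f"truth = {np.sum(np.diff(p_true) ** 2):.5f}")
```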

  • Consider using high-frequency data to construct model-free estimates of daily exchange rate volatility and correlation, which are approximately free of measurement error and can be treated as observed variables, allowing for a more comprehensive understanding of your joint distribution and dynamic behavior. (NA?)

> Statistical Techniques for Disentangling Information Shocks

>> Disaggregating Monetary Policy & Soft Information Effects
  • Carefully decompose monetary policy shocks into policy changes and contemporaneous information shocks, as failing to do so may lead to biased estimates of the effects of monetary policy on the economy. (Jarociński and Karadi 2020)

> Avoiding Bias and Misinterpretation in Data Analysis

>> Statistical Reality Check for Significance Evaluation
  • Use a statistical “Reality Check” to evaluate the significance of your findings when conducting a specification search, which involves calculating the maximum of a vector of correlated normal random variables under the assumption that the null hypothesis is true, and comparing it to the observed maximum to determine the p-value. (White 2000)

> Statistical Modeling Techniques for Sports Data Analysis

>> Bayesian Hierarchical Models for Correlated Sports Data
  • Consider using a Bayesian hierarchical model when analyzing sports data, as it allows for the incorporation of prior information and the natural handling of correlated data, while also providing a straightforward way to make predictions based on the posterior distribution. (Baio and Blangiardo 2010)
>> Advanced statistical methods for analyzing sports data
  • Consider using betting market probabilities as a source of unbiased and efficient estimates of true game probabilities when developing models to compare team strength across different sports leagues. (Lopez, Matthews, and Baumer 2018)

  • Utilize a Poisson-type process model to analyze rare events in hockey, such as goals, which accounts for the duration of each event and the sparsity of goals, while also considering the impact of player abilities on the scoring rates through a Cox process model with regularization techniques such as the Lasso. (Thomas et al. 2013)

> Optimizing Decision Making with Advanced Causal Inference Techniques

>> Optimal Treatment Rules & Efficient Policy Learning
  • Consider using a semi-parametric additive outcome model for policy evaluation and learning under clustered network interference, as it allows for heterogeneous spillover effects and provides more efficient estimates compared to the standard IPW estimator. (Y. Zhang and Imai 2023)
>> Algorithmic Recommendation Evaluation & Bias Mitigation via Causal Inference
  • Employ a Bayesian safe policy learning framework that maximizes the posterior expected value while controlling the posterior expected Average Conditional Risk (ACRisk) to ensure that your algorithms do not yield worse outcomes for specific subgroups of individuals. (Jia, Ben-Michael, and Imai 2023)
>> Asymmetric Counterfactual Utility Functions for Policy Evaluation
  • Account for asymmetric counterfactual utilities when evaluating policies, as these utilities allow for varying utilities across different counterfactual outcomes and can provide a richer understanding of the decision problem compared to traditional utility functions that only rely on observed outcomes. (Ben-Michael, Imai, and Jiang 2022)

> Improving Research Reliability through Better Study Design

>> Enhancing Scientific Research Quality via Robust Practices
  • Consider implementing practices such as large-scale collaborative investigation, replication culture, registration, exchange, reproducibility practices, use of appropriate statistical methods, standardization of definitions and analysis techniques, stricter levels for claiming discoveries or successes, improved study design standards, better communication and dissemination systems, and increased training of scientific workforces in methodology and statistics to improve the reliability and effectiveness of your research. (J. P. A. Ioannidis and Khoury 2014)

> Reporting Both P-Values & Confidence Intervals

>> Complementarity of P-Values & Confidence Intervals
  • Report both p-values and confidence intervals to provide complementary information for evaluating scientific articles, as p-values allow decisions about rejecting or retaining pre-formulated null hypotheses while confidence intervals offer information on effect direction, magnitude, statistical plausibility, and clinical relevance. (Prel et al. 2009)

> Statistical Approaches Addressing Racial Disparities in Policing

>> Causal Inference with Policy Impact Analysis using Multiple Methods
  • Employ multiple methods, including panel data regressions and synthetic control approaches, to ensure robust and reliable estimates of the causal impact of policies, such as right-to-carry laws, on outcomes like violent crime. (Donohue, Aneja, and Weber 2019)
>> Natural Experiments Minimizing Endogeneity Concerns in Policy Intervention
  • Consider using natural experiments, such as a surge in police activity following a high-profile crime, to identify causal effects of policy interventions while minimizing endogeneity concerns. (Braakmann 2022)
>> Statistical methods addressing racial bias in policing
  • Carefully collect, clean, and analyze large-scale administrative data to uncover patterns of potential racial disparities in police stops, while recognizing the limitations of such data and interpreting findings in light of contextual factors. (“Police-Public Contact Survey, 2011” 2014)

  • Carefully account for local variation in analyzing police stop data, ideally using multilevel modeling techniques to adjust for precinct-level variability and avoid omitted variable biases. (Gelman, Fagan, and Kiss 2007)
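
A minimal random-intercept sketch of such a multilevel adjustment, assuming statsmodels and pandas; the precincts and effect sizes are simulated, and the cited study's actual model is richer:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(13)
n, n_precincts = 2000, 20
precinct = rng.integers(0, n_precincts, n)
precinct_effect = rng.normal(scale=0.5, size=n_precincts)
x = rng.normal(size=n)
df = pd.DataFrame({
    "precinct": precinct,
    "x": x,
    "y": 0.5 * x + precinct_effect[precinct] + rng.normal(size=n),
})

fit = smf.mixedlm("y ~ x", df, groups=df["precinct"]).fit()  # random intercepts
print(fit.summary())
```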

>> Statistical Biases and Benchmarking in Police Use-of-Force Studies
  • Avoid using Cesario et al.'s (2019) benchmarking methodology for measuring racial disparities in police use of lethal force, as it introduces strong statistical biases that mask true racial disparities, particularly in the killing of unarmed non-criminals. Instead, researchers should employ formally derived criminality-correcting benchmarks to accurately capture racial disparities in police violence. (C. T. Ross, Winterhalder, and McElreath 2020)

  • Carefully distinguish between population-level and encounter-conditional estimates of racial disparities in police use-of-force, as these estimates provide different information and can sometimes produce seemingly contradictory results due to factors like selection effects and heterogeneity in encounter rates. (C. T. Ross, Winterhalder, and McElreath 2018)

> Causal Inference through Natural Experiments & Rigorous Study Design

>> Natural Experiments for Isolating Compulsory Licensing Effects
  • Consider using natural experiments, like the 1956 Bell consent decree, to isolate the causal effects of compulsory licensing on innovation while avoiding confounding factors associated with changes in market structure. (Watzinger et al. 2020)

> Evaluation Frameworks for Advanced Language Models

>> Incorporating Domain Expertise into Model Evaluations
  • Actively involve domain experts in the creation of evaluation tasks for large language models (LLMs) to ensure that the tasks accurately reflect real-world scenarios and enable meaningful engagement in discussions of LLM performance using familiar terminology and conceptual frameworks. (Guha et al. 2023)

  • Carefully consider the appropriateness of your annotation scheme for the specific domain you are studying, particularly when working with legal texts, where traditional logical categories may not capture the nuances of legal reasoning. (NA?)

>> Assessing AI Performance on High-Stakes Professional Tasks
  • Consider evaluating the performance of advanced language models, such as GPT-4, on complex and high-stakes tasks, such as the Uniform Bar Exam, to better understand their capabilities and limitations. (Katz et al. 2023)

References

2015. Journal of Public Transportation 18 (June). https://doi.org/10.5038/2375-0901.18.2.
———. n.d. https://doi.org/10.1371/journal.pmed.0050201.t001.
Aad, G., T. Abajyan, B. Abbott, J. Abdallah, S. Abdel Khalek, A. A. Abdelalim, O. Abdinov, et al. 2012. “Observation of a New Particle in the Search for the Standard Model Higgs Boson with the ATLAS Detector at the LHC.” Physics Letters B 716 (September). https://doi.org/10.1016/j.physletb.2012.08.020.
Aarts, Sil, Björn Winkens, and Marjan van den Akker. 2011. “The Insignificance of Statistical Significance.” European Journal of General Practice 18 (December). https://doi.org/10.3109/13814788.2011.618222.
Abadi, Martín, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. 2016. “TensorFlow: A System for Large-Scale Machine Learning.” arXiv. https://doi.org/10.48550/ARXIV.1605.08695.
Abadie, Alberto. 2000. “Semiparametric Estimation of Instrumental Variable Models for Causal Effects,” September. https://doi.org/10.3386/t0260.
Abito, Jose Miguel. 2020. “Estimating Production Functions with Fixed Effects.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3510068.
“Abstract.” n.d. https://doi.org/10.7554/elife.01149.001.
Acemoglu, Daron, and Martin Kaae Jensen. 2009. “Aggregate Comparative Statics.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.1374641.
———. 2013. “Aggregate Comparative Statics.” Games and Economic Behavior 81 (September). https://doi.org/10.1016/j.geb.2013.03.009.
Achen, Christopher H. 2005. “Let’s Put Garbage-Can Regressions and Garbage-Can Probits Where They Belong.” Conflict Management and Peace Science 22 (September). https://doi.org/10.1080/07388940500339167.
Aert, Robbie C. M. van, Jelte M. Wicherts, and Marcel A. L. M. van Assen. 2016. “Conducting Meta-Analyses Based on p Values.” Perspectives on Psychological Science 11 (September). https://doi.org/10.1177/1745691616650874.
———. 2019. “Publication Bias Examined in Meta-Analyses from Psychology and Medicine: A Meta-Meta-Analysis.” PLOS ONE 14 (April). https://doi.org/10.1371/journal.pone.0215052.
Aert, Robbie Cornelis Maria van, and Marcel A. L. M. van Assen. 2018. “Correcting for Publication Bias in a Meta-Analysis with the p-Uniform* Method,” October. https://doi.org/10.31222/osf.io/zqjr9.
Aguinis, Herman, Matt Vassar, and Cole Wayant. 2019. “On Reporting and Interpreting Statistical Significance and p Values in Medical Research.” BMJ Evidence-Based Medicine 26 (November). https://doi.org/10.1136/bmjebm-2019-111264.
Alfons, Andreas, Christophe Croux, and Sarah Gelper. 2013. “Sparse Least Trimmed Squares Regression for Analyzing High-Dimensional Large Data Sets.” The Annals of Applied Statistics 7 (March). https://doi.org/10.1214/12-aoas575.
Alsan, Marcella, and Claudia Goldin. 2019. “Watersheds in Child Mortality: The Role of Effective Water and Sewerage Infrastructure, 1880–1920.” Journal of Political Economy 127 (April). https://doi.org/10.1086/700766.
Alvarez, Ignacio, Jarad Niemi, and Matt Simpson. 2014. “Bayesian Inference for a Covariance Matrix.” arXiv. https://doi.org/10.48550/ARXIV.1408.4050.
“American Economic Journal: Applied Economics.” n.d. https://doi.org/10.1257/app.
An, Zhe, Donghui Li, and Jin Yu. 2015. “Firm Crash Risk, Information Environment, and Speed of Leverage Adjustment.” Journal of Corporate Finance 31 (April). https://doi.org/10.1016/j.jcorpfin.2015.01.015.
Anderson, David R., Kenneth P. Burnham, and William L. Thompson. 2000. “Null Hypothesis Testing: Problems, Prevalence, and an Alternative.” The Journal of Wildlife Management 64 (October). https://doi.org/10.2307/3803199.
Andrews, Martyn, Thorsten Schank, and Richard Upward. 2006. “Practical Fixed-Effects Estimation Methods for the Three-Way Error-Components Model.” https://doi.org/10.22004/AG.ECON.119239.
Angrist, Joshua, and Michal Kolesár. 2021. “One Instrument to Rule Them All: The Bias and Coverage of Just-ID IV.” arXiv. https://doi.org/10.48550/ARXIV.2110.10556.
Ankel-Peters, Jörg, and Christoph M. Schmidt. 2023. “Rural Electrification, the Credibility Revolution, and the Limits of Evidence-Based Policy.” https://doi.org/10.4419/96973220.
Anscombe, F. J., and R. J. Aumann. 1963. “A Definition of Subjective Probability.” The Annals of Mathematical Statistics 34 (March). https://doi.org/10.1214/aoms/1177704255.
Antinyan, Armenak, and Zareh Asatryan. 2019. “Nudging for Tax Compliance: A Meta-Analysis.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3500744.
Arkolakis, Costas. 2010. “Market Penetration Costs and the New Consumers Margin in International Trade.” Journal of Political Economy 118 (December). https://doi.org/10.1086/657949.
Armitage, P., C. K. McPherson, and B. C. Rowe. 1969. “Repeated Significance Tests on Accumulating Data.” Journal of the Royal Statistical Society. Series A (General) 132. https://doi.org/10.2307/2343787.
Ashworth, Scott, and Ethan Bueno de Mesquita. 2005. “Monotone Comparative Statics for Models of Politics.” American Journal of Political Science 50 (December). https://doi.org/10.1111/j.1540-5907.2006.00180.x.
Askarov, Zohid, Anthony Doucouliagos, Hristos Doucouliagos, and T. D. Stanley. 2022. “The Significance of Data-Sharing Policy.” Journal of the European Economic Association 21 (September). https://doi.org/10.1093/jeea/jvac053.
Asparouhov, Tihomir, and Bengt Muthén. 2014. “Auxiliary Variables in Mixture Modeling: Three-Step Approaches Using Mplus.” Structural Equation Modeling: A Multidisciplinary Journal 21 (June). https://doi.org/10.1080/10705511.2014.915181.
Athey, S. 2002. “Monotone Comparative Statics Under Uncertainty.” The Quarterly Journal of Economics 117 (February). https://doi.org/10.1162/003355302753399481.
Auerbach, Alan J, and Yuriy Gorodnichenko. 2012. “Measuring the Output Responses to Fiscal Policy.” American Economic Journal: Economic Policy 4 (May). https://doi.org/10.1257/pol.4.2.1.
Babyak, M. A. 2004. “What You See May Not Be What You Get: A Brief, Nontechnical Introduction to Overfitting in Regression-Type Models.” Psychosomatic Medicine 66 (May). https://doi.org/10.1097/01.psy.0000127692.23278.a9.
Baddeley, Adrian, and Rolf Turner. 2005. “spatstat: An R Package for Analyzing Spatial Point Patterns.” Journal of Statistical Software 12. https://doi.org/10.18637/jss.v012.i06.
Bahadur, R. R., and Leonard J. Savage. 1956. “The Nonexistence of Certain Statistical Procedures in Nonparametric Problems.” The Annals of Mathematical Statistics 27 (December). https://doi.org/10.1214/aoms/1177728077.
Bai, Jushan, and Serena Ng. 2010. “Instrumental Variable Estimation in a Data Rich Environment.” Econometric Theory 26 (March). https://doi.org/10.1017/s0266466609990727.
Baio, Gianluca, and Marta Blangiardo. 2010. “Bayesian Hierarchical Model for the Prediction of Football Results.” Journal of Applied Statistics 37 (January). https://doi.org/10.1080/02664760802684177.
Bajzik, Josef, Tomas Havranek, Zuzana Irsova, and Jiri Schwarz. 2020. “Estimating the Armington Elasticity: The Importance of Study Design and Publication Bias.” Journal of International Economics 127 (November). https://doi.org/10.1016/j.jinteco.2020.103383.
Baker, Matthew E., and Ryan S. King. 2010. “A New Method for Detecting and Interpreting Biodiversity and Ecological Community Thresholds.” Methods in Ecology and Evolution 1 (February). https://doi.org/10.1111/j.2041-210x.2009.00007.x.
Balbus, Lukasz, Wojciech Olszewski, Kevin L. Reffett, and Lukasz Patryk Wozny. 2022. “Iterative Monotone Comparative Statics.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4039543.
Banfield, Jeffrey D., and Adrian E. Raftery. 1993. “Model-Based Gaussian and Non-Gaussian Clustering.” Biometrics 49 (September). https://doi.org/10.2307/2532201.
Barbieri, Maria M., James O. Berger, Edward I. George, and Veronika Ročková. 2021. “The Median Probability Model and Correlated Variables.” Bayesian Analysis 16 (December). https://doi.org/10.1214/20-ba1249.
Barron, Andrew, Lucien Birgé, and Pascal Massart. 1999. “Risk Bounds for Model Selection via Penalization.” Probability Theory and Related Fields 113 (February). https://doi.org/10.1007/s004400050210.
Bartels, Larry M. 1991. “Instrumental and ‘Quasi-Instrumental’ Variables.” American Journal of Political Science 35 (August). https://doi.org/10.2307/2111566.
Barthel, Anne-Christine, and Tarun Sabarwal. 2017. “Directional Monotone Comparative Statics.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3322667.
Bartolucci, Francesco, and Valentina Nigro. 2009. “Pseudo Conditional Maximum Likelihood Estimation of the Dynamic Logit Model for Binary Panel Data.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.1081146.
Beck, Nathaniel. 2010. “Causal Process ‘Observation’: Oxymoron or (Fine) Old Wine.” Political Analysis 18. https://doi.org/10.1093/pan/mpq023.
Benjamini, Yoav, and Yulia Gavrilov. 2009. “A Simple Forward Selection Procedure Based on False Discovery Rate Control.” The Annals of Applied Statistics 3 (March). https://doi.org/10.1214/08-aoas194.
Benjamini, Yoav, and Yosef Hochberg. 1995. “Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing.” Journal of the Royal Statistical Society: Series B (Methodological) 57 (January). https://doi.org/10.1111/j.2517-6161.1995.tb02031.x.
Ben-Michael, Eli, Kosuke Imai, and Zhichao Jiang. 2022. “Policy Learning with Asymmetric Counterfactual Utilities.” arXiv. https://doi.org/10.48550/ARXIV.2206.10479.
Berry, Steven T, and Giovanni Compiani. 2022. “An Instrumental Variable Approach to Dynamic Models.” The Review of Economic Studies 90 (September). https://doi.org/10.1093/restud/rdac061.
Birgé, Lucien, and Pascal Massart. 1993. “Rates of Convergence for Minimum Contrast Estimators.” Probability Theory and Related Fields 97 (March). https://doi.org/10.1007/bf01199316.
———. 2006. “Minimal Penalties for Gaussian Model Selection.” Probability Theory and Related Fields 138 (July). https://doi.org/10.1007/s00440-006-0011-8.
Birnbaum, Michael H. 1973. “The Devil Rides Again: Correlation as an Index of Fit.” Psychological Bulletin 79 (April). https://doi.org/10.1037/h0033853.
Bodnar, Olha, and Taras Bodnar. 2023. “Objective Bayesian Meta-Analysis Based on Generalized Marginal Multivariate Random Effects Model.” Bayesian Analysis, January. https://doi.org/10.1214/23-ba1363.
Bolin, David, and Finn Lindgren. 2011. “Spatial Models Generated by Nested Stochastic Partial Differential Equations, with an Application to Global Ozone Mapping.” The Annals of Applied Statistics 5 (March). https://doi.org/10.1214/10-aoas383.
Bommarito, Jillian, Michael Bommarito, Daniel Martin Katz, and Jessica Katz. 2023. “GPT as Knowledge Worker: A Zero-Shot Evaluation of (AI)CPA Capabilities.” arXiv. https://doi.org/10.48550/ARXIV.2301.04408.
Bonovas, Stefanos, and Daniele Piovani. 2023. “On p-Values and Statistical Significance.” Journal of Clinical Medicine 12 (January). https://doi.org/10.3390/jcm12030900.
Borenstein, Michael, Larry V. Hedges, Julian P. T. Higgins, and Hannah R. Rothstein. 2009. “Introduction to Meta‐analysis,” March. https://doi.org/10.1002/9780470743386.
Bound, John, David A. Jaeger, and Regina M. Baker. 1995. “Problems with Instrumental Variables Estimation When the Correlation Between the Instruments and the Endogenous Explanatory Variable Is Weak.” Journal of the American Statistical Association 90 (June). https://doi.org/10.2307/2291055.
Bowmaker, Simon W. 2012. “The Art and Practice of Economics Research,” September. https://doi.org/10.4337/9781849808477.
Box, G. E. P., and David A. Pierce. 1970. “Distribution of Residual Autocorrelations in Autoregressive-Integrated Moving Average Time Series Models.” Journal of the American Statistical Association 65 (December). https://doi.org/10.1080/01621459.1970.10481180.
Braakmann, Nils. 2022. “Does Stop and Search Reduce Crime? Evidence from Street-Level Data and a Surge in Operations Following a High-Profile Crime.” Journal of the Royal Statistical Society Series A: Statistics in Society 185 (April). https://doi.org/10.1111/rssa.12839.
Braumoeller, Bear F. 2006. “Explaining Variance; or, Stuck in a Moment We Can’t Get Out Of.” Political Analysis 14. https://doi.org/10.1093/pan/mpj009.
Breiman, Leo. 2001. “Statistical Modeling: The Two Cultures (with Comments and a Rejoinder by the Author).” Statistical Science 16 (August). https://doi.org/10.1214/ss/1009213726.
Breiman, Leo, and Jerome H. Friedman. 1985. “Estimating Optimal Transformations for Multiple Regression and Correlation.” Journal of the American Statistical Association 80 (September). https://doi.org/10.1080/01621459.1985.10478157.
Brodeur, Abel, Scott Carrell, David Figlio, and Lester Lusher. 2023. “Unpacking p-Hacking and Publication Bias.” American Economic Review 113 (November). https://doi.org/10.1257/aer.20210795.
Brodeur, Abel, Nikolai Cook, Jonathan Hartley, and Anthony Heyes. 2022. “Do Pre-Registration and Pre-Analysis Plans Reduce p-Hacking and Publication Bias?” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4180594.
Brodeur, Abel, Nikolai Cook, and Anthony Heyes. 2020. “Methods Matter: P-Hacking and Publication Bias in Causal Analysis in Economics.” American Economic Review 110 (November). https://doi.org/10.1257/aer.20190687.
———. 2022a. “We Need to Talk about Mechanical Turk: What 22,989 Hypothesis Tests Tell Us about Publication Bias and p-Hacking in Online Experiments.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4188289.
———. 2022b. “Methods Matter: P-Hacking and Publication Bias in Causal Analysis in Economics: Reply.” American Economic Review 112 (September). https://doi.org/10.1257/aer.20220277.
Brodeur, Abel, Nikolai Cook, and Carina Neisser. 2024. “P-Hacking, Data Type and Data-Sharing Policy.” The Economic Journal, January. https://doi.org/10.1093/ej/uead104.
Brodeur, Abel, Mathias Lé, Marc Sangnier, and Yanos Zylberberg. 2016. “Star Wars: The Empirics Strike Back.” American Economic Journal: Applied Economics 8 (January). https://doi.org/10.1257/app.20150044.
Browner, Warren S. 1987. “Are All Significant p Values Created Equal?” JAMA 257 (May). https://doi.org/10.1001/jama.1987.03390180077027.
Bruns, Stephan B., and John P. A. Ioannidis. 2016. “P-Curve and p-Hacking in Observational Research.” PLOS ONE 11 (February). https://doi.org/10.1371/journal.pone.0149144.
Bueno, Matheus, and Marica Valente. 2019. “The Effects of Pricing Waste Generation: A Synthetic Control Approach.” Journal of Environmental Economics and Management 96 (July). https://doi.org/10.1016/j.jeem.2019.06.004.
Bunn, Andrew G. 2008. “A Dendrochronology Program Library in R (dplR).” Dendrochronologia 26 (October). https://doi.org/10.1016/j.dendro.2008.01.002.
Burns, Shaun Michael, and Larry H. Ludlow. 2005. “Understanding Student Evaluations of Teaching Quality: The Contributions of Class Attendance.” Journal of Personnel Evaluation in Education 18 (May). https://doi.org/10.1007/s11092-006-9002-7.
Campbell, Harlan. 2022. “The World of Research Has Gone Berserk.” Open Science Framework, August. https://doi.org/10.17605/OSF.IO/YQCVA.
Cattaneo, Matias D., Michael Jansson, and Whitney K. Newey. 2018. “Inference in Linear Regression Models with Many Covariates and Heteroscedasticity.” Journal of the American Statistical Association 113 (June). https://doi.org/10.1080/01621459.2017.1328360.
Celentano, David D, Elizabeth Platz, and Shruti H Mehta. 2019. “The Centennial of the Department of Epidemiology at Johns Hopkins Bloomberg School of Public Health: A Century of Epidemiologic Discovery and Education.” American Journal of Epidemiology 188 (September). https://doi.org/10.1093/aje/kwz176.
Češnovar, Rok, Steve Bronder, Davor Sluga, Jure Demšar, Tadej Ciglarič, Sean Talts, and Erik Štrumbelj. 2019. “GPU-Based Parallel Computation Support for Stan.” arXiv. https://doi.org/10.48550/ARXIV.1907.01063.
Chambers, Christopher P., Federico Echenique, and Kota Saito. 2016. “Testing Theories of Financial Decision Making.” Proceedings of the National Academy of Sciences 113 (March). https://doi.org/10.1073/pnas.1517760113.
Charbonneau, Karyne B. 2014. “Multiple Fixed Effects in Binary Response Panel Data Models.” Bank of Canada. https://doi.org/10.34989/SWP-2014-17.
Chen, Mingli, Iván Fernández-Val, and Martin Weidner. 2014. “Nonlinear Factor Models for Network and Panel Data.” arXiv. https://doi.org/10.48550/ARXIV.1412.5647.
Chen, Yu, Alexander van Geen, Joseph H. Graziano, Alexander Pfaff, Malgosia Madajewicz, Faruque Parvez, A. Z. M. Iftekhar Hussain, Vesna Slavkovich, Tariqul Islam, and Habibul Ahsan. 2007. “Reduction in Urinary Arsenic Levels in Response to Arsenic Mitigation Efforts in Araihazar, Bangladesh.” Environmental Health Perspectives 115 (June). https://doi.org/10.1289/ehp.9833.
Chernozhukov, Victor, Christian Hansen, and Michael Jansson. 2007. “Inference Approaches for Instrumental Variable Quantile Regression.” Economics Letters 95 (May). https://doi.org/10.1016/j.econlet.2006.10.016.
Chesher, Andrew, and Adam Rosen. 2014. “Generalized Instrumental Variable Models,” January. https://doi.org/10.1920/wp.cem.2014.0414.
Chuard, Pierre J. C., Milan Vrtílek, Megan L. Head, and Michael D. Jennions. 2019. “Evidence That Nonsignificant Results Are Sometimes Preferred: Reverse p-Hacking or Selective Reporting?” PLOS Biology 17 (January). https://doi.org/10.1371/journal.pbio.3000127.
Clarke, Kevin A., and David M. Primo. 2007. “Modernizing Political Science: A Model-Based Approach.” Perspectives on Politics 5 (November). https://doi.org/10.1017/s1537592707072192.
Clarke, Paul S., and Frank Windmeijer. 2012. “Instrumental Variable Estimators for Binary Outcomes.” Journal of the American Statistical Association 107 (October). https://doi.org/10.1080/01621459.2012.734171.
“Classic Works of the Dempster-Shafer Theory of Belief Functions.” 2008. Studies in Fuzziness and Soft Computing. https://doi.org/10.1007/978-3-540-44792-4.
Cleveland, William S. 1979. “Robust Locally Weighted Regression and Smoothing Scatterplots.” Journal of the American Statistical Association 74 (December). https://doi.org/10.1080/01621459.1979.10481038.
Cleveland, William S., and Susan J. Devlin. 1988. “Locally Weighted Regression: An Approach to Regression Analysis by Local Fitting.” Journal of the American Statistical Association 83 (September). https://doi.org/10.1080/01621459.1988.10478639.
Cobb, George W. 2015. “Mere Renovation Is Too Little Too Late: We Need to Rethink Our Undergraduate Curriculum from the Ground Up.” arXiv. https://doi.org/10.48550/ARXIV.1507.05346.
Coffman, Lucas C., and Muriel Niederle. 2015. “Pre-Analysis Plans Have Limited Upside, Especially Where Replications Are Feasible.” Journal of Economic Perspectives 29 (August). https://doi.org/10.1257/jep.29.3.81.
Cohen, H. W. 2011. “P Values: Use and Misuse in Medical Literature.” American Journal of Hypertension 24 (January). https://doi.org/10.1038/ajh.2010.205.
Cole, Matthew A., Robert J R Elliott, and Bowen Liu. 2020. “The Impact of the Wuhan Covid-19 Lockdown on Air Pollution and Health: A Machine Learning and Augmented Synthetic Control Approach.” Environmental and Resource Economics 76 (August). https://doi.org/10.1007/s10640-020-00483-4.
“Comparative Statics, Informativeness, and the Interval Dominance Order.” 2009. Econometrica 77. https://doi.org/10.3982/ecta7583.
Concato, John, and John A Hartigan. 2016. “P Values: From Suggestion to Superstition.” Journal of Investigative Medicine 64 (October). https://doi.org/10.1136/jim-2016-000206.
Conley, Timothy G., Christian Hansen, Robert E. McCulloch, and Peter E. Rossi. 2007. “A Semi-Parametric Bayesian Approach to the Instrumental Variable Problem.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.917432.
“Contributions to Probability and Statistics.” 1989. https://doi.org/10.1007/978-1-4612-3678-8.
Cook, Jonathan A., Steven A. Julious, William Sones, Lisa V. Hampson, Catherine Hewitt, Jesse A. Berlin, Deborah Ashby, et al. 2018. “DELTA² Guidance on Choosing the Target Difference and Undertaking and Reporting the Sample Size Calculation for a Randomised Controlled Trial.” BMJ, November. https://doi.org/10.1136/bmj.k3750.
Correia, Sergio, Paulo Guimarães, and Tom Zylkin. 2020. “Fast Poisson Estimation with High-Dimensional Fixed Effects.” The Stata Journal: Promoting Communications on Statistics and Stata 20 (March). https://doi.org/10.1177/1536867x20909691.
Cox, D. R. 2020. “Statistical Significance.” Annual Review of Statistics and Its Application 7 (March). https://doi.org/10.1146/annurev-statistics-031219-041051.
Cox, Nicholas J. 2002. “Speaking Stata: How to Move Step by: Step.” The Stata Journal: Promoting Communications on Statistics and Stata 2 (March). https://doi.org/10.1177/1536867x0200200106.
Crüwell, Sophia, Johnny van Doorn, Alexander Etz, Matthew C. Makel, Hannah Moshontz, Jesse C. Niebaum, Amy Orben, Sam Parsons, and Michael Schulte-Mecklenbeck. 2019. “Seven Easy Steps to Open Science.” Zeitschrift Für Psychologie 227 (October). https://doi.org/10.1027/2151-2604/a000387.
Cruz-Gonzalez, Mario, Ivan Fernandez-Val, and Martin Weidner. 2016. “Probitfe and Logitfe: Bias Corrections for Probit and Logit Models with Two-Way Fixed Effects.” arXiv. https://doi.org/10.48550/ARXIV.1610.07714.
Cumming, Geoff. 2008. “Replication and p Intervals: p Values Predict the Future Only Vaguely, but Confidence Intervals Do Much Better.” Perspectives on Psychological Science 3 (July). https://doi.org/10.1111/j.1745-6924.2008.00079.x.
Curello, Gregorio, and Ludvig Sinander. 2022. “The Comparative Statics of Persuasion.” arXiv. https://doi.org/10.48550/ARXIV.2204.07474.
Cusumano-Towner, Marco F., Feras A. Saad, Alexander K. Lew, and Vikash K. Mansinghka. 2019. “Gen: A General-Purpose Probabilistic Programming System with Programmable Inference.” Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, June. https://doi.org/10.1145/3314221.3314642.
D’Haultfœuille, Xavier, Ao Wang, Philippe Février, and Lionel Wilner. 2022. “Estimating the Gains (and Losses) of Revenue Management.” arXiv. https://doi.org/10.48550/ARXIV.2206.04424.
Dahiru, T. 2011. “P-Value, a True Test of Statistical Significance? A Cautionary Note.” Annals of Ibadan Postgraduate Medicine 6 (March). https://doi.org/10.4314/aipm.v6i1.64038.
Dehejia, Rajeev H. 2005. “Program Evaluation as a Decision Problem.” Journal of Econometrics 125 (March). https://doi.org/10.1016/j.jeconom.2004.04.006.
DeJong, David N., Beth F. Ingram, and Charles H. Whiteman. 2000. “A Bayesian Approach to Dynamic Macroeconomics.” Journal of Econometrics 98 (October). https://doi.org/10.1016/s0304-4076(00)00019-1.
Denison, D. G. T., B. K. Mallick, and A. F. M. Smith. 1998. “Automatic Bayesian Curve Fitting.” Journal of the Royal Statistical Society Series B: Statistical Methodology 60 (July). https://doi.org/10.1111/1467-9868.00128.
Dezeure, Ruben, Peter Bühlmann, Lukas Meier, and Nicolai Meinshausen. 2015. “High-Dimensional Inference: Confidence Intervals, p-Values and R-Software hdi.” Statistical Science 30 (November). https://doi.org/10.1214/15-sts527.
Dhaene, Geert, and Yutao Sun. 2021. “Second-Order Corrected Likelihood for Nonlinear Panel Models with Fixed Effects.” Journal of Econometrics 220 (February). https://doi.org/10.1016/j.jeconom.2020.04.001.
Dickey, David A., and Wayne A. Fuller. 1979. “Distribution of the Estimators for Autoregressive Time Series with a Unit Root.” Journal of the American Statistical Association 74 (June). https://doi.org/10.1080/01621459.1979.10482531.
Dillon, Joshua V., Ian Langmore, Dustin Tran, Eugene Brevdo, Srinivas Vasudevan, Dave Moore, Brian Patton, Alex Alemi, Matt Hoffman, and Rif A. Saurous. 2017. “TensorFlow Distributions.” arXiv. https://doi.org/10.48550/ARXIV.1711.10604.
Ding, Peng. 2024. “Linear Model and Extensions.” arXiv. https://doi.org/10.48550/ARXIV.2401.00649.
Dixon, W. J. 1953. “Processing Data for Outliers.” Biometrics 9 (March). https://doi.org/10.2307/3001634.
Donoho, David L., and Iain M. Johnstone. 1995. “Adapting to Unknown Smoothness via Wavelet Shrinkage.” Journal of the American Statistical Association 90 (December). https://doi.org/10.1080/01621459.1995.10476626.
———. 1998. “Minimax Estimation via Wavelet Shrinkage.” The Annals of Statistics 26 (June). https://doi.org/10.1214/aos/1024691081.
Donoho, David, and Andrea Montanari. 2015. “High Dimensional Robust M-Estimation: Asymptotic Variance via Approximate Message Passing.” Probability Theory and Related Fields 166 (November). https://doi.org/10.1007/s00440-015-0675-z.
Donohue, John J., Abhay Aneja, and Kyle D. Weber. 2019. “Right‐to‐carry Laws and Violent Crime: A Comprehensive Assessment Using Panel Data and a State‐level Synthetic Control Analysis.” Journal of Empirical Legal Studies 16 (May). https://doi.org/10.1111/jels.12219.
Dreber, Anna, Magnus Johanneson, and Yifan Yang. 2023. “Selective Reporting of Placebo Tests in Top Economics Journals.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4456494.
Druckman, James N., Donald P. Green, James H. Kuklinski, and Arthur Lupia. 2006. “The Growth and Development of Experimental Research in Political Science.” American Political Science Review 100 (November). https://doi.org/10.1017/s0003055406062514.
Duan, Songyun, Anastasios Kementsietsidis, Kavitha Srinivas, and Octavian Udrea. 2011. “Apples and Oranges.” Proceedings of the 2011 ACM SIGMOD International Conference on Management of Data, June. https://doi.org/10.1145/1989323.1989340.
Duarte, Guilherme, Noam Finkelstein, Dean Knox, Jonathan Mummolo, and Ilya Shpitser. 2021. “An Automated Approach to Causal Inference in Discrete Settings.” arXiv. https://doi.org/10.48550/ARXIV.2109.13471.
Dunson, David B., Natesh Pillai, and Ju-Hyun Park. 2007. “Bayesian Density Regression.” Journal of the Royal Statistical Society Series B: Statistical Methodology 69 (March). https://doi.org/10.1111/j.1467-9868.2007.00582.x.
Duval, Sue, and Richard Tweedie. 2000. “Trim and Fill: A Simple Funnel‐plot–Based Method of Testing and Adjusting for Publication Bias in Meta‐analysis.” Biometrics 56 (June). https://doi.org/10.1111/j.0006-341x.2000.00455.x.
Dzhumashev, Ratbek, and Ainura Tursunalieva. 2019. “Synthetic Instrumental Variables.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3370143.
Echenique, Federico, and Ivana Komunjer. 2013. “A Test for Monotone Comparative Statics.” Structural Econometric Models, December. https://doi.org/10.1108/s0731-9053(2013)0000032007.
“Editorial Statement on Negative Findings.” 2015. Health Economics 24 (March). https://doi.org/10.1002/hec.3172.
Edlin, Aaron S., and Chris Shannon. 1998. “Strict Monotonicity in Comparative Statics.” Journal of Economic Theory 81 (July). https://doi.org/10.1006/jeth.1998.2405.
Egami, Naoki, Christian J. Fong, Justin Grimmer, Margaret E. Roberts, and Brandon M. Stewart. 2018. “How to Make Causal Inferences Using Texts,” February. http://arxiv.org/abs/1802.02163v1.
Eggers, Andrew C., Guadalupe Tuñón, and Allan Dafoe. 2023. “Placebo Tests for Causal Inference.” American Journal of Political Science, August. https://doi.org/10.1111/ajps.12818.
Ehrenberg, Ronald G., Dominic J. Brewer, Adam Gamoran, and J. Douglas Willms. 2001. “Class Size and Student Achievement.” Psychological Science in the Public Interest 2 (May). https://doi.org/10.1111/1529-1006.003.
Ekstrom, Claus Thorn, and Helle Sørensen. 2014. “Introduction to Statistical Data Analysis for the Life Sciences,” November. https://doi.org/10.1201/b17625.
Elliott, Graham, Nikolay Kudrin, and Kaspar Wüthrich. 2022. “The Power of Tests for Detecting p-Hacking.” arXiv. https://doi.org/10.48550/ARXIV.2205.07950.
Elliott, Graham, Ulrich K. Müller, and Mark W. Watson. 2015. “Nearly Optimal Tests When a Nuisance Parameter Is Present Under the Null Hypothesis.” Econometrica 83. https://doi.org/10.3982/ecta10535.
Ellsberg, Daniel. 1961. “Risk, Ambiguity, and the Savage Axioms.” The Quarterly Journal of Economics 75 (November). https://doi.org/10.2307/1884324.
Etz, Alexander, and Joachim Vandekerckhove. 2016. “A Bayesian Perspective on the Reproducibility Project: Psychology.” PLOS ONE 11 (February). https://doi.org/10.1371/journal.pone.0149794.
Fan, Jianqing, and Runze Li. 2001. “Variable Selection via Nonconcave Penalized Likelihood and Its Oracle Properties.” Journal of the American Statistical Association 96 (December). https://doi.org/10.1198/016214501753382273.
Fan, Jianqing, Chunming Zhang, and Jian Zhang. 2001. “Generalized Likelihood Ratio Statistics and Wilks Phenomenon.” The Annals of Statistics 29 (February). https://doi.org/10.1214/aos/996986505.
Fan, Jianqing, and Wenyang Zhang. 1999. “Statistical Estimation in Varying Coefficient Models.” The Annals of Statistics 27 (October). https://doi.org/10.1214/aos/1017939139.
Fernandez-Val, Ivan, and Francis Vella. 2007. “Bias Corrections for Two-Step Fixed Effects Panel Data Estimators.” Working Paper Series, February. https://doi.org/10.1920/wp.cem.2007.0407.
Fidler, Fiona, Geoff Cumming, Mark Burgman, and Neil Thomason. 2004. “Statistical Reform in Medicine, Psychology and Ecology.” The Journal of Socio-Economics 33 (November). https://doi.org/10.1016/j.socec.2004.09.035.
Fithian, William, Dennis Sun, and Jonathan Taylor. 2014. “Optimal Inference After Model Selection.” arXiv. https://doi.org/10.48550/ARXIV.1410.2597.
Fluegge, Keith R., and Kyle R. Fluegge. 2015. “Glyphosate Use Predicts ADHD Hospital Discharges in the Healthcare Cost and Utilization Project Net (HCUPnet): A Two-Way Fixed-Effects Analysis.” PLOS ONE 10 (August). https://doi.org/10.1371/journal.pone.0133525.
Fowler, James H., Timothy R. Johnson, James F. Spriggs, Sangick Jeon, and Paul J. Wahlbeck. 2007. “Network Analysis and the Law: Measuring the Legal Importance of Precedents at the U.S. Supreme Court.” Political Analysis 15. https://doi.org/10.1093/pan/mpm011.
Freund, Caroline L. 1998. “Multilateralism and the Endogenous Formation of PTAs.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.98390.
Fu, Wenjiang, and Keith Knight. 2000. “Asymptotics for Lasso-Type Estimators.” The Annals of Statistics 28 (October). https://doi.org/10.1214/aos/1015957397.
Gaines, Brian J., James H. Kuklinski, and Paul J. Quirk. 2007. “The Logic of the Survey Experiment Reexamined.” Political Analysis 15. https://doi.org/10.1093/pan/mpl008.
Galichon, Alfred, Yu-Wei Hsieh, and Maxime Sylvestre. 2023. “Monotone Comparative Statics for Submodular Functions, with an Application to Aggregated Deferred Acceptance.” arXiv. https://doi.org/10.48550/ARXIV.2304.12171.
Galichon, Alfred, Larry Samuelson, and Lucas Vernet. 2022. “Monotone Comparative Statics for Equilibrium Problems.” arXiv. https://doi.org/10.48550/ARXIV.2207.06731.
“Game Theory.” 2010. https://doi.org/10.1057/9780230280847.
Gao, Jian. 2020. “P-Values – a Chronic Conundrum.” BMC Medical Research Methodology 20 (June). https://doi.org/10.1186/s12874-020-01051-6.
Gardner, Beth, J. Andrew Royle, and Michael T. Wegan. 2009. “Hierarchical Models for Estimating Density from DNA Mark–Recapture Studies.” Ecology 90 (April). https://doi.org/10.1890/07-2112.1.
Gaudart, Jean, Laetitia Huiart, Paul J. Milligan, Rodolphe Thiebaut, and Roch Giorgi. 2014. “Reproducibility Issues in Science, Is p Value Really the Only Answer?” Proceedings of the National Academy of Sciences 111 (April). https://doi.org/10.1073/pnas.1323051111.
Gaure, Simen. 2014. “Correlation Bias Correction in Two‐way Fixed‐effects Linear Regression.” Stat 3 (March). https://doi.org/10.1002/sta4.68.
Gechert, Sebastian, Tomas Havranek, Zuzana Irsova, and Dominika Kolcunova. 2022. “Measuring Capital-Labor Substitution: The Importance of Method Choices and Publication Bias.” Review of Economic Dynamics 45 (July). https://doi.org/10.1016/j.red.2021.05.003.
Gelman, Andrew. 2013. “P Values and Statistical Practice.” Epidemiology 24 (January). https://doi.org/10.1097/ede.0b013e31827886f7.
———. 2017. “The Failure of Null Hypothesis Significance Testing When Studying Incremental Changes, and What to Do about It.” Personality and Social Psychology Bulletin 44 (September). https://doi.org/10.1177/0146167217729162.
———. 2018. “Ethics in Statistical Practice and Communication: Five Recommendations.” Significance 15 (October). https://doi.org/10.1111/j.1740-9713.2018.01193.x.
———. 2022. “Criticism as Asynchronous Collaboration: An Example from Social Science Research.” Stat 11 (June). https://doi.org/10.1002/sta4.464.
Gelman, Andrew, Jeffrey Fagan, and Alex Kiss. 2007. “An Analysis of the New York City Police Department’s ‘Stop-and-Frisk’ Policy in the Context of Claims of Racial Bias.” Journal of the American Statistical Association 102 (September). https://doi.org/10.1198/016214506000001040.
Gelman, Andrew, Brian Haig, Christian Hennig, Art Owen, Robert Cousins, Stan Young, Christian Robert, et al. 2019. “Many Perspectives on Deborah Mayo’s "Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars".” arXiv. https://doi.org/10.48550/ARXIV.1905.08876.
Gelman, Andrew, and Christian Hennig. 2015. “Beyond Subjective and Objective in Statistics.” arXiv. https://doi.org/10.48550/ARXIV.1508.05453.
Gelman, Andrew, Daniel Lee, and Jiqiang Guo. 2015. “Stan.” Journal of Educational and Behavioral Statistics 40 (October). https://doi.org/10.3102/1076998615606113.
Gelman, Andrew, and Eric Loken. 2014. “The Statistical Crisis in Science.” American Scientist 102. https://doi.org/10.1511/2014.111.460.
Gelman, Andrew, and Hal Stern. 2006. “The Difference Between ‘Significant’ and ‘Not Significant’ Is Not Itself Statistically Significant.” The American Statistician 60 (November). https://doi.org/10.1198/000313006x152649.
Gelman, Andrew, and Aki Vehtari. 2020. “What Are the Most Important Statistical Ideas of the Past 50 Years?” arXiv. https://doi.org/10.48550/ARXIV.2012.00174.
Gerring, John. 2004. “What Is a Case Study and What Is It Good For?” American Political Science Review 98 (May). https://doi.org/10.1017/s0003055404001182.
———. 2010. “How Good Is Good Enough? A Multidimensional, Best-Possible Standard for Research Design.” Political Research Quarterly 64 (August). https://doi.org/10.1177/1065912910361221.
Gershkov, Alex, Benny Moldovanu, Philipp Strack, and Mengxi Zhang. 2021. “A Theory of Auctions with Endogenous Valuations.” Journal of Political Economy 129 (April). https://doi.org/10.1086/712735.
Ghosh, Riddhi Pratim, Bani Mallick, and Mohsen Pourahmadi. 2021. “Bayesian Estimation of Correlation Matrices of Longitudinal Data.” Bayesian Analysis 16 (September). https://doi.org/10.1214/20-ba1237.
Gill, Christopher J, Lora Sabin, and Christopher H Schmid. 2005. “Why Clinicians Are Natural Bayesians.” BMJ 330 (May). https://doi.org/10.1136/bmj.330.7499.1080.
Goldman, S. A., and R. H. Sloan. 1995. “Can PAC Learning Algorithms Tolerate Random Attribute Noise?” Algorithmica 14 (July). https://doi.org/10.1007/bf01300374.
Golubnitschaja, Olga, Manuel Debald, Kristina Yeghiazaryan, Walther Kuhn, Martin Pešta, Vincenzo Costigliola, and Godfrey Grech. 2016. “Breast Cancer Epidemic in the Early Twenty-First Century: Evaluation of Risk Factors, Cumulative Questionnaires and Recommendations for Preventive Measures.” Tumor Biology 37 (July). https://doi.org/10.1007/s13277-016-5168-x.
Gómez-de-Mariscal, Estibaliz, Vanesa Guerrero, Alexandra Sneider, Hasini Jayatilaka, Jude M. Phillip, Denis Wirtz, and Arrate Muñoz-Barrutia. 2021. “Use of the p-Values as a Size-Dependent Function to Address Practical Differences When Analyzing Large Datasets.” Scientific Reports 11 (October). https://doi.org/10.1038/s41598-021-00199-5.
Goodman, Leo A. 1960. “On the Exact Variance of Products.” Journal of the American Statistical Association 55 (December). https://doi.org/10.1080/01621459.1960.10483369.
Goodman, Steven. 2008. “A Dirty Dozen: Twelve p-Value Misconceptions.” Seminars in Hematology 45 (July). https://doi.org/10.1053/j.seminhematol.2008.04.003.
Goodman, Steven N. 1999. “Toward Evidence-Based Medical Statistics. 1: The p Value Fallacy.” Annals of Internal Medicine 130 (June). https://doi.org/10.7326/0003-4819-130-12-199906150-00008.
Gorinova, Maria I., Dave Moore, and Matthew D. Hoffman. 2019. “Automatic Reparameterisation of Probabilistic Programs.” arXiv. https://doi.org/10.48550/ARXIV.1906.03028.
Graetz, Nick, Courtney E. Boen, and Michael H. Esposito. 2022. “Structural Racism and Quantitative Causal Inference: A Life Course Mediation Framework for Decomposing Racial Health Disparities.” Journal of Health and Social Behavior 63 (January). https://doi.org/10.1177/00221465211066108.
Gralinski, Lisa E., and Vineet D. Menachery. 2020. “Return of the Coronavirus: 2019-nCoV.” Viruses 12 (January). https://doi.org/10.3390/v12020135.
Greenland, Sander, Stephen J. Senn, Kenneth J. Rothman, John B. Carlin, Charles Poole, Steven N. Goodman, and Douglas G. Altman. 2016. “Statistical Tests, p Values, Confidence Intervals, and Power: A Guide to Misinterpretations.” European Journal of Epidemiology 31 (April). https://doi.org/10.1007/s10654-016-0149-3.
Gromenko, Oleksandr, Piotr Kokoszka, Lie Zhu, and Jan Sojka. 2012. “Estimation and Testing for Spatially Indexed Curves with Application to Ionospheric and Magnetic Field Trends.” The Annals of Applied Statistics 6 (June). https://doi.org/10.1214/11-aoas524.
Groot, A. D. de. 2014. “The Meaning of ‘Significance’ for Different Types of Research [Translated and Annotated by Eric-Jan Wagenmakers, Denny Borsboom, Josine Verhagen, Rogier Kievit, Marjan Bakker, Angelique Cramer, Dora Matzke, Don Mellenbergh, and Han l. J. Van Der Maas].” Acta Psychologica 148 (May). https://doi.org/10.1016/j.actpsy.2014.02.001.
Guha, Neel, Julian Nyarko, Daniel E. Ho, Christopher Ré, Adam Chilton, Aditya Narayana, Alex Chohlas-Wood, et al. 2023. “LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models.” arXiv. https://doi.org/10.48550/ARXIV.2308.11462.
Gupta, Alisha, and Frank Bosco. 2023. “Tempest in a Teacup: An Analysis of p-Hacking in Organizational Research.” PLOS ONE 18 (February). https://doi.org/10.1371/journal.pone.0281938.
Hall, Peter, and James Stephen Marron. 1987. “Extent to Which Least-Squares Cross-Validation Minimises Integrated Square Error in Nonparametric Density Estimation.” Probability Theory and Related Fields 74 (April). https://doi.org/10.1007/bf00363516.
Hall, Peter, and Prakash Patil. 1994. “Properties of Nonparametric Estimators of Autocovariance for Stationary Random Fields.” Probability Theory and Related Fields 99 (September). https://doi.org/10.1007/bf01199899.
“Handbook of Causal Analysis for Social Research.” 2013. Handbooks of Sociology and Social Research. https://doi.org/10.1007/978-94-007-6094-3.
Hansen, Bruce E. 1997. “Approximate Asymptotic p Values for Structural-Change Tests.” Journal of Business & Economic Statistics 15 (January). https://doi.org/10.1080/07350015.1997.10524687.
Hansen, Lars Peter, and Massimo Marinacci. 2016. “Ambiguity Aversion and Model Misspecification: An Economic Perspective.” Statistical Science 31 (November). https://doi.org/10.1214/16-sts570.
Harder, Jenna A. 2020. “The Multiverse of Methods: Extending the Multiverse Analysis to Address Data-Collection Decisions.” Perspectives on Psychological Science 15 (June). https://doi.org/10.1177/1745691620917678.
Hartman, Erin, and F. Daniel Hidalgo. 2018. “An Equivalence Approach to Balance and Placebo Tests.” American Journal of Political Science 62 (September). https://doi.org/10.1111/ajps.12387.
Harvey, Campbell R. 2017. “Presidential Address: The Scientific Outlook in Financial Economics.” The Journal of Finance 72 (August). https://doi.org/10.1111/jofi.12530.
Hastie, Trevor, and Werner Stuetzle. 1989. “Principal Curves.” Journal of the American Statistical Association 84 (June). https://doi.org/10.1080/01621459.1989.10478797.
Hau, Arthur. 2001. “A General Theorem on the Comparative Statics of Changes in Risk.” The Geneva Papers on Risk and Insurance Theory 26 (June). https://doi.org/10.1023/a:1011260207279.
Hausman, Jerry A., Whitney K. Newey, Tiemen Woutersen, John C. Chao, and Norman R. Swanson. 2012. “Instrumental Variable Estimation with Heteroskedasticity and Many Instruments.” Quantitative Economics 3 (July). https://doi.org/10.3982/qe89.
Havranek, Tomas, Zuzana Irsova, Lubica Laslopova, and Olesia Zeynalova. 2022. “Publication and Attenuation Biases in Measuring Skill Substitution.” Review of Economics and Statistics, July. https://doi.org/10.1162/rest_a_01227.
Hays, Spencer, Haipeng Shen, and Jianhua Z. Huang. 2012. “Functional Dynamic Factor Models with Application to Yield Curve Forecasting.” The Annals of Applied Statistics 6 (September). https://doi.org/10.1214/12-aoas551.
Healy, Kieran. 2017. “Fuck Nuance.” Sociological Theory 35 (June). https://doi.org/10.1177/0735275117709046.
Herbert, Trevor. 2018. “The Robert Minter Collection.” https://doi.org/10.21954/OU.RD.7258499.V1.
Hernan, M. A., and S. R. Cole. 2009. “Invited Commentary: Causal Diagrams and Measurement Bias.” American Journal of Epidemiology 170 (September). https://doi.org/10.1093/aje/kwp293.
Hernán, Miguel A. 2017. “Invited Commentary: Selection Bias Without Colliders.” American Journal of Epidemiology 185 (May). https://doi.org/10.1093/aje/kwx077.
Hewitt, Catherine E, Natasha Mitchell, and David J Torgerson. 2008. “Listen to the Data When Results Are Not Significant.” BMJ 336 (January). https://doi.org/10.1136/bmj.39379.359560.ad.
Higgins, Ayden, and Koen Jochmans. 2022. “Bootstrap Inference for Fixed-Effect Models.” arXiv. https://doi.org/10.48550/ARXIV.2201.11156.
Hirschauer, Norbert, Sven Grüner, Oliver Mußhoff, and Claudia Becker. 2018. “Pitfalls of Significance Testing and p-Value Variability: An Econometrics Perspective.” Statistics Surveys 12 (January). https://doi.org/10.1214/18-ss122.
Ho, Joses, Tayfun Tumkaya, Sameer Aryal, Hyungwon Choi, and Adam Claridge-Chang. 2018. “Moving Beyond p Values: Everyday Data Analysis with Estimation Plots,” July. https://doi.org/10.1101/377978.
Hoechle, Daniel. 2007. “Robust Standard Errors for Panel Regressions with Cross-Sectional Dependence.” The Stata Journal: Promoting Communications on Statistics and Stata 7 (September). https://doi.org/10.1177/1536867x0700700301.
Hoekstra, Rink, Richard D. Morey, Jeffrey N. Rouder, and Eric-Jan Wagenmakers. 2014. “Robust Misinterpretation of Confidence Intervals.” Psychonomic Bulletin & Review 21 (January). https://doi.org/10.3758/s13423-013-0572-3.
Hong, Yili, William Q. Meeker, and James D. McCalley. 2009. “Prediction of Remaining Life of Power Transformers Based on Left Truncated and Right Censored Lifetime Data.” The Annals of Applied Statistics 3 (June). https://doi.org/10.1214/00-aoas231.
Honoré, Bo E., Chris Muris, and Martin Weidner. 2021. “Dynamic Ordered Panel Logit Models.” arXiv. https://doi.org/10.48550/ARXIV.2107.03253.
Horie, Norio, and Ichiro Iwasaki. 2022. “Returns to Schooling in European Emerging Markets: A Meta-Analysis.” Education Economics 31 (February). https://doi.org/10.1080/09645292.2022.2036322.
Horn, Theara. 2011. “Incorporating Water Purification in Efficiency Evaluation: Evidence from Japanese Water Utilities.” Applied Economics Letters 18 (December). https://doi.org/10.1080/13504851.2011.564119.
Hu, Yingyao, and Susanne M. Schennach. 2008. “Instrumental Variable Treatment of Nonclassical Measurement Error Models.” Econometrica 76 (January). https://doi.org/10.1111/j.0012-9682.2008.00823.x.
Huang, Allen H., Hui Wang, and Yi Yang. 2023. “FinBERT: A Large Language Model for Extracting Information from Financial Text.” Contemporary Accounting Research 40 (January). https://doi.org/10.1111/1911-3846.12832.
Hubbard, Raymond, and M. J. Bayarri. 2003. “Confusion over Measures of Evidence (p’s) Versus Errors (α’s) in Classical Statistical Testing.” The American Statistician 57 (August). https://doi.org/10.1198/0003130031856.
Huelsenbeck, John P., and Bruce Rannala. 2004. “Frequentist Properties of Bayesian Posterior Probabilities of Phylogenetic Trees Under Simple and Complex Substitution Models.” Systematic Biology 53 (December). https://doi.org/10.1080/10635150490522629.
Huling, Jared D., and Peter Z. G. Qian. 2018. “Fast Penalized Regression and Cross Validation for Tall Data with the Oem Package.” arXiv. https://doi.org/10.48550/ARXIV.1801.09661.
Humphreys, Macartan, and Alan M. Jacobs. 2015. “Mixing Methods: A Bayesian Approach.” American Political Science Review 109 (November). https://doi.org/10.1017/s0003055415000453.
Humphreys, Macartan, Raul Sanchez de la Sierra, and Peter van der Windt. 2013. “Fishing, Commitment, and Communication: A Proposal for Comprehensive Nonbinding Research Registration.” Political Analysis 21. https://doi.org/10.1093/pan/mps021.
Huntington‐Klein, Nick, Andreu Arenas, Emily Beam, Marco Bertoni, Jeffrey R. Bloem, Pralhad Burli, Naibin Chen, et al. 2021. “The Influence of Hidden Researcher Decisions in Applied Microeconomics.” Economic Inquiry 59 (March). https://doi.org/10.1111/ecin.12992.
Ikromovna, Akhmedova Zulhumor. 2023. “SQL (Structured Query Language) Capabilities of the Statistical Database Language.” Zenodo, December. https://doi.org/10.5281/ZENODO.10427776.
Imai, Kosuke, Gary King, and Olivia Lau. 2008. “Toward a Common Framework for Statistical Analysis and Development.” Journal of Computational and Graphical Statistics 17 (December). https://doi.org/10.1198/106186008x384898.
Imai, Kosuke, Gary King, and Carlos Velasco Rivera. 2020. “Do Nonpartisan Programmatic Policies Have Partisan Electoral Effects? Evidence from Two Large-Scale Experiments.” The Journal of Politics 82 (April). https://doi.org/10.1086/707059.
Imbens, Guido W. 2021. “Statistical Significance, p-Values, and the Reporting of Uncertainty.” Journal of Economic Perspectives 35 (August). https://doi.org/10.1257/jep.35.3.157.
Imbens, Guido W., and Joshua D. Angrist. 1994. “Identification and Estimation of Local Average Treatment Effects.” Econometrica 62 (March). https://doi.org/10.2307/2951620.
Imbens, Guido W., and Donald B. Rubin. 2015. “Causal Inference for Statistics, Social, and Biomedical Sciences,” April. https://doi.org/10.1017/cbo9781139025751.
“Instrumental Variable Models for Discrete Outcomes.” 2010. Econometrica 78. https://doi.org/10.3982/ecta7315.
Ioannidis, John P. A. 2005. “Why Most Published Research Findings Are False.” PLoS Medicine 2 (August). https://doi.org/10.1371/journal.pmed.0020124.
———. 2007. “Limitations Are Not Properly Acknowledged in the Scientific Literature.” Journal of Clinical Epidemiology 60 (April). https://doi.org/10.1016/j.jclinepi.2006.09.011.
———. 2018. “The Proposal to Lower p Value Thresholds to .005.” JAMA 319 (April). https://doi.org/10.1001/jama.2018.1536.
Ioannidis, John P. A., and Muin J. Khoury. 2014. “Assessing Value in Biomedical Research.” JAMA 312 (August). https://doi.org/10.1001/jama.2014.6932.
Ioannidis, John P. A., T. D. Stanley, and Hristos Doucouliagos. 2017. “The Power of Bias in Economics Research.” The Economic Journal 127 (October). https://doi.org/10.1111/ecoj.12461.
Ioannidis, John P. A., and Thomas A. Trikalinos. 2007. “An Exploratory Test for an Excess of Significant Findings.” Clinical Trials 4 (June). https://doi.org/10.1177/1740774507079441.
Irsova, Zuzana, Hristos Doucouliagos, Tomas Havranek, and T. D. Stanley. 2023. “Meta‐analysis of Social Science Research: A Practitioner’s Guide.” Journal of Economic Surveys, November. https://doi.org/10.1111/joes.12595.
Jans, Ivette. 1989. “Extensions and Applications of Principal-Agent Problems.” https://doi.org/10.26021/5389.
Jarociński, Marek, and Peter Karadi. 2020. “Deconstructing Monetary Policy Surprises— the Role of Information Shocks.” American Economic Journal: Macroeconomics 12 (April). https://doi.org/10.1257/mac.20180090.
Jensen, Martin Kaae. 2017. “Distributional Comparative Statics.” The Review of Economic Studies 85 (May). https://doi.org/10.1093/restud/rdx021.
Jia, Zeyang, Eli Ben-Michael, and Kosuke Imai. 2023. “Bayesian Safe Policy Learning with Chance Constrained Optimization: Application to Military Security Assessment During the Vietnam War.” arXiv. https://doi.org/10.48550/ARXIV.2307.08840.
Jiang, Liang, Peter C. B. Phillips, Yubo Tao, and Yichong Zhang. 2021. “Regression-Adjusted Estimation of Quantile Treatment Effects Under Covariate-Adaptive Randomizations.” arXiv. https://doi.org/10.48550/ARXIV.2105.14752.
Jochmans, Koen, and Vincenzo Verardi. 2020. “Fitting Exponential Regression Models with Two-Way Fixed Effects.” The Stata Journal: Promoting Communications on Statistics and Stata 20 (June). https://doi.org/10.1177/1536867x20931006.
Kagan, Evgeny, Stephen Leider, and William S. Lovejoy. 2018. “Ideation–Execution Transition in Product Development: An Experimental Analysis.” Management Science 64 (May). https://doi.org/10.1287/mnsc.2016.2709.
Kamenica, Emir, and Matthew Gentzkow. 2009. “Bayesian Persuasion,” November. https://doi.org/10.3386/w15540.
Karlan, Dean, Sneha Stephen, Jonathan Zinman, Keesler Welch, and Violetta Kuzmova. 2016. “Behind the GATE Experiment: Evidence on Effects of and Rationales for Subsidized Entrepreneurship Training.” AEA Randomized Controlled Trials, July. https://doi.org/10.1257/rct.1234.
Katki, H. A. 2008. “Invited Commentary: Evidence-Based Evaluation of p Values and Bayes Factors.” American Journal of Epidemiology 168 (June). https://doi.org/10.1093/aje/kwn148.
Katz, Daniel Martin, Dirk Hartung, Lauritz Gerlach, Abhik Jana, and Michael J. Bommarito. 2023. “Natural Language Processing in the Legal Domain.” arXiv. https://doi.org/10.48550/ARXIV.2302.12039.
Keele, Luke, Corrine McConnaughy, and Ismail White. 2012. “Strengthening the Experimenter’s Toolbox: Statistical Estimation of Internal Validity.” American Journal of Political Science 56 (February). https://doi.org/10.1111/j.1540-5907.2011.00576.x.
Kennedy, Edward H., Sivaraman Balakrishnan, and Max G’Sell. 2018. “Sharp Instruments for Classifying Compliers and Generalizing Causal Effects.” arXiv. https://doi.org/10.48550/ARXIV.1801.03635.
Kicinski, Michal. 2013. “Publication Bias in Recent Meta-Analyses.” PLoS ONE 8 (November). https://doi.org/10.1371/journal.pone.0081823.
Kilkenny, Carol, Nick Parsons, Ed Kadyszewski, Michael F. W. Festing, Innes C. Cuthill, Derek Fry, Jane Hutton, and Douglas G. Altman. 2009. “Survey of the Quality of Experimental Design, Statistical Analysis and Reporting of Research Using Animals.” PLoS ONE 4 (November). https://doi.org/10.1371/journal.pone.0007824.
King, Brian, and Daniel R. Kowal. 2023. “Warped Dynamic Linear Models for Time Series of Counts.” Bayesian Analysis, January. https://doi.org/10.1214/23-ba1394.
King, Gary, Robert O. Keohane, and Sidney Verba. 1995. “The Importance of Research Design in Political Science.” American Political Science Review 89 (June). https://doi.org/10.2307/2082445.
Kingma, Diederik P, and Max Welling. 2013. “Auto-Encoding Variational Bayes.” arXiv. https://doi.org/10.48550/ARXIV.1312.6114.
Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. 2016. “Inherent Trade-Offs in the Fair Determination of Risk Scores.” arXiv. https://doi.org/10.48550/ARXIV.1609.05807.
Kliemann, Wolfgang. 1987. “Recurrence and Invariant Measures for Degenerate Diffusions.” The Annals of Probability 15 (April). https://doi.org/10.1214/aop/1176992166.
“Knowledge-Based Intelligent Information and Engineering Systems.” 2004. Lecture Notes in Computer Science. https://doi.org/10.1007/b100910.
Koch, Roy W., and Gary M. Smillie. 1986. “Bias in Hydrologic Prediction Using Log-Transformed Regression Models.” JAWRA Journal of the American Water Resources Association 22 (October). https://doi.org/10.1111/j.1752-1688.1986.tb00744.x.
Koladjo, Babagnidé François, Sylvie Escolano, and Pascale Tubert-Bitter. 2018. “Instrumental Variable Analysis in the Context of Dichotomous Outcome and Exposure with a Numerical Experiment in Pharmacoepidemiology.” BMC Medical Research Methodology 18 (June). https://doi.org/10.1186/s12874-018-0513-y.
Korbmacher, Max, Flavio Azevedo, Charlotte R. Pennington, Helena Hartmann, Madeleine Pownall, Kathleen Schmidt, Mahmoud Elsherif, et al. 2023. “The Replication Crisis Has Led to Positive Structural, Procedural, and Community Changes.” Communications Psychology 1 (July). https://doi.org/10.1038/s44271-023-00003-2.
Kotani, Daisuke, Eiji Oki, Yoshiaki Nakamura, Hiroki Yukami, Saori Mishima, Hideaki Bando, Hiromichi Shirasu, et al. 2023. “Molecular Residual Disease and Efficacy of Adjuvant Chemotherapy in Patients with Colorectal Cancer.” Nature Medicine 29 (January). https://doi.org/10.1038/s41591-022-02115-4.
Kukushkin, Nikolai S. 2011. “Monotone Comparative Statics: Changes in Preferences Versus Changes in the Feasible Set.” Economic Theory 52 (November). https://doi.org/10.1007/s00199-011-0677-8.
Kul, Seval. 2014. “Interpretation of Statistical Results: What Is p Value and Confidence Interval?” Plevra Bulteni 8 (March). https://doi.org/10.5152/pb.2014.003.
Lahiri, Somdeb. 2011. “Comparative Statics of Oligopoly Equilibrium in a Pure Exchange Economy.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.1744661.
Laird, Nan M., and James H. Ware. 1982. “Random-Effects Models for Longitudinal Data.” Biometrics 38 (December). https://doi.org/10.2307/2529876.
Lakens, Daniël. 2015. “On the Challenges of Drawing Conclusions from p-Values Just Below 0.05.” PeerJ 3 (July). https://doi.org/10.7717/peerj.1142.
———. 2021. “The Practical Alternative to the p Value Is the Correctly Used p Value.” Perspectives on Psychological Science 16 (February). https://doi.org/10.1177/1745691620958012.
Lakens, Daniël, and Alexander J. Etz. 2017. “Too True to Be Bad.” Social Psychological and Personality Science 8 (May). https://doi.org/10.1177/1948550617693058.
Lang, Stefan, and Andreas Brezger. 2004. “Bayesian p-Splines.” Journal of Computational and Graphical Statistics 13 (March). https://doi.org/10.1198/1061860043010.
Larcker, David F., and Tjomme O. Rusticus. 2008. “On the Use of Instrumental Variables in Accounting Research.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.694824.
Lavine, Isaac, Michael Lindon, and Mike West. 2021. “Adaptive Variable Selection for Sequential Prediction in Multivariate Dynamic Models.” Bayesian Analysis 16 (December). https://doi.org/10.1214/20-ba1245.
Leamer, Edward E. 2010. “Tantalus on the Road to Asymptopia.” Journal of Economic Perspectives 24 (May). https://doi.org/10.1257/jep.24.2.31.
Lee, David S., Justin McCrary, Marcelo J. Moreira, and Jack Porter. 2020. “Valid t-Ratio Inference for IV.” arXiv. https://doi.org/10.48550/ARXIV.2010.05058.
Letnes, Louise, and Julia Ann Kelly. 2002. “AgEcon Search: Partners Build a Web Resource.” Issues in Science and Technology Librarianship, May. https://doi.org/10.29173/istl1891.
Leung, Dennis, and Mathias Drton. 2014. “Order-Invariant Prior Specification in Bayesian Factor Analysis.” arXiv. https://doi.org/10.48550/ARXIV.1409.7672.
Lew, Michael J. 2013. “To p or Not to p: On the Evidential Nature of p-Values and Their Place in Scientific Inference.” arXiv. https://doi.org/10.48550/ARXIV.1311.0081.
———. 2019. “A Reckless Guide to p-Values.” Good Research Practice in Non-Clinical Pharmacology and Biomedicine. https://doi.org/10.1007/164_2019_286.
Li, Minghao, Xi He, Wendong Zhang, Shuyang Qu, Lulu Rodriguez, and James M. Gbeda. 2023. “Farmers’ Reactions to the US–China Trade War: Perceptions Versus Behaviors.” Journal of the Agricultural and Applied Economics Association 2 (June). https://doi.org/10.1002/jaa2.68.
Li, Qing, and Nan Lin. 2010. “The Bayesian Elastic Net.” Bayesian Analysis 5 (March). https://doi.org/10.1214/10-ba506.
Li, Wei. n.d. “Three Essays on Corporate Finance and Research Methodology.” https://doi.org/10.32657/10356/138252.
Liebl, Dominik. 2013. “Modeling and Forecasting Electricity Spot Prices: A Functional Data Perspective.” The Annals of Applied Statistics 7 (September). https://doi.org/10.1214/13-aoas652.
Litterman, Robert B. 1986. “Forecasting with Bayesian Vector Autoregressions—Five Years of Experience.” Journal of Business & Economic Statistics 4 (January). https://doi.org/10.1080/07350015.1986.10509491.
Liu, Yu-bin, Li Zhao, Jing Ding, Jie Zhu, Cheng-long Xie, Zhen-kai Wu, Xuan Yang, and Hai Li. 2016. “Association Between Maternal Age at Conception and Risk of Idiopathic Clubfoot.” Acta Orthopaedica 87 (February). https://doi.org/10.3109/17453674.2016.1153359.
Loenneker, Hannah D., Erin M. Buchanan, Ana Martinovici, Maximilian A. Primbs, Mahmoud M. Elsherif, Bradley J. Baker, Leonie A. Dudda, et al. 2024. “We Don’t Know What You Did Last Summer. On the Importance of Transparent Reporting of Reaction Time Data Pre-Processing.” Cortex 172 (March). https://doi.org/10.1016/j.cortex.2023.11.012.
Lopez, Michael J., Gregory J. Matthews, and Benjamin S. Baumer. 2018. “How Often Does the Best Team Win? A Unified Approach to Understanding Randomness in North American Sport.” The Annals of Applied Statistics 12 (December). https://doi.org/10.1214/18-aoas1165.
Lundberg, Ian, Rebecca Johnson, and Brandon Stewart. 2020. “What Is Your Estimand? Defining the Target Quantity Connects Statistical Evidence to Theory,” January. https://doi.org/10.31235/osf.io/ba67n.
Magnan, Nicholas. 2017. “Experimental Games to Teach Farmers about Weather Index Insurance.” AEA Randomized Controlled Trials, August. https://doi.org/10.1257/rct.2401-2.0.
Mahoney, James. 2000. “Strategies of Causal Inference in Small-n Analysis.” Sociological Methods & Research 28 (May). https://doi.org/10.1177/0049124100028004001.
Mahoney, James, and Gary Goertz. 2006. “A Tale of Two Cultures: Contrasting Quantitative and Qualitative Research.” Political Analysis 14. https://doi.org/10.1093/pan/mpj017.
Mark, Daniel B., Kerry L. Lee, and Frank E. Harrell. 2016. “Understanding the Role of p Values and Hypothesis Tests in Clinical Research.” JAMA Cardiology 1 (December). https://doi.org/10.1001/jamacardio.2016.3312.
Matosin, Natalie, Elisabeth Frank, Martin Engel, Jeremy S. Lum, and Kelly A. Newell. 2014. “Negativity Towards Negative Results: A Discussion of the Disconnect Between Scientific Worth and Scientific Culture.” Disease Models & Mechanisms 7 (February). https://doi.org/10.1242/dmm.015123.
“Maximum-Entropy and Bayesian Methods in Science and Engineering.” 1988. https://doi.org/10.1007/978-94-009-3049-0.
McCauley, Stewart M., and Morten H. Christiansen. 2019. “Language Learning as Language Use: A Cross-Linguistic Model of Child Language Development.” Psychological Review 126 (January). https://doi.org/10.1037/rev0000126.
McElreath, Richard, and Paul E. Smaldino. 2015. “Replication, Communication, and the Population Dynamics of Scientific Discovery.” PLOS ONE 10 (August). https://doi.org/10.1371/journal.pone.0136088.
Meinshausen, Nicolai, Lukas Meier, and Peter Bühlmann. 2008. “P-Values for High-Dimensional Regression.” arXiv. https://doi.org/10.48550/ARXIV.0811.2177.
Meng, Xiaoyu, and Zhenlin Yang. 2022. “Unbalanced or Incomplete Spatial Panel Data Models with Fixed Effects.” https://doi.org/10.2139/ssrn.4290420.
Milunsky, Aubrey. 2003. “Lies, Damned Lies, and Medical Experts: The Abrogation of Responsibility by Specialty Organizations and a Call for Action.” Journal of Child Neurology 18 (June). https://doi.org/10.1177/08830738030180060401.
Mitsilegas, Valsamis. 2012. “Immigration Control in an Era of Globalization: Deflecting Foreigners, Weakening Citizens, and Strengthening the State.” Indiana Journal of Global Legal Studies 19. https://doi.org/10.2979/indjglolegstu.19.1.3.
Mittelhammer, Ron C. 2013. “Mathematical Statistics for Economics and Business.” https://doi.org/10.1007/978-1-4614-5022-1.
“Modelling and Prediction Honoring Seymour Geisser.” 1996. https://doi.org/10.1007/978-1-4612-2414-3.
“Monotone Comparative Statics with Separable Objective Functions.” 2010. Economics Bulletin. https://doi.org/10.5167/UZH-38325.
Monte, Enrico De. 2023. “Nonparametric Instrumental Regression with Two-Way Fixed Effects.” Journal of Econometric Methods 0 (October). https://doi.org/10.1515/jem-2022-0025.
Moonesinghe, Ramal, Muin J Khoury, and A. Cecile J. W Janssens. 2007. “Most Published Research Findings Are False—but a Little Replication Goes a Long Way.” PLoS Medicine 4 (February). https://doi.org/10.1371/journal.pmed.0040028.
Muandet, Krikamol, Arash Mehrjou, Si Kai Lee, and Anant Raj. 2019. “Dual Instrumental Variable Regression.” arXiv. https://doi.org/10.48550/ARXIV.1910.12358.
Mudholkar, Govind S., and Yogendra P. Chaubey. 2009. “On Defining p-Values.” Statistics & Probability Letters 79 (September). https://doi.org/10.1016/j.spl.2009.06.006.
Muralidharan, Omkar. 2010. “An Empirical Bayes Mixture Method for Effect Size and False Discovery Rate Estimation.” The Annals of Applied Statistics 4 (March). https://doi.org/10.1214/09-aoas276.
Neyapti, Bilin. 2010. “Fiscal Decentralization and Deficits: International Evidence.” European Journal of Political Economy 26 (June). https://doi.org/10.1016/j.ejpoleco.2010.01.001.
Ni, Wei-Tou. 2017. “Gravitational Wave Detection in Space.” One Hundred Years of General Relativity, May. https://doi.org/10.1142/9789814635134_0012.
Ohn, Ilsang, and Yongdai Kim. 2022. “Posterior Consistency of Factor Dimensionality in High-Dimensional Sparse Factor Models.” Bayesian Analysis 17 (June). https://doi.org/10.1214/21-ba1261.
Okunogbe, Oyebola, and Victor Pouliquen. 2022. “Technology, Taxation, and Corruption: Evidence from the Introduction of Electronic Tax Filing.” American Economic Journal: Economic Policy 14 (February). https://doi.org/10.1257/pol.20200123.
Omay, Tolga, and Nuri Ucar. 2023. “Testing for Unit Roots in Nonlinear Dynamic Heterogeneous Panels with Logistic Smooth Breaks.” Symmetry 15 (March). https://doi.org/10.3390/sym15030747.
Paciorek, Christopher J., Jeff D. Yanosky, Robin C. Puett, Francine Laden, and Helen H. Suh. 2009. “Practical Large-Scale Spatio-Temporal Modeling of Particulate Matter Concentrations.” The Annals of Applied Statistics 3 (March). https://doi.org/10.1214/08-aoas204.
Palesch, Yuko Y. 2014. “Some Common Misperceptions about p Values.” Stroke 45 (December). https://doi.org/10.1161/strokeaha.114.006138.
Pan, Wei. 2001. “Akaike’s Information Criterion in Generalized Estimating Equations.” Biometrics 57 (March). https://doi.org/10.1111/j.0006-341x.2001.00120.x.
“Panel Data Models with Interactive Fixed Effects.” 2009. Econometrica 77. https://doi.org/10.3982/ecta6135.
Paul, Uttiya, and Tarun Sabarwal. 2023. “Directional Monotone Comparative Statics in Function Spaces.” Economic Theory Bulletin 11 (April). https://doi.org/10.1007/s40505-023-00248-4.
Phillips, Carl V. 2004. “Publication Bias in Situ.” BMC Medical Research Methodology 4 (August). https://doi.org/10.1186/1471-2288-4-20.
“Philosophical Transactions of the Royal Society of London. Series A, Mathematical and Physical Sciences.” n.d. https://doi.org/10.1098/rsta.
Piponi, Dan, Dave Moore, and Joshua V. Dillon. 2020. “Joint Distributions for TensorFlow Probability.” arXiv. https://doi.org/10.48550/ARXIV.2001.11819.
“Police-Public Contact Survey, 2011.” 2014. https://doi.org/10.3886/ICPSR34276.V1.
Poole, Charles. 2001. “Low p-Values or Narrow Confidence Intervals: Which Are More Durable?” Epidemiology 12 (May). https://doi.org/10.1097/00001648-200105000-00005.
Prel, Jean-Baptist du, Gerhard Hommel, Bernd Röhrig, and Maria Blettner. 2009. “Confidence Interval or p-Value? Part 4 of a Series on Evaluation of Scientific Publications.” Deutsches Ärzteblatt International, May. https://doi.org/10.3238/arztebl.2009.0335.
Pritikin, Joshua N., Lance M. Rappaport, and Michael C. Neale. 2017. “Likelihood-Based Confidence Intervals for a Parameter with an Upper or Lower Bound.” Structural Equation Modeling: A Multidisciplinary Journal 24 (January). https://doi.org/10.1080/10705511.2016.1275969.
Qin, Tao, and Tie-Yan Liu. 2013. “Introducing LETOR 4.0 Datasets.” arXiv. https://doi.org/10.48550/ARXIV.1306.2597.
Quah, John K.-H. 2007. “The Comparative Statics of Constrained Optimization Problems.” Econometrica 75 (March). https://doi.org/10.1111/j.1468-0262.2006.00752.x.
Ragin, Charles C. 2006. “Set Relations in Social Research: Evaluating Their Consistency and Coverage.” Political Analysis 14. https://doi.org/10.1093/pan/mpj019.
Rahnenführer, Jörg, Riccardo De Bin, Axel Benner, Federico Ambrogi, Lara Lusa, Anne-Laure Boulesteix, Eugenia Migliavacca, et al. 2023. “Statistical Analysis of High-Dimensional Biomedical Data: A Gentle Introduction to Analytical Goals, Common Approaches and Challenges.” BMC Medicine 21 (May). https://doi.org/10.1186/s12916-023-02858-y.
Rainey, Carlisle. 2014. “Arguing for a Negligible Effect.” American Journal of Political Science 58 (March). https://doi.org/10.1111/ajps.12102.
Rajaram, Ravi, Jeanette W. Chung, Andrew T. Jones, Mark E. Cohen, Allison R. Dahlke, Clifford Y. Ko, John L. Tarpley, Frank R. Lewis, David B. Hoyt, and Karl Y. Bilimoria. 2014. “Association of the 2011 ACGME Resident Duty Hour Reform with General Surgery Patient Outcomes and with Resident Examination Performance.” JAMA 312 (December). https://doi.org/10.1001/jama.2014.15277.
Roberts, Seth. 2004. “Self-Experimentation as a Source of New Ideas: Ten Examples about Sleep, Mood, Health, and Weight.” Behavioral and Brain Sciences 27 (April). https://doi.org/10.1017/s0140525x04000068.
Rohlfing, Ingo, and Christina Isabel Zuber. 2019. “Check Your Truth Conditions! Clarifying the Relationship Between Theories of Causation and Social Science Methods for Causal Inference.” Sociological Methods & Research 50 (February). https://doi.org/10.1177/0049124119826156.
Romano, Joseph P., Azeem M. Shaikh, and Michael Wolf. 2010. “Hypothesis Testing in Econometrics.” Annual Review of Economics 2 (September). https://doi.org/10.1146/annurev.economics.102308.124342.
Ross, Catherine E. 1996. “Work, Family, and Well-Being in the United States, 1990.” https://doi.org/10.3886/ICPSR06666.
Ross, Cody T., Bruce Winterhalder, and Richard McElreath. 2018. “Resolution of Apparent Paradoxes in the Race-Specific Frequency of Use-of-Force by Police.” Palgrave Communications 4 (June). https://doi.org/10.1057/s41599-018-0110-z.
———. 2020. “Racial Disparities in Police Use of Deadly Force Against Unarmed Individuals Persist After Appropriately Benchmarking Shooting Data on Violent Crime Rates.” Social Psychological and Personality Science 12 (June). https://doi.org/10.1177/1948550620916071.
Rothman, Kenneth J. 2014. “Six Persistent Research Misconceptions.” Journal of General Internal Medicine 29 (January). https://doi.org/10.1007/s11606-013-2755-z.
Rousseeuw, Peter J., and Christophe Croux. 1993. “Alternatives to the Median Absolute Deviation.” Journal of the American Statistical Association 88 (December). https://doi.org/10.1080/01621459.1993.10476408.
Rousseeuw, Peter J., and Bert C. van Zomeren. 1990. “Unmasking Multivariate Outliers and Leverage Points.” Journal of the American Statistical Association 85 (September). https://doi.org/10.1080/01621459.1990.10474920.
Roy, Sunanda, and Tarun Sabarwal. 2010. “Monotone Comparative Statics for Games with Strategic Substitutes.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3323084.
Rubin, Mark. 2017. “Do p Values Lose Their Meaning in Exploratory Analyses? It Depends How You Define the Familywise Error Rate.” Review of General Psychology 21 (September). https://doi.org/10.1037/gpr0000123.
Rue, Håvard, Sara Martino, and Nicolas Chopin. 2009. “Approximate Bayesian Inference for Latent Gaussian Models by Using Integrated Nested Laplace Approximations.” Journal of the Royal Statistical Society Series B: Statistical Methodology 71 (April). https://doi.org/10.1111/j.1467-9868.2008.00700.x.
Ruscitti, Francesco, and Ram Sewak Dubey. 2015. “Monotone Comparative Statics in General Equilibrium.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2653388.
Sainani, Kristin L. 2012. “Clinical Versus Statistical Significance.” PM&R 4 (June). https://doi.org/10.1016/j.pmrj.2012.04.014.
Santos, Léonard, Guillaume Thirel, and Charles Perrin. 2018. “Technical Note: Pitfalls in Using Log-Transformed Flows Within the KGE Criterion.” Hydrology and Earth System Sciences 22 (August). https://doi.org/10.5194/hess-22-4583-2018.
Savitz, D. A., and A. F. Olshan. 1998. “Describing Data Requires No Adjustment for Multiple Comparisons: A Reply from Savitz and Olshan.” American Journal of Epidemiology 147 (May). https://doi.org/10.1093/oxfordjournals.aje.a009532.
Scheel, Anne M., Mitchell R. M. J. Schijen, and Daniël Lakens. 2021. “An Excess of Positive Results: Comparing the Standard Psychology Literature with Registered Reports.” Advances in Methods and Practices in Psychological Science 4 (April). https://doi.org/10.1177/25152459211007467.
Schlomer, Gabriel L., Sheri Bauman, and Noel A. Card. 2010. “Best Practices for Missing Data Management in Counseling Psychology.” Journal of Counseling Psychology 57. https://doi.org/10.1037/a0018082.
Schmidt, Frank L., and John E. Hunter. 2015. “Methods of Meta-Analysis: Correcting Error and Bias in Research Findings.” https://doi.org/10.4135/9781483398105.
Schotman, Peter C., and Herman K. Van Dijk. 1991. “On Bayesian Routes to Unit Roots.” Journal of Applied Econometrics 6 (October). https://doi.org/10.1002/jae.3950060407.
Schrodt, Philip A. 2013. “Seven Deadly Sins of Contemporary Quantitative Political Analysis.” Journal of Peace Research 51 (October). https://doi.org/10.1177/0022343313499597.
Schupp, Jürgen, Jan Goebel, Martin Kroh, Carsten Schröder, Charlotte Bartels, Klaudia Erhardt, Alexandra Fedorets, et al. 2017. “Sozio-Oekonomisches Panel (SOEP), Daten der Jahre 1984-2016 [Socio-Economic Panel (SOEP), Data for the Years 1984-2016].” https://doi.org/10.5684/SOEP.V33.
“Science.” n.d. https://doi.org/10.1126/science.
Sen, Pranab Kumar. 1968. “Estimates of the Regression Coefficient Based on Kendall’s Tau.” Journal of the American Statistical Association 63 (December). https://doi.org/10.1080/01621459.1968.10480934.
Shao, Jun, and Huaibao Feng. 2007. “Group Sequential t-Test for Clinical Trials with Small Sample Sizes Across Stages.” Contemporary Clinical Trials 28 (September). https://doi.org/10.1016/j.cct.2007.02.006.
Shirai, Koji. 2010. “Monotone Comparative Statics of Characteristic Demand.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.1553547.
Simar, Léopold, and Paul W. Wilson. 2007. “Estimation and Inference in Two-Stage, Semi-Parametric Models of Production Processes.” Journal of Econometrics 136 (January). https://doi.org/10.1016/j.jeconom.2005.07.009.
Simonsohn, Uri, Leif D. Nelson, and Joseph P. Simmons. 2014. “p-Curve and Effect Size.” Perspectives on Psychological Science 9 (November). https://doi.org/10.1177/1745691614553988.
Singh, Rahul, Maneesh Sahani, and Arthur Gretton. 2019. “Kernel Instrumental Variable Regression.” arXiv. https://doi.org/10.48550/ARXIV.1906.00232.
Singpurwalla, Nozer D. 2006. “Reliability and Risk.” Wiley Series in Probability and Statistics, August. https://doi.org/10.1002/9780470060346.
Skene, Simon S., and Michael G. Kenward. 2010. “The Analysis of Very Small Samples of Repeated Measurements I: An Adjusted Sandwich Estimator.” Statistics in Medicine 29 (September). https://doi.org/10.1002/sim.4073.
Smaldino, Paul E., and Richard McElreath. 2016. “The Natural Selection of Bad Science.” Royal Society Open Science 3 (September). https://doi.org/10.1098/rsos.160384.
Sober, Elliott. 2004. “A Modest Proposal.” Philosophy and Phenomenological Research 68 (March). https://doi.org/10.1111/j.1933-1592.2004.tb00361.x.
Somaini, Paulo, and Frank A. Wolak. 2016. “An Algorithm to Estimate the Two-Way Fixed Effects Model.” Journal of Econometric Methods 5 (January). https://doi.org/10.1515/jem-2014-0008.
Soukho, A. Kaya, A. K. Traoré, I. B. Diall, D. Sy, M. Dembélé, B. D. Camara, N. Tolo, et al. 2019. “Clinic Evaluation of Heart Failure of Old People in the Department of Internal Medicine of Point G University Hospital from 2008 to 2012.” Open Journal of Internal Medicine 9. https://doi.org/10.4236/ojim.2019.93012.
Sousa, Wesley O. de, Lincey E. Sousa, Fátima R. J. da Silva, Wildio I. da Graça Santos, and Rodrigo Aranda. 2019. “Composition and Structure of the Frugivorous Butterfly Community (Lepidoptera: Nymphalidae) at the Serra Azul State Park (PESA), Mato Grosso, Brazil.” Zoologia 36 (May). https://doi.org/10.3897/zoologia.36.e27708.
South, L. F., C. J. Oates, A. Mira, and C. Drovandi. 2023. “Regularized Zero-Variance Control Variates.” Bayesian Analysis 18 (September). https://doi.org/10.1214/22-ba1328.
Spanos, Aris. 1984. “Probability Theory and Statistical Inference,” February. https://doi.org/10.1017/cbo9780511754081.
Speiser, Jaime Lynn, Michael E. Miller, Janet Tooze, and Edward Ip. 2019. “A Comparison of Random Forest Variable Selection Methods for Classification Prediction Modeling.” Expert Systems with Applications 134 (November). https://doi.org/10.1016/j.eswa.2019.05.028.
Stahel, Werner A. 2021. “New Relevance and Significance Measures to Replace p-Values.” PLOS ONE 16 (June). https://doi.org/10.1371/journal.pone.0252991.
Stang, Andreas, Charles Poole, and Oliver Kuss. 2010. “The Ongoing Tyranny of Statistical Significance Testing in Biomedical Research.” European Journal of Epidemiology 25 (March). https://doi.org/10.1007/s10654-010-9440-x.
Stanley, T. D., and Hristos Doucouliagos. 2022. “Harnessing the Power of Excess Statistical Significance: Weighted and Iterative Least Squares.” Psychological Methods, May. https://doi.org/10.1037/met0000502.
Stefan, Angelika M., and Felix D. Schönbrodt. 2023. “Big Little Lies: A Compendium and Simulation of p-Hacking Strategies.” Royal Society Open Science 10 (February). https://doi.org/10.1098/rsos.220346.
Stock, James H., and Francesco Trebbi. 2003. “Retrospectives: Who Invented Instrumental Variable Regression?” Journal of Economic Perspectives 17 (August). https://doi.org/10.1257/089533003769204416.
Stock, James H., and Mark W. Watson. 2002. “Forecasting Using Principal Components from a Large Number of Predictors.” Journal of the American Statistical Association 97 (December). https://doi.org/10.1198/016214502388618960.
Strassen, V. 1964. “An Invariance Principle for the Law of the Iterated Logarithm.” Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 3. https://doi.org/10.1007/bf00534910.
“Structural Econometric Models.” 2013. Advances in Econometrics, December. https://doi.org/10.1108/s0731-9053(2013)31.
Strulovici, B. H., and T. A. Weber. 2007. “Monotone Comparative Statics: Geometric Approach.” Journal of Optimization Theory and Applications 137 (December). https://doi.org/10.1007/s10957-007-9339-1.
Sturtz, Sibylle, Uwe Ligges, and Andrew Gelman. 2005. “R2WinBUGS: A Package for Running WinBUGS from R.” Journal of Statistical Software 12. https://doi.org/10.18637/jss.v012.i03.
Sullivan, Gail M., and Richard Feinn. 2012. “Using Effect Size—or Why the p Value Is Not Enough.” Journal of Graduate Medical Education 4 (September). https://doi.org/10.4300/jgme-d-12-00156.1.
Sullivan, Patrick F. 2007. “Spurious Genetic Associations.” Biological Psychiatry 61 (May). https://doi.org/10.1016/j.biopsych.2006.11.010.
Szucs, Denes, and John P. A. Ioannidis. 2017. “Empirical Assessment of Published Effect Sizes and Power in the Recent Cognitive Neuroscience and Psychology Literature.” PLOS Biology 15 (March). https://doi.org/10.1371/journal.pbio.2000797.
Tamer, Elie. 2010. “Partial Identification in Econometrics.” Annual Review of Economics 2 (September). https://doi.org/10.1146/annurev.economics.050708.143401.
Tappin, Ben M., Gordon Pennycook, and David G. Rand. 2020. “Thinking Clearly about Causal Inferences of Politically Motivated Reasoning: Why Paradigmatic Study Designs Often Undermine Causal Inference.” Current Opinion in Behavioral Sciences 34 (August). https://doi.org/10.1016/j.cobeha.2020.01.003.
“The Handbook of Research Synthesis and Meta-Analysis.” 2019, June. https://doi.org/10.7758/9781610448864.
“The Oxford Handbook of Bayesian Econometrics.” 2011, September. https://doi.org/10.1093/oxfordhb/9780199559084.001.0001.
Thiem, Alrik. 2019. “Beyond the Facts: Limited Empirical Diversity and Causal Inference in Qualitative Comparative Analysis.” Sociological Methods & Research 51 (November). https://doi.org/10.1177/0049124119882463.
Thomas, A. C., Samuel L. Ventura, Shane T. Jensen, and Stephen Ma. 2013. “Competing Process Hazard Function Models for Player Ratings in Ice Hockey.” The Annals of Applied Statistics 7 (September). https://doi.org/10.1214/13-aoas646.
Tibugari, H., C. Chiduza, A. B. Mashingaidze, and S. Mabasa. 2022. “Reduced Atrazine Doses Combined with Sorghum Aqueous Extracts Inhibit Emergence and Growth of Weeds.” African Journal of Food, Agriculture, Nutrition and Development 22 (May). https://doi.org/10.18697/ajfand.108.19505.
Tomlinson, D. L., J. G. Wilson, C. R. Harris, and D. W. Jeffrey. 1980. “Problems in the Assessment of Heavy-Metal Levels in Estuaries and the Formation of a Pollution Index.” Helgoländer Meeresuntersuchungen 33 (March). https://doi.org/10.1007/bf02414780.
Tomz, Michael, Jason Wittenberg, and Gary King. 2003. “Clarify: Software for Interpreting and Presenting Statistical Results.” Journal of Statistical Software 8. https://doi.org/10.18637/jss.v008.i01.
Trafimow, David. 2019. “A Frequentist Alternative to Significance Testing, p-Values, and Confidence Intervals.” Econometrics 7 (June). https://doi.org/10.3390/econometrics7020026.
Tremblay, Carol Horton, and Victor J. Tremblay. 2010. “The Neglect of Monotone Comparative Statics Methods.” The Journal of Economic Education 41 (March). https://doi.org/10.1080/00220481003617293.
Tsui, Kam-Wah, and Samaradasa Weerahandi. 1989. “Generalized p-Values in Significance Testing of Hypotheses in the Presence of Nuisance Parameters.” Journal of the American Statistical Association 84 (June). https://doi.org/10.1080/01621459.1989.10478810.
Veldkamp, Coosje L. S., Michèle B. Nuijten, Linda Dominguez-Alvarez, Marcel A. L. M. van Assen, and Jelte M. Wicherts. 2014. “Statistical Reporting Errors and Collaboration on Statistical Analyses in Psychological Science.” PLoS ONE 9 (December). https://doi.org/10.1371/journal.pone.0114876.
Vevea, Jack L., and Carol M. Woods. 2005. “Publication Bias in Research Synthesis: Sensitivity Analysis Using a Priori Weight Functions.” Psychological Methods 10 (December). https://doi.org/10.1037/1082-989x.10.4.428.
Vosgerau, Joachim, Uri Simonsohn, Leif D. Nelson, and Joseph P. Simmons. 2019. “99% Impossible: A Valid, or Falsifiable, Internal Meta-Analysis.” Journal of Experimental Psychology: General 148 (September). https://doi.org/10.1037/xge0000663.
Vu, Patrick. 2022. “Can the Replication Rate Tell Us about Publication Bias?” arXiv. https://doi.org/10.48550/ARXIV.2206.15023.
Wagenmakers, Eric-Jan. 2007. “A Practical Solution to the Pervasive Problems of p Values.” Psychonomic Bulletin & Review 14 (October). https://doi.org/10.3758/bf03194105.
Wasserstein, Ronald L., and Nicole A. Lazar. 2016. “The ASA Statement on p-Values: Context, Process, and Purpose.” The American Statistician 70 (April). https://doi.org/10.1080/00031305.2016.1154108.
Watzinger, Martin, Thomas A. Fackler, Markus Nagler, and Monika Schnitzer. 2020. “How Antitrust Enforcement Can Spur Innovation: Bell Labs and the 1956 Consent Decree.” American Economic Journal: Economic Policy 12 (November). https://doi.org/10.1257/pol.20190086.
Weinstein, Eli N., and David M. Blei. 2024. “Hierarchical Causal Models.” arXiv. https://doi.org/10.48550/ARXIV.2401.05330.
West, Mike, P. Jeff Harrison, and Helio S. Migon. 1985. “Dynamic Generalized Linear Models and Bayesian Forecasting.” Journal of the American Statistical Association 80 (March). https://doi.org/10.1080/01621459.1985.10477131.
White, Halbert. 2000. “A Reality Check for Data Snooping.” Econometrica 68 (September). https://doi.org/10.1111/1468-0262.00152.
Williams, Richard. 2016. “Understanding and Interpreting Generalized Ordered Logit Models.” The Journal of Mathematical Sociology 40 (January). https://doi.org/10.1080/0022250x.2015.1112384.
Windmeijer, Frank. 2000. “A Finite Sample Correction for the Variance of Linear Two-Step GMM Estimators.” Working Paper Series, November. https://doi.org/10.1920/wp.ifs.2000.0019.
Winkler, Robert L. 1981. “Combining Probability Distributions from Dependent Information Sources.” Management Science 27 (April). https://doi.org/10.1287/mnsc.27.4.479.
Winn, Linda C. 2018. “Book Review: Opening Up by Writing It Down: How Expressive Writing Improves Health and Eases Emotional Pain.” Dramatherapy 39 (March). https://doi.org/10.1080/02630672.2018.1448098.
Wood, S. N. 2012. “On p-Values for Smooth Components of an Extended Generalized Additive Model.” Biometrika 100 (October). https://doi.org/10.1093/biomet/ass048.
Wood, Simon N. 2003. “Thin Plate Regression Splines.” Journal of the Royal Statistical Society Series B: Statistical Methodology 65 (January). https://doi.org/10.1111/1467-9868.00374.
Woolston, Chris. 2015. “Psychology Journal Bans p Values.” Nature 519 (February). https://doi.org/10.1038/519009f.
Xiao, Zhijie. 2001. “Testing the Null Hypothesis of Stationarity Against an Autoregressive Unit Root Alternative.” Journal of Time Series Analysis 22 (January). https://doi.org/10.1111/1467-9892.00213.
Xie, Fangzheng, Joshua Cape, Carey E. Priebe, and Yanxun Xu. 2022. “Bayesian Sparse Spiked Covariance Model with a Continuous Matrix Shrinkage Prior.” Bayesian Analysis 17 (December). https://doi.org/10.1214/21-ba1292.
Yang, Kaiyu, and Fanhuai Shi. 2023. “Medium- and Long-Term Load Forecasting for Power Plants Based on Causal Inference and Informer.” Applied Sciences 13 (June). https://doi.org/10.3390/app13137696.
Yao, Weixin, and Sijia Xiang. 2016. “Nonparametric and Varying Coefficient Modal Regression.” arXiv. https://doi.org/10.48550/ARXIV.1602.06609.
Ye, Zhi-Sheng, Yili Hong, and Yimeng Xie. 2013. “How Do Heterogeneities in Operating Environments Affect Field Failure Predictions and Test Planning?” The Annals of Applied Statistics 7 (December). https://doi.org/10.1214/13-aoas666.
Yuan, Ming, V. Roshan Joseph, and Hui Zou. 2009. “Structured Variable Selection and Estimation.” The Annals of Applied Statistics 3 (December). https://doi.org/10.1214/09-aoas254.
Zaykin, D. V., Lev A. Zhivotovsky, P. H. Westfall, and B. S. Weir. 2002. “Truncated Product Method for Combining p-Values.” Genetic Epidemiology 22 (January). https://doi.org/10.1002/gepi.0042.
Zhang, Lan, Per A. Mykland, and Yacine Aït-Sahalia. 2005. “A Tale of Two Time Scales.” Journal of the American Statistical Association 100 (December). https://doi.org/10.1198/016214505000000169.
Zhang, Yi, and Kosuke Imai. 2023. “Individualized Policy Evaluation and Learning Under Clustered Network Interference.” arXiv. https://doi.org/10.48550/ARXIV.2311.02467.
Zhao, Yize, Changgee Chang, Jingwen Zhang, and Zhengwu Zhang. 2023. “Genetic Underpinnings of Brain Structural Connectome for Young Adults.” Journal of the American Statistical Association 118 (February). https://doi.org/10.1080/01621459.2022.2156349.
Zhou, Rensheng R., Nicoleta Serban, and Nagi Gebraeel. 2011. “Degradation Modeling Applied to Residual Lifetime Prediction Using Functional Data Analysis.” The Annals of Applied Statistics 5 (June). https://doi.org/10.1214/10-aoas448.
Ziliak, Stephen T., and Deirdre N. McCloskey. 2004. “Size Matters: The Standard Error of Regressions in the American Economic Review.” The Journal of Socio-Economics 33 (November). https://doi.org/10.1016/j.socec.2004.09.024.
Zou, Hui. 2006. “The Adaptive Lasso and Its Oracle Properties.” Journal of the American Statistical Association 101 (December). https://doi.org/10.1198/016214506000000735.
Zuberi, Tukufu. 2000. “Deracializing Social Statistics: Problems in the Quantification of Race.” The ANNALS of the American Academy of Political and Social Science 568 (March). https://doi.org/10.1177/000271620056800113.