Automated Syllabus of Causal Inference Papers

Built by Rex W. Douglass @RexDouglass ; Github ; LinkedIn

Papers curated by hand, summaries and taxonomy written by LLMs.

Submit paper to add for review

Introduction

Definition Of Causal Inference

  • Focus on developing causal inference techniques within recommender systems to mitigate bias, promote explanation, and improve generalization, ultimately leading to better decision making and fairer recommendations. (Zhu, Ma, and Li 2023)

  • Use advanced machine learning techniques, particularly those focused on causal inference, to enhance decision-making processes in various fields, especially healthcare, by accurately estimating the causal effects of interventions and treatments from observational data. (Bica et al. 2020)

  • Teach replication to graduate students as a valuable approach to promote active learning of quantitative research skills, enhance your understanding of the scientific method, and contribute to the accumulation of knowledge in a discipline. (Stojmenovska, Bol, and Leopold 2019)

  • Utilize Directed Acyclic Graphs (DAGs) and the Causal Markov Condition to identify causal relationships within complex systems, while acknowledging the importance of the Faithfulness Assumption to ensure that observed independence relations are due to underlying causal structure rather than mere chance. (Gebharter 2017)

  • Carefully consider the underlying causal mechanisms and potential confounding factors before conducting experiments or analyzing observational data, as this can significantly impact the validity and generalizability of your conclusions. (Pfister et al. 2017)

  • Embrace mixed methods designs that combine qualitative and quantitative methods in order to better capture the complexity of causal processes and improve the validity of inferred causal effects. (“Handbook of Causal Analysis for Social Research” 2013)

  • Account for interference between individuals within the same group when conducting causal inference, especially in situations like infectious disease spread or housing voucher allocation, and utilize a two-stage randomization process to develop unbiased estimators for direct, indirect, total, and overall causal effects. (Hudgens and Halloran 2008)

Importance Of Causal Inference

  • Carefully consider the composition of experimental stimuli, specifically the presence or absence of irregularly spelled words, as it significantly impacts the results of lexical decision tasks, while having less influence on pronunciation tasks. (NA?)

Fundamental Principles And Theories

  • Focus on identifying and analyzing direct non-redundant regularities within your data to accurately capture causal relationships. (Andreas and Günther 2024)

  • Utilize the counterfactual or potential outcomes framework when analyzing observational data to ensure accurate causal inferences, while accounting for confounding variables and potential exposure-confounder feedback processes. (Glass et al. 2013)

  • Employ multiple experimental designs, such as parallel, crossover, parallel encouragement, and crossover encouragement designs, to enhance the identification power of causal mechanisms and reduce reliance on untestable assumptions. (Imai, Tingley, and Yamamoto 2012)

  • Adopt a hierarchical causal inference (HCI) model to understand how the human nervous system performs causal inference in perception, as it explains auditory-visual spatial judgments, within-modality and cross-modality oddity detection, and optimal time windows of auditory-visual integration. (Shams and Beierholm 2010)

  • Consider the importance of defining causal parameters within formal economic models, recognizing the limitations of empirical knowledge, and addressing the challenges posed by the identification problem in order to effectively conduct policy analysis and evaluate the impacts of various policies. (Heckman 1999)

  • Carefully consider the distinction between Type 1 and Type 2 direct effects when analyzing causal relationships, as they address different research questions and require different estimation techniques. (NA?)

  • Recognize race as an individual attribute rather than a manipulable cause, and therefore shift from causal reasoning to associational reasoning in analyzing race-related phenomena. (NA?)

Potential Outcomes Framework

  • Clearly identify and justify the causal estimand - the quantity being estimated - in order to ensure valid causal inferences. (Rubin 2005)
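
A minimal sketch (simulated, hypothetical data) of why the estimand must be stated explicitly: when treatment uptake is correlated with effect heterogeneity, the ATE and the ATT are different quantities.

```python
# Contrast two common estimands, the ATE and the ATT, under potential outcomes.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)                      # covariate driving effect and uptake
y0 = rng.normal(size=n)                     # potential outcome under control
y1 = y0 + 1.0 + 0.5 * x                     # heterogeneous treatment effect
t = rng.binomial(1, 1 / (1 + np.exp(-x)))   # uptake increases with x

ate = np.mean(y1 - y0)                      # effect for the whole population
att = np.mean((y1 - y0)[t == 1])            # effect among the treated only
print(f"ATE = {ate:.3f}, ATT = {att:.3f}")  # ATT > ATE: the treated have larger x
```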

Randomization Techniques

  • Adopt a more cautious approach to interpreting and generalizing results from design-based research, particularly in complex policy domains such as rural electrification, due to concerns regarding external validity, selective reporting, and publication bias. (Ankel-Peters and Schmidt 2023)

  • Use preregistration to enhance the credibility of your clinical trials, as evidenced by the absence of p-hacking in preregistered trials compared to non-preregistered trials. (Decker and Ottaviani 2023)

  • Utilise pre-analysis plans (PAPs) alongside pre-registration to effectively combat p-hacking and publication bias in Randomised Controlled Trials (RCTs). (Brodeur et al. 2022)

  • Avoid conditioning on post-treatment variables in experiments, as doing so can lead to biased estimates of causal effects. (Montgomery, Nyhan, and Torres 2018)

  • Utilise factorial experiments to explore causal interaction, which involves multiple treatments, rather than focusing solely on average treatment effects. (Egami and Imai 2018)

  • Ensure that pre-registration involves a pre-analysis plan (PAP) with sufficient detail to constrain your actions and decisions post-data collection, as this combination effectively reduces p-hacking and publication bias. (Karlan et al. 2016)

  • Avoid using the “short” model in factorial designs, which omits interaction terms, as it increases the chance of incorrect inference and compromises internal validity, and instead opt for the “long” model, which includes all interaction terms, despite its reduced power (see the sketch after this list). (Alatas et al. 2012)

  • Move away from focusing solely on overall average treatment effects and towards exploring heterogeneous treatment effects across subpopulations, thus providing deeper theoretical insight and allowing policymakers to better tailor treatments to specific groups. (Imai and Strauss 2011)

  • Always use pair matching wherever feasible in cluster randomized experiments, as it significantly increases efficiency and power, even in small samples. (Imai, King, and Nall 2009)

  • Critically appraise bias control in individual clinical trials, as the influence of different components like randomization, blinding, and follow-up cannot be accurately predicted. (Gluud 2006)

  • Ensure proper allocation concealment in your studies, as inadequate or unclear allocation concealment can lead to biased and exaggerated estimates of treatment effects. (Schulz 1995)

  • Ensure your study designs enable you to establish causality, while considering alternative explanations, and incorporate multiple measurements of the desired outcomes over time to increase confidence in the efficacy of the intervention. (NA?)

  • Aim to develop mechanistic models that capture the actual components, activities, and organizational features of the mechanism producing the observed phenomenon, rather than relying solely on phenomenal models or how-possibly models. (NA?)

  • Ensure that pre-registration of your studies always involves a pre-analysis plan (PAP) to effectively reduce p-hacking and publication bias. (NA?)
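
The “short” vs. “long” factorial-model warning above (Alatas et al. 2012) is easy to see in simulation. A minimal sketch with statsmodels on simulated data; variable names and effect sizes are illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 10_000
d = pd.DataFrame({"t1": rng.binomial(1, 0.5, n), "t2": rng.binomial(1, 0.5, n)})
d["y"] = 1.0 * d.t1 + 0.5 * d.t2 + 2.0 * d.t1 * d.t2 + rng.normal(size=n)

m_short = smf.ols("y ~ t1 + t2", data=d).fit()   # "short": omits the interaction
m_long = smf.ols("y ~ t1 * t2", data=d).fit()    # "long": includes all interactions
print(m_short.params["t1"])   # ~2.0: main effect absorbs part of the interaction
print(m_long.params["t1"])    # ~1.0: the effect of t1 when t2 = 0
```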

Randomized Controlled Trials (RCTs)

  • Utilise Randomised Controlled Trials (RCTs) wherever possible for accurate measurements of advertising effectiveness, as observational methods, even with extensive data sets, often fail to provide accurate estimations. (Pham and Shen 2017)

Counterfactual Reasoning And Potential Outcomes Model

  • Consider combining doubly robust methods with machine learning techniques to improve the accuracy of estimating average treatment effects in observational studies. (Tan et al. 2022)

  • Clearly state your causal question and outline the assumptions required for your chosen analytical approach to accurately address that question. (Cui et al. 2020)

  • Align your chosen method of causal inference with the underlying philosophical theory of causation that defines the truth conditions of your causal claim, ensuring that the method used is capable of producing evidence that meets the necessary criteria for establishing causality within that theoretical framework. (Rohlfing and Zuber 2019)

  • Utilize directed acyclic graphs (DAGs) to visually represent the underlying causal structure of your research question, allowing you to better identify and address issues of confounding and collider bias. (Pearce and Lawlor 2016)

  • Use the Covariate Balancing Propensity Score (CBPS) method to improve the balance of covariates between treatment and control groups, thereby reducing bias and increasing the efficiency of the IPTW estimator. (Tropp 2015)

  • Adopt an appropriate identification strategy, which involves selecting suitable assumptions and a corresponding research design, to address the inherent identification problem in causal inference. (L. Keele 2015)

  • Adopt a counterfactual model of causality, which involves considering potential outcomes under alternative scenarios, to ensure robust causal inferences in sociological studies. (Gangl 2010)

  • Carefully choose and apply appropriate propensity score-based balancing methods such as stratification, weighting, or matching to effectively control for confounding variables in observational studies, as different methods can yield significantly varying results. (Lunt et al. 2009)

  • Employ a nine-step diagnostic routine to evaluate whether your regression estimates accurately capture the average causal effect, taking into account potential heterogeneity in causal effects due to observable and unobservable factors. (Morgan and Todd 2008)

  • Carefully evaluate the potential impact of model dependency on your counterfactual inferences, particularly when extrapolating beyond the range of the original dataset. (King and Zeng 2006)

  • Aim to develop doubly robust estimators in your statistical analysis, as these estimators remain consistent when either a model for the missingness mechanism or a model for the distribution of the complete data is correctly specified, providing greater protection against potential model misspecifications (a minimal sketch appears at the end of this list). (Bang and Robins 2005)

  • Carefully consider the choice of reference treatment, multiple causal factors and causal mechanisms, statistical inference on counterfactual effects, and the importance of exchangeability in estimating causal effects. (Höfler 2005)

  • Utilise the propensity function - a generalisation of the propensity score - to infer causality in observational studies involving non-random treatment assignments. (Imai and Dyk 2004)

  • Avoid reliance on counterfactual reasoning for causal inference, instead opting for a Bayesian decision analytic approach that uses empirically testable and discoverable models. (Dawid 2000)

  • Adopt a counterfactual framework for estimating causal effects from observational data, taking into consideration the potential biases arising from differences in outcomes for treatment and control groups, and variations in treatment effects within these groups. (NA?)

  • Employ entropy balancing as a preprocessing technique to achieve covariate balance in observational studies with binary treatments, as it offers advantages such as improved balance, retention of valuable information, versatility, and computational efficiency. (NA?)
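
Several entries above recommend doubly robust estimation (Bang and Robins 2005; Tan et al. 2022). A minimal AIPW sketch on simulated data, using scikit-learn for the two nuisance models; the data-generating process and all names are illustrative:

```python
# AIPW is consistent if either the outcome model or the propensity model
# is correctly specified (here both are, for simplicity).
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(2)
n = 20_000
x = rng.normal(size=(n, 2))
t = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))       # confounded treatment
y = x[:, 0] + 0.5 * x[:, 1] + 1.0 * t + rng.normal(size=n)  # true ATE = 1.0

ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]  # propensity model
m1 = LinearRegression().fit(x[t == 1], y[t == 1]).predict(x)  # E[Y | X, T=1]
m0 = LinearRegression().fit(x[t == 0], y[t == 0]).predict(x)  # E[Y | X, T=0]

aipw = np.mean(m1 - m0
               + t * (y - m1) / ps
               - (1 - t) * (y - m0) / (1 - ps))
print(f"AIPW ATE estimate: {aipw:.3f}")                # close to 1.0
```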

Difference-in-Differences Approaches

Synthetic Control Methods

  • Leverage supplemental proxies not included in the construction of a synthetic control (SC) for identification of the SC weights, thereby enabling consistent estimation of the treatment effect. (Shi et al. 2021)

  • Use the synth_runner package to automate the process of running multiple synthetic control estimations using synth, allowing you to conduct placebo estimates in space, provide inference through comparison with the distribution of placebo effects, handle multiple treated units receiving treatment at different time periods, and generate diagnostic plots to evaluate the fit of the synthetic control. (Galiani and Quistorff 2017)

  • Utilise the synthetic control method to create a counterfactual scenario for comparisons in case studies, allowing you to infer causality more effectively (a minimal weight-estimation sketch follows this list). (Grier and Maynard 2016)

  • Utilise the Generalised Synthetic Control (GSC) estimator to improve your ability to accurately measure the impact of treatments in situations where the parallel trends assumption required by additive fixed effect models does not hold. (Powell 2016)

  • Use the Synthetic Control Method (SCM) to estimate the causal impact of democratization on child mortality, while controlling for various confounding factors like economic development, openness to trade, conflict, rural population, and female education. (Pieters et al. 2016)

  • Utilize the Synthetic Control Method (SCM) when evaluating the impacts of local policy innovations, particularly in situations where there are limited sample sizes or unique circumstances, as it provides a systematic and transparent means of selecting comparisons and estimating counterfactual scenarios. (Sills et al. 2015)
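
At its core, the synthetic control method solves a constrained least-squares problem: find nonnegative donor weights summing to one that reproduce the treated unit's pre-treatment outcome path. A minimal sketch with scipy on simulated data; dimensions and names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
T0, J = 20, 10                        # pre-treatment periods, donor units
donors = rng.normal(size=(T0, J))
true_w = np.array([0.6, 0.4] + [0.0] * (J - 2))
treated = donors @ true_w + 0.05 * rng.normal(size=T0)

loss = lambda w: np.sum((treated - donors @ w) ** 2)
res = minimize(loss, np.full(J, 1 / J), method="SLSQP",
               bounds=[(0, 1)] * J,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print(np.round(res.x, 2))             # weights concentrate on donors 0 and 1
```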

Instrumental Variables And Two-Stage Least Squares

  • Carefully evaluate the strength of your instruments, ensure accurate statistical inference for IV estimates, and implement additional validation exercises, such as placebo tests, to bolster the identifying assumptions in order to minimize biases and improve the reliability of your findings (a minimal 2SLS sketch follows this list). (Lal et al. 2023)

  • Avoid pretesting on the first-stage F-statistic in your instrumental variable (IV) analyses, as it exacerbates bias and distorts inference. Instead, opt for screening on the sign of the estimated first stage, which reduces bias while keeping conventional confidence interval coverage intact. (Angrist and Kolesár 2021)

  • Carefully evaluate the strength of your instruments, consider alternative inferential methods, and critically examine the validity of your identifying assumptions to ensure accurate and reliable causal inferences in instrumental variable studies. (Kang et al. 2020)

  • Utilise a binary Imbens-Angrist instrumental variable model with the monotone treatment response assumption to identify the joint distributions of potential outcomes among compliers in order to understand the percentage of persuaded individuals and their statistical characteristics. (Fu, Narasimhan, and Boyd 2020)

  • Ensure your instrumented difference-in-differences (DDIV) models meet specific exclusion, parallel trends, and monotonicity assumptions to accurately estimate a convex combination of average causal effects. (Nikolov and Adelman 2020)

  • Consider using spatial-two stage least squares (S-2SLS) as a general, conservative strategy when dealing with endogenous predictors and potential interdependence in the outcome variable, as it provides consistent estimates of the desired causal effect while accounting for possible outcome interdependence. (Betz, Cook, and Hollenbach 2019)

  • Prioritize the use of sharp instruments over merely strong ones, as sharper instruments enable more accurate complier predictions and tighter bounds on effects in identifiable subgroups, leading to improved understanding of causal relationships. (Kennedy, Balakrishnan, and G’Sell 2018)

  • Carefully consider and attempt to validate all four instrumental variable assumptions - relevance, exclusion restriction, exchangeability, and monotonicity - before applying the instrumental variable method in observational studies. (Lousdal 2018)

  • Consider the potential endogeneity of job satisfaction (JS) when examining its relationship with organizational commitment (OC), and adopt appropriate methods such as instrumental variable frameworks to mitigate bias. (Saridakis et al. 2018)

  • Pay close attention to trends, especially nonlinear ones, when using an interactive instrument strategy, as failure to account for these could lead to incorrect conclusions about causality. (Christian and Barrett 2017)

  • Be aware of the risk of coarsening bias when using instrumental variable (IV) methods, particularly when the treatment variable is coarsened into a binary indicator, as this can lead to upwardly biased estimates of the causal effect. (Marshall 2016)

  • Carefully consider and evaluate the three core assumptions of instrumental variable (IV) analysis before applying it to your work, particularly focusing on finding strong IVs that are highly associated with treatment allocation, not directly linked to the outcome, and not related to measured confounders. (Iwashyna and Kennedy 2013)

  • Prioritize adjusting for outcome-related covariates rather than exposure-related ones, as the former tend to produce lower bias and higher precision in estimates. (J. Pearl 2011)

  • Carefully choose between the two-stage predictor substitution (2SPS) and two-stage residual inclusion (2SRI) methods for estimating the causal odds ratio in instrumental variable logistic regression, depending on whether there is unmeasured confounding and its severity, as these methods exhibit varying degrees of bias under different scenarios. (Cai, Small, and Have 2011)

  • Use instrumental variable models for discrete outcomes, which provide set identification rather than point identification, and understand how the size of the identified set depends on the strength and support of the instruments. (“Instrumental Variable Models for Discrete Outcomes” 2010)

  • Consider using stylometric analysis, which involves examining the frequencies of function words and grammatical constructions in a text, to identify the authorship of ambiguous documents. (Stock and Trebbi 2003)

  • Focus on developing robust estimators of treatment parameters based on nonparametric or semiparametric identification procedures, rather than relying solely on distributional assumptions and functional form restrictions, to minimize bias and improve the accuracy of your findings. (Abadie 2000)
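
For reference, with a single instrument the 2SLS estimator reduces to the Wald ratio cov(z, y)/cov(z, x). A minimal simulated sketch; note that naive second-stage OLS standard errors are incorrect, so use a dedicated IV routine for inference:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
z = rng.normal(size=n)                     # instrument
u = rng.normal(size=n)                     # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)       # endogenous treatment
y = 1.0 * x + u + rng.normal(size=n)       # true causal effect = 1.0

ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)   # biased upward by u (~1.4)
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]   # 2SLS with one instrument (~1.0)
print(f"OLS: {ols:.2f}, 2SLS: {iv:.2f}")
```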

Instrumental Variable (IV) Analysis

  • Utilise Generalised Instrument Variables (GIV) methods to identify dynamic policy functions that align with instrumental variable (IV) constraints in dynamic models with serially correlated unobservables. (Berry and Compiani 2022)

  • Carefully evaluate the validity of proposed instrumental variables before using them to estimate the effects of Catholic schooling on various outcomes, as the commonly used instruments - religious affiliation and proximity to Catholic schools - do not appear to be reliable sources of identification in currently available datasets. (Altonji, Huang, and Taber 2015)

  • Carefully examine the validity of preference-based instrumental variable methods in observational studies, especially regarding the assumptions of independence and homogenous treatment effects, to ensure accurate estimation of treatment effects. (Brookhart and Schneeweiss 2007)

  • Consider using instrumental variable (IV) techniques to address selection bias issues in observational data sets, specifically when studying the effectiveness of mental health care interventions. (Hogan and Lancaster 2004)

  • Carefully consider the assumptions underlying your chosen statistical methods, such as linearity and normality, and explore alternative approaches, including semiparametric models, when necessary to address issues like endogeneity and heteroscedasticity. (n.d.)

Generalized Method Of Moments (GMM) For Causal Inference

  • Use the Generalized Method of Moments (GMM) approach to perform instrumental variables estimation in non-linear models, such as logistic regression, to address issues of endogeneity and achieve consistent parameter estimates. (Koladjo, Escolano, and Tubert-Bitter 2018)
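
A stylized sketch of the GMM mechanics for a logistic model with an instrument: solve the sample moment conditions E[(y − expit(a + b·x))·(1, z)] = 0. Here x is generated exogenously so the moments hold at the true parameters; the endogenous case requires the assumptions of the paper above:

```python
import numpy as np
from scipy.optimize import root
from scipy.special import expit

rng = np.random.default_rng(5)
n = 100_000
z = rng.normal(size=n)                        # instrument
x = z + rng.normal(size=n)                    # regressor (exogenous here)
y = rng.binomial(1, expit(-0.5 + 1.0 * x))    # true (a, b) = (-0.5, 1.0)

def moments(theta):
    a, b = theta
    r = y - expit(a + b * x)                     # generalized residual
    return np.array([r.mean(), (r * z).mean()])  # E[r] = 0, E[r z] = 0

sol = root(moments, x0=np.zeros(2))           # just-identified GMM: solve moments
print(np.round(sol.x, 2))                     # ~[-0.5, 1.0]
```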

Structural Equation Modeling

Structural Equation Modeling (SEM)

  • Recognize the importance of distinguishing between manipulable and non-manipulable variables in causal analysis, using Structural Causal Models (SCM) to identify and quantify the effects of non-manipulable factors such as obesity on health outcomes. (Judea Pearl 2018)

  • Utilize a factorial design in survey experiments to identify the overall average treatment effect and the controlled direct effect of treatment fixing a potential mediator, thereby avoiding the selection-on-observable assumption required in the standard mediation approach. (Acharya, Blackwell, and Sen 2018)

  • Utilise the structural causal model (SCM) approach to identify causal effects from multiple heterogeneous datasets, leveraging the principles of structural counterfactuals and structural independences, while carefully considering issues related to policy evaluation, sampling selection bias, and data fusion. (Bareinboim and Pearl 2016)

  • Avoid making arbitrary choices between existing democracy scales and instead opt for a cumulative approach that leverages the measurement efforts of numerous scholars through the creation of a unified democracy score (UDS) that improves confidence in estimates and reduces the impact of idiosyncratic errors. (Pemstein, Meserve, and Melton 2010)

  • Utilize nonparametric structural equation models (SEM) as a coherent mathematical foundation for analyzing causes and counterfactuals, providing a systematic methodology for defining, estimating, and testing causal claims in both experimental and observational studies (a small simulated example follows this list). (Judea Pearl 2010)

  • Differentiate between types of path dependence, utilizing a formal framework to enhance understanding of historical causality and improve empirical analysis. (S. E. Page 2006)

  • Carefully consider the type of world model you assume in your studies, as different models can lead to varying experimental predictions and outcomes. (Courville, Daw, and Touretzky 2006)

  • Seek to understand and incorporate social mechanisms in your analyses to help distinguish between causation and mere correlation, thus improving causal inference in social science. (Steel 2004)

  • Utilize structural equation models (SEM) to formalize causal assumptions, allowing them to derive closed-form expressions for target quantities, decide if your assumptions are sufficient for obtaining consistent estimates, and suggest additional observations or experiments to improve consistency. (Judea Pearl 2003)
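
A small simulated example of the structural-model logic above: in the SCM Z → X, Z → Y, X → Y, the naive contrast is confounded, while the backdoor adjustment formula recovers the interventional effect P(Y | do(X)). All parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500_000
z = rng.binomial(1, 0.5, n)
x = rng.binomial(1, 0.2 + 0.6 * z)            # Z -> X
y = rng.binomial(1, 0.1 + 0.3 * x + 0.4 * z)  # X -> Y and Z -> Y (true effect 0.3)

naive = y[x == 1].mean() - y[x == 0].mean()   # confounded by Z
adjusted = sum((y[(x == 1) & (z == v)].mean()
                - y[(x == 0) & (z == v)].mean()) * (z == v).mean()
               for v in (0, 1))               # backdoor adjustment over Z
print(f"naive: {naive:.3f}, adjusted: {adjusted:.3f} (truth: 0.300)")
```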

Mediation Analysis

  • Carefully consider the unique characteristics of compositional data, such as the unit-sum constraint and the need for appropriate transformations, when conducting mediation analyses involving high-dimensional and compositional mediators. (Sohn and Li 2019)

  • Consider using inverse odds ratio weighting (IORW) for conducting mediation analysis with multiple mediators, as it offers several advantages such as being universally applicable across different types of regression models, easy implementation, and ability to handle multiple mediators regardless of their scale. (Nguyen et al. 2015)

  • Utilize a six-step procedure to estimate natural direct and indirect effects through multiple mediators, while carefully evaluating the validity of the assumption of nonintertwined causal pathways. (Lange, Rasmussen, and Thygesen 2013)

  • Carefully consider and control for potential sources of bias in your mediation analyses, including mediator-outcome confounding, exposure-mediator interaction, and mediator-outcome confounding affected by the exposure. (Richiardi, Bellocco, and Zugna 2013)

  • Pay close attention to the assumptions required for identifying direct and indirect effects in mediation analysis, particularly regarding the need to control for confounding of the mediator-outcome relation. (VanderWeele and Vansteelandt 2010)

  • Carefully consider the assumptions required for separating direct and indirect effects, and utilize appropriate statistical techniques like the G-computation algorithm to minimize bias in your analyses. (NA?)
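
A minimal g-computation sketch for natural direct and indirect effects in the simplest linear case with no exposure-mediator interaction (where the NIE reduces to the product of coefficients); real analyses must also handle the confounding issues flagged in the bullets above:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 100_000
t = rng.binomial(1, 0.5, n)
m = 0.8 * t + rng.normal(size=n)                # mediator model, a = 0.8
y = 0.5 * m + 0.3 * t + rng.normal(size=n)      # outcome model, b = 0.5, c = 0.3

med = LinearRegression().fit(t.reshape(-1, 1), m)
out = LinearRegression().fit(np.column_stack([t, m]), y)

a = med.coef_[0]
c, b = out.coef_                                # coefficients on t and m
print(f"NDE ~ {c:.3f} (truth 0.3), NIE ~ {a * b:.3f} (truth 0.4)")
```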

Natural Experiments And Quasi-Experiments

Regression Discontinuity Design (RDD)

  • Utilise the “difference-in-discontinuities” approach in your analysis, particularly when dealing with geographical discontinuities. This method helps to account for issues such as compound treatment and sorting around the cutoff, providing more robust estimates of treatment effects. (Butts 2021)

  • Carefully choose between the continuity framework and the local randomization framework for analyzing regression discontinuity designs, considering factors like the nature of the score variable, the presence of covariates, and the desired level of precision in estimation and inference. (Arai, Otsu, and Seo 2021)

  • Use appropriate, modern analysis procedures, restrict focus to studies with sufficient power, and pre-register analysis plans when possible. (Stommes, Aronow, and Sävje 2021)

  • Utilise various identification strategies like regression discontinuity, synthetic control methods, and machine learning techniques to effectively separate correlation from causality in observational studies, thereby increasing the credibility of policy evaluations. (Athey and Imbens 2016)

  • Carefully evaluate the plausibility of the continuity assumption in geographic regression discontinuity (GRD) designs, as well as consider the presence of compound treatments, appropriate measures of distance from cutoffs, and spatial variation in treatment effects, to ensure robust causal inference. (L. J. Keele and Titiunik 2015)

  • Utilise eligibility rules for participation in a programme to assess the validity of non-experimental estimators of the programme effects. (Battistin and Rettore 2008)

  • Understand the distinction between the local randomization assumption and the continuity assumption in the regression discontinuity design, as this difference significantly impacts the validity of the design and the choice of appropriate statistical methods. (NA?)

  • Pay attention to the bias-variance trade-off inherent in regression discontinuity (RD) designs, as it can lead to inaccurate estimates and potentially biased results due to the limited availability of data near the cutoff. (NA?)
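
A minimal sharp-RD sketch tied to the bias-variance point above: fit local linear regressions on each side of the cutoff within a bandwidth h and difference the intercepts. The trade-off lives in the choice of h; practical work should use a data-driven bandwidth selector. Data and names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 20_000
r = rng.uniform(-1, 1, n)                        # running variable, cutoff at 0
t = (r >= 0).astype(float)
y = 0.5 * r + 2.0 * t + rng.normal(size=n)       # true jump at the cutoff = 2.0

h = 0.2                                          # hand-picked bandwidth
def intercept(side):
    m = side & (np.abs(r) < h)                   # local window on one side
    X = np.column_stack([np.ones(m.sum()), r[m]])
    beta, *_ = np.linalg.lstsq(X, y[m], rcond=None)
    return beta[0]                               # fitted value at the cutoff

print(f"RD estimate: {intercept(r >= 0) - intercept(r < 0):.3f}")   # ~2.0
```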

Difference-In-Differences (DiD) Analysis

  • Carefully select meaningful outcome measures and choose intervention and comparator populations wisely to ensure reliable results when conducting DiD analyses for complex interventions. (Round et al. 2013)

  • Use a difference-in-differences analysis when studying the impact of interventions like vacant lot greening on health and safety outcomes, comparing treated and control groups before and after the intervention, and employing robustness checks and alternative models to ensure validity. (Branas et al. 2011)
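
A minimal 2x2 difference-in-differences sketch on simulated data: under parallel trends, the coefficient on the treated-by-post interaction recovers the ATT. Names and effect sizes are illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 40_000
d = pd.DataFrame({"treated": rng.binomial(1, 0.5, n),
                  "post": rng.binomial(1, 0.5, n)})
d["y"] = (1.0 * d.treated + 0.5 * d.post        # group and time effects
          + 2.0 * d.treated * d.post            # true ATT = 2.0
          + rng.normal(size=n))

fit = smf.ols("y ~ treated * post", data=d).fit()
print(fit.params["treated:post"])               # ~2.0
```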

Graphical Models: Causal Directed Acyclic Graphs

  • Ensure your samples are representative of the population under investigation, particularly when considering sex-specific diseases like rheumatoid arthritis, where genetic factors may differ significantly between genders. (Wang and Lu 2022)

  • Clearly define your research question, draw a directed acyclic graph (DAG) to represent the causal and temporal relationships between variables, and carefully consider potential confounders, mediators, and effect modifiers to ensure accurate causal inference from observational data (see the sketch after this list). (Laubach et al. 2021)

  • Explicitly model your assumptions regarding the impact of intervening in a system when attempting to estimate causal effects, and that once this is done, causal estimation can be performed entirely within the standard Bayesian paradigm, making it possible to leverage the benefits of both Bayesian inference and the do-calculus. (Lattimore and Rohde 2019)

  • Utilise genetically informed methods for causal inference, such as family-based designs and Mendelian randomization, to enhance the validity of your conclusions in observational studies. (Deaton and Cartwright 2018)

  • Distinguish between “conditioning by intervention” and “conditioning by observation”, as they represent different types of conditional probability and are crucial for accurate causal inference. (“Advances in Knowledge Discovery and Data Mining” 2002)

  • Utilize structural causal models, represented visually via directed acyclic graphs (DAGs), to make your causal assumptions transparent and facilitate accurate causal inference in science studies. (NA?)
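
A minimal sketch of the DAG workflow above using networkx: encode the graph, then check the backdoor criterion by removing arrows out of the treatment and testing d-separation. (The function is nx.is_d_separator in NetworkX >= 3.3; older versions call it nx.d_separated.)

```python
import networkx as nx

g = nx.DiGraph([("Z", "T"), ("Z", "Y"), ("T", "M"), ("M", "Y")])
assert nx.is_directed_acyclic_graph(g)

# Backdoor criterion for the effect of T on Y: remove arrows out of T,
# then ask whether the conditioning set d-separates T and Y.
g_bd = g.copy()
g_bd.remove_edges_from(list(g.out_edges("T")))
print(nx.is_d_separator(g_bd, {"T"}, {"Y"}, set()))    # False: Z opens a backdoor
print(nx.is_d_separator(g_bd, {"T"}, {"Y"}, {"Z"}))    # True: {Z} suffices
# Conditioning on the mediator M would block the causal path itself,
# echoing the post-treatment warnings elsewhere in this syllabus.
```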

Synthetic Control Methods

Placebo Tests

  • Carefully select a rich set of features to approximate the underlying heterogeneity in order to ensure the consistency of the synthetic control method. (Arkhangelsky and Hirshberg 2023)

  • Adopt a design-based approach to synthetic control methods, treating the treatment assignment, rather than the outcomes, as stochastic, which provides a more natural starting point for many causal analyses (a placebo-inference sketch follows this list). (Bottmer et al. 2023)

  • Carefully consider the feasibility, data requirements, contextual requirements, and methodological issues related to the empirical application of synthetic controls, while characterizing the practical settings where synthetic controls may be useful and those where they may fail. (Abadie 2021)

  • Utilise “partially pooled” synthetic control method (SCM) weights to minimize a weighted combination of imbalance for each treated unit separately and the imbalance for the average of the treated units, avoiding bias caused by focusing solely on balancing one of these components. (Ben-Michael, Feller, and Rothstein 2021)

  • Adopt a Bayesian posterior predictive approach to Rubin's causal model for comparative case studies, utilizing a dynamic multilevel model with a latent factor term to correct biases and considering heterogeneous and dynamic relationships between covariates and the outcome, thereby improving the precision of causal estimates. (Pang, Liu, and Xu 2021)

  • Employ advanced machine learning techniques like random forest algorithms to accurately weather-normalize air pollution data, allowing for robust assessments of the impact of policy interventions like lockdowns on pollution levels. (Cole, Elliott, and Liu 2020)

  • Utilise the 'partially pooled' Synthetic Control Method (SCM) rather than 'Separate SCM' or 'Pooled SCM', because it balances both the unit-specific and pooled average pre-treatment outcomes, thereby reducing bias in the estimated Average Treatment Effect on the Treated (ATT). (Arkhangelsky and Imbens 2019)

  • Focus on achieving a perfect match on pre-treatment outcomes rather than attempting to match on covariates when implementing the Synthetic Control method, as the bias of the estimator can still be bounded even when the covariates of the treated unit are not in the convex hull of the covariates of the control unit. (Botosaru and Ferman 2019)

  • Consider applying synthetic control methods alongside traditional techniques when evaluating population-level health interventions, especially when randomized controlled trials aren't feasible, because these methods offer unique benefits including suitability for small sample sizes and independence from parallel pre-implementation trends. (Bouttell et al. 2018)

  • Create synthetic controls for every unit rather than just the treated unit, and use a two-step approach to generate predicted values of the outcome variables for each unit, which together allow for more accurate estimation of policy effects. (Powell 2018)

  • Avoid using the cross-validation technique in synthetic control methods because it is not well-defined, leading to ambiguous estimates of the treatment effect. (Klößner et al. 2018)

  • Utilise a synthetic control approach to estimate the effects of foreign exchange interventions, particularly in cases where a large change in intervention policy is announced. (Chamon, Garcia, and Souza 2017)

  • Avoid cherry picking among various specifications of the Synthetic Control (SC) method, especially when the number of pre-treatment periods is small or moderate, as this can create significant opportunities for specification searching and compromise the credibility of the results. (Adhikari and Alm 2016)

  • Exercise caution when interpreting the identification assumptions required for the Synthetic Control (SC) method, as the SC estimator can be biased if treatment assignment is correlated with unobserved confounders, even when the number of pre-treatment periods is infinite and in settings where one expects an almost perfect pre-treatment fit. (duPont and Noy 2015)

  • Utilise synthetic control methods (SCM) to create a 'synthetic' control unit that closely matches the 'treated' unit in the pre-treatment period. This methodology provides a systematic and transparent means of constructing an appropriate counterfactual, avoiding the ambiguity associated with choosing comparison groups based on subjective measures of affinity. Additionally, SCM protects against extrapolation issues common in traditional regression models and accounts for time-variant country characteristics. (Feenstra 2013)
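
A minimal sketch of placebo-in-space inference (the design-based logic referenced above): re-run the estimator on every donor and locate the treated unit's post/pre RMSPE ratio in the placebo distribution. The "gaps" below are hypothetical stand-ins for unit-minus-synthetic-control differences:

```python
import numpy as np

rng = np.random.default_rng(10)

def rmspe_ratio(pre, post):
    return np.sqrt(np.mean(post ** 2)) / np.sqrt(np.mean(pre ** 2))

# Hypothetical gaps; row 0 is the treated unit, rows 1..20 are placebo runs.
gaps_pre = rng.normal(size=(21, 20))
gaps_post = rng.normal(size=(21, 10))
gaps_post[0] += 3.0                                   # a real treatment effect

ratios = np.array([rmspe_ratio(gaps_pre[i], gaps_post[i]) for i in range(21)])
p = np.mean(ratios >= ratios[0])                      # rank-based placebo p-value
print(f"placebo p-value: {p:.3f}")                    # 1/21 ~ 0.048 here
```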

Matching Methods For Causal Inference

  • Carefully design list experiments to avoid potential violations of underlying assumptions, and utilize advanced statistical methods such as multivariate regression estimators to improve the efficiency and accuracy of your analyses. (Blair and Imai 2012)

  • Utilise the newly introduced 'Monotonic Imbalance Bounding' (MIB) class of matching methods for causal inference, as they possess a wide range of statistically beneficial properties and can significantly enhance inferences compared to the previously established 'Equal Percent Bias Reducing' (EPBR) based matching methods. (Iacus, King, and Porro 2011)

Nearest Neighbor Matching

  • Exercise caution when attempting to estimate the full dose-response function (DRF) with a continuous treatment in an observational study, as even in the simplest settings, standard methods may exhibit unacceptable statistical properties. (Zhao, Dyk, and Imai 2020)
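
For concreteness, a minimal sketch of the most common workflow in this family: 1:1 nearest-neighbor matching on an estimated propensity score (with replacement, no caliper) to estimate the ATT on simulated data. All names and parameters are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(11)
n = 20_000
x = rng.normal(size=(n, 2))
t = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))       # confounded treatment
y = x[:, 0] + 1.0 * t + rng.normal(size=n)            # true effect = 1.0

ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1].reshape(-1, 1)
nn = NearestNeighbors(n_neighbors=1).fit(ps[t == 0])  # index the control units
_, idx = nn.kneighbors(ps[t == 1])                    # closest control per treated
att = np.mean(y[t == 1] - y[t == 0][idx.ravel()])
print(f"matched ATT estimate: {att:.3f}")             # ~1.0
```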

Adjustment For Selection Bias

  • Maximize transparency in reporting reaction time data pre-processing steps to ensure reproducibility and valid interpretation of results. (Loenneker et al. 2024)

  • Consider the role of data-sharing policies and data types in influencing the presence of p-hacking and publication bias in your analyses. (Brodeur, Cook, and Neisser 2024)

  • Be aware of publication bias and p-hacking when interpreting study findings, and mitigate these issues through careful experimental design, transparent reporting practices, and rigorous statistical analysis. (Brodeur et al. 2023)

  • Utilize the caliper test when comparing the number of test statistics in a narrow range above and below a statistical significance threshold, as it enables you to control for various factors such as co-editor handling, manuscript quality, and author characteristics (a minimal sketch appears at the end of this list). (Brodeur et al. 2023)

  • Utilise a causal framework for cross-cultural research: specify your theoretical estimand, create a generative model of the evidence, generate a model of how populations may differ, and then devise a tailored estimation strategy to enable learning from the data. (Deffner, Rohrer, and McElreath 2022)

  • Consider setting aside a 'placebo sample', a subset of data immune from the hypothesised cause-and-effect relationship, to help identify and mitigate biases in observational studies. (Ye and Durrett 2022)

  • Carefully consider the choice of aggregation method when creating democracy indices, as different methods can lead to varying levels of accuracy and potential bias in subsequent statistical analyses. (Gründler and Krieger 2022)

  • Use causal diagrams to articulate theoretical concepts and their relationships to estimated proxies, then apply established rules to determine which conclusions can be rigorously supported, allowing for valid tests for the existence and direction of theorized effects even with imperfect proxies. (Duarte et al. 2021)

  • Carefully consider and control for publication bias, experimental design, and subject pool characteristics when conducting meta-analyses of individual discount rates, as these factors significantly affect the variability in reported estimates. (Matousek, Havranek, and Irsova 2021)

  • Utilise a conditional independence assumption to account for the unobservable nature of pre-treatment Covid-19 cases, allowing for the estimation of the impact of Covid-related policies on observed outcomes. (Callaway and Li 2020)

  • Avoid selecting cases solely based on the dependent variable, as doing so can introduce selection bias and potentially invalidate the conclusions drawn. (Noe 2020)

  • Aim to minimise the risk of bias due to missing results in meta-analyses by employing comprehensive search strategies, utilising prospective approaches, and applying robust methods for assessing the risk of bias. (M. J. Page et al. 2020)

  • Avoid dropping subjects based on a manipulation check, as it can lead to biased estimates or undermine identification of causal effects, and instead focus on intent-to-treat effects, which are point identified. (Aronow, Baron, and Pinson 2019)

  • Distinguish between exploratory and confirmatory objectives in your studies, recognizing that exploratory analyses require flexibility and may yield biased statistical inferences, whereas confirmatory analyses demand rigid prespecification and allow for valid statistical inferences. (Tong 2019)

  • Structure your tests of design so that the responsibility lies with you to positively demonstrate that the data are consistent with your identification assumptions or theory, starting from the initial hypothesis that the data are inconsistent with a valid research design, and rejecting this hypothesis only given sufficient statistical evidence in favor of data consistent with a valid design. (Hartman and Hidalgo 2018)

  • Leverage exogenous variations in the proportion of individuals sampled (θ) to identify the marginal sampling efficiency, which can help you infer information about the population mean (μ) and improve the accuracy of your estimates. (Burger and McLaren 2017)

  • Avoid using lagged explanatory variables as a way to deal with endogeneity, since it merely shifts the potential for bias to a different point in the data generation process, and instead consider alternative methods such as randomized controlled trials, field experiments, instrumental variables, regression discontinuity, and differences-in-differences estimation. (Bellemare, Masaki, and Pepinsky 2017)

  • Carefully balance the need for reducing confounding variables against maintaining sufficient statistical power to prevent exaggeration of effect sizes, especially in cases where publication bias favours statistically significant results. (Athey and Imbens 2016)

  • Avoid conditioning on posttreatment variables in order to prevent introducing bias and instead utilize the controlled direct effect (CDE) to accurately assess causal mechanisms. (ACHARYA, BLACKWELL, and SEN 2016)

  • Consider using pre-analysis plans (PAPs) to minimize issues of data and specification mining, and to provide a record of the full set of planned analyses. (“Editorial Statement on Negative Findings” 2015)

  • Utilise the Covariate Balancing Propensity Score (CBPS) methodology to estimate the inverse probability weights for Marginal Structural Models (MSMs) in longitudinal data analysis. This methodology ensures better covariate balance and reduces sensitivity to misspecifications in the treatment assignment model, thus enhancing the robustness and accuracy of the causal inferences drawn from the MSMs. (Imai and Ratkovic 2015)

  • Utilise either systematic replication studies or meta-studies to identify the conditional probability of publication as a function of a study's results, thereby enabling you to correct for selective publication bias and improve the accuracy of your inferences. (Fithian, Sun, and Taylor 2014)

  • Exercise caution when conditioning on all observed covariates, as this approach might inadvertently introduce bias due to M-bias structures, which can be identified and avoided using graphical methods. (Judea Pearl 2013)

  • Be cautious when making causal inferences involving gestational age as a mediating variable, since it can introduce significant bias due to unmeasured confounding factors. (Wilcox, Weinberg, and Basso 2011)

  • Use Normalization Process Theory (NPT) as a framework to analyze and understand the dynamic processes involved in the implementation of complex interventions and health technologies, focusing on the four theoretical constructs of sense-making, cognitive participation, collective action, and reflexive monitoring. (May et al. 2011)

  • Pay close attention to the potential impact of differential measurement error on your analyses, particularly in situations where the treatment variable is measured after the outcome occurs or where both the outcome and measurement error are correlated with an unobserved variable. (Imai and Yamamoto 2010)

  • Carefully consider the impact of information disclosure on the equilibrium of a system, as the choice of transcript structure can affect the distribution of desirabilities of positions to which students are matched in the job market. (Ostrovsky and Schwarz 2010)

  • Carefully consider the potential for selection bias when using restricted source populations in cohort studies, particularly when the exposure and risk factor are strongly associated with selection and the unmeasured risk factor is associated with a high disease hazard ratio. (Pizzi et al. 2010)

  • Carefully consider potential biases throughout the entire process of conducting a systematic review, including searching for evidence, selecting studies, obtaining accurate data, and combining studies, to ensure robust and reliable conclusions. (Tricco et al. 2008)

  • Recognize the complexity of causal relationships, where multiple factors contribute to a particular outcome, and carefully consider potential confounding variables and alternative explanations when interpreting epidemiological studies. (Rothman and Greenland 2005)

  • Enhance the validity and cross-cultural comparability of measurement in survey research by utilizing anchoring vignettes to estimate and correct for differential item functioning (DIF) in survey responses. (KING et al. 2004)

  • Report all results, including alternative analyses and comparisons, to minimize publication bias in situ (PBIS) and ensure the validity of the health science literature. (Phillips 2004)

  • Ensure complete and transparent reporting of all outcomes, including those that are not statistically significant, to avoid outcome reporting bias in randomized trials. (Chan 2004)

  • Employ an 'information-scoring' system that encourages truthful answers by assigning high scores to answers that are more common than collectively predicted, thereby removing bias towards consensus and allowing for accurate evaluation of subjective data. (Prelec 2004)

  • Ensure consistency between your theoretical models and statistical models, particularly regarding the functional relationships between dependent and independent variables, to avoid issues such as strategic misspecification and omitted variable bias. (NA?)

  • Use a maximum-likelihood estimator for selection models with dichotomous dependent variables when identical factors affect the selection equation and the equation of interest, allowing you to avoid making distributional assumptions about the residuals alone or adding a theoretically unjustified variable to the selection equation. (NA?)

  • Prioritize transparent communication and collaboration with other scientists, particularly during the replication phase, while recognizing that transparency alone may not guarantee accurate results due to inherent limitations in experimental designs and data quality. (NA?)
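
A minimal sketch of the caliper test referenced earlier in this list: compare counts of test statistics just above and just below the significance threshold, where honestly generated statistics should split roughly 50/50. The data here are hypothetical:

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(12)
z = np.abs(rng.normal(0, 2, size=50_000))       # hypothetical |z|-statistics
caliper = 0.10
window = z[np.abs(z - 1.96) < caliper]          # narrow band around 1.96
above = int((window > 1.96).sum())
res = binomtest(above, n=len(window), p=0.5)    # excess mass above the bar?
print(above, len(window), res.pvalue)           # honest data: p usually large
```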

Causal Machine Learning

  • Carefully distinguish between different types of covariates - those that are common causes of treatment and outcome, treatment-inducing confounding proxies, and outcome-inducing confounding proxies - in order to effectively address issues of unmeasured confounding and improve causal inference in observational studies. (Cui et al. 2023)

  • Utilize the DoWhy library, which offers a comprehensive approach to causal inference by guiding users through the four essential steps of modeling, identifying, estimating, and refuting causal effects, while also incorporating robustness checks and sensitivity analyses (see the sketch after this list). (Sharma and Kiciman 2020)

  • Acknowledge the limitations of measurable covariates as proxies for true confounding mechanisms and use proximal causal learning techniques to improve causal inferences in situations where traditional exchangeability assumptions fail. (Tchetgen et al. 2020)

  • Focus on developing and implementing computationally intensive automated search algorithms over large search spaces to overcome challenges like large numbers of variables, small sample sizes, and potential presence of unmeasured causes in causal inference studies. (Cinelli and Hazlett 2019)

  • Follow four experimental design principles when investigating constraints on unsupervised category learning: providing extensive practice, ensuring an existing underlying category structure, avoiding binary-valued dimensions, and comparing the ability to learn unidimensional versus nondimensional rules. (Ashby, Queller, and Berretty 1999)

  • Consider treating the 'Take the Best' (TTB) model as a sequential-sampling process that stops once any evidence in favour of a decision is discovered, while the 'Rational' (RAT) model is treated as a sequential-sampling process that ceases only after all available information has been evaluated. (NA?)

  • Utilize meta-theoretic or meta-modelling techniques to create theoretically grounded experimental designs that can effectively distinguish between competing hypotheses, thereby enabling stronger inferences and more accurate predictions. (NA?)

  • Carefully control for acoustic factors when investigating the impact of statistical learning on word segmentation in infants, as demonstrated by the authors' use of ten different three-syllable pseudo-words with equal transitional probabilities and no morphological or prosodic cues to word boundaries. (NA?)

  • Consider treating error monitoring as a decision process where participants make judgements based on imperfect evidence, allowing you to distinguish between accumulated decision evidence and categorical decision output. (NA?)
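
A minimal sketch of the DoWhy workflow referenced above (model, identify, estimate, refute), assuming the API of recent dowhy releases and using simulated data with a single common cause:

```python
import numpy as np
import pandas as pd
from dowhy import CausalModel

rng = np.random.default_rng(13)
n = 5_000
z = rng.normal(size=n)                          # common cause of t and y
t = rng.binomial(1, 1 / (1 + np.exp(-z)))
y = z + 1.0 * t + rng.normal(size=n)            # true effect of t is 1.0
df = pd.DataFrame({"z": z, "t": t, "y": y})

model = CausalModel(data=df, treatment="t", outcome="y", common_causes=["z"])
estimand = model.identify_effect()              # backdoor: adjust for z
estimate = model.estimate_effect(
    estimand, method_name="backdoor.linear_regression")
refutation = model.refute_estimate(
    estimand, estimate, method_name="random_common_cause")
print(estimate.value)                           # ~1.0
print(refutation)                               # estimate should barely move
```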

Causal Discovery Algorithms

  • Carefully select your target population, design appropriate treatments, utilize suitable randomization strategies, and accurately measure outcomes to effectively investigate social influence in networks through randomized experiments. (Taylor and Eckles 2017)

PC Algorithm

  • Utilize “generalized Bayesian networks” to analyze non-classical correlations in arbitrary causal structures, allowing for resources from any generalized probabilistic theory, such as quantum theory or Popescu-Rohrlich boxes. (NA?)

Causality With Multiple Treatments Or Groups

  • Use causal graphs to explicitly define your identification strategy and assess which estimated parameters can be given causal interpretations, rather than blindly assigning causal meaning to all coefficients in a regression model. (L. Keele, Stevenson, and Elwert 2019)

Causality In Longitudinal Studies

  • Aim to construct rigorous research designs that can distinguish causal effects of interest from other potential explanatory factors, such as historical legacies, structural, geographical, or institutional factors, in order to establish credible causal inferences in historical persistence studies. (Cirone and Pepinsky 2022)

  • Consider adopting the proximal causal inference (PCI) framework for longitudinal studies, especially when the sequential randomization assumption (SRA) cannot be met due to unmeasured time-varying confounding. (Ying et al. 2021)

  • Carefully consider the assumptions underlying your chosen statistical models, particularly when dealing with time-series cross-sectional (TSCS) data, and explore alternative methods such as structural nested mean models (SNMMs) and marginal structural models with inverse probability of treatment weighting (MSMs with IPTWs) to minimize bias and improve the accuracy of your findings (an IPTW sketch follows this list). (BLACKWELL and GLYNN 2018)

  • Carefully distinguish between the overall effect of exposure on disease and its direct effect, especially when dealing with longitudinal studies involving time-dependent covariates that might act as both confounders and intermediate variables. (NA?)
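
A minimal one-period IPTW sketch for the marginal structural model approach referenced above: weight each unit by P(T = t)/P(T = t | X) (stabilized weights), then fit a weighted outcome regression. Real longitudinal applications repeat this per time period and multiply the weights; all names here are illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(14)
n = 50_000
x = rng.normal(size=n)
t = rng.binomial(1, 1 / (1 + np.exp(-x)))              # confounded treatment
y = x + 1.0 * t + rng.normal(size=n)                   # true effect = 1.0

X2 = x.reshape(-1, 1)
ps = LogisticRegression().fit(X2, t).predict_proba(X2)[:, 1]
pt = t.mean()
w = np.where(t == 1, pt / ps, (1 - pt) / (1 - ps))     # stabilized weights

d = pd.DataFrame({"y": y, "t": t, "w": w})
msm = smf.wls("y ~ t", data=d, weights=d.w).fit()      # weighted MSM regression
print(msm.params["t"])                                 # ~1.0
```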

Panel Data Analysis For Causality

  • Utilise a nonlinear two-way fixed effects panel model that permits unobserved individual heterogeneity in slopes interacting with covariates and an unknown, flexibly specified link function to accurately capture the distributional causal effects of covariates and avoid potential misspecification errors due to imposing a known link function. (D’Haultfœuille et al. 2022)

Causality Under Missing Data

  • Carefully consider the potential missingness mechanisms (MCAR, MAR, or NMAR) in your study designs and choose appropriate statistical methods accordingly, while utilizing graphical representations (such as m-graphs) to visualize and communicate these assumptions. (Thoemmes and Mohan 2015)

  • Carefully consider the possibility of non-ignorable missing data in randomized experiments, and explore alternative identification and estimation strategies to address this issue, such as the proposed method based on the assumption of non-ignorable missing outcomes. (Imai 2008)

  • Separate the identification and statistical components of inference, enabling you to establish a domain of consensus among those holding different views on appropriate assumptions while making clear the limitations of the available data. (“Partial Identification of Probability Distributions” 2003)

Robustness Checks For Causal Inference

  • Avoid using the conservative (QCA-CS) and intermediate (QCA-IS) solution types of Qualitative Comparative Analysis (QCA) due to their tendency to introduce artificial data that can lead to significant causal fallacies, instead opting for the parsimonious solution type (QCA-PS), which does not suffer from this issue. (Thiem 2019)

  • Carefully evaluate and address the sources of estimation error in your studies, including sample selection biases, treatment imbalances, and the impact of observed and unobserved covariates, to improve the accuracy and reliability of your causal inferences. (Imai, King, and Stuart 2008)

  • Carefully select appropriate proxies for your variables, ensure that your proxies accurately represent the underlying concept, and avoid making unfounded assumptions about causality. (Collier, Hoeffler, and Söderbom 2004)

Sensitivity Analysis

  • Perform sensitivity analyses to estimate the potential impact of unmeasured confounders on the measured causal association between a binary exposure and a binary outcome, particularly in non-randomized studies. (Groenwold et al. 2009)
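
A minimal sketch of external-adjustment sensitivity analysis in the spirit of the recommendation above, using the classic bias-factor formula for a binary unmeasured confounder; all parameter values are illustrative:

```python
# Bias factor B = (p1*(g - 1) + 1) / (p0*(g - 1) + 1), where g is the
# confounder-outcome risk ratio and p1, p0 are the confounder prevalences
# among the exposed and unexposed; the adjusted RR is RR_obs / B.
def adjusted_rr(rr_obs, g, p1, p0):
    bias = (p1 * (g - 1) + 1) / (p0 * (g - 1) + 1)
    return rr_obs / bias

rr_obs = 1.8                                   # observed risk ratio
for g, p1, p0 in [(2.0, 0.6, 0.3), (3.0, 0.6, 0.3), (4.0, 0.8, 0.2)]:
    print(f"g={g}, p1={p1}, p0={p0} -> adjusted RR = "
          f"{adjusted_rr(rr_obs, g, p1, p0):.2f}")
# A strong enough confounder (last row) could fully explain the association.
```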

Empirical Applications And Case Studies

  • Adopt the Registered Report (RR) publication format to reduce publication bias and increase transparency in your research, ultimately improving the validity and reliability of your findings. (Scheel, Schijen, and Lakens 2021)

  • Consider the unique challenges and opportunities presented by text data in causal inference, including the need for new assumptions to ensure valid causal inferences, and the potential benefits of integrating causal formalisms to improve the reliability and interpretability of NLP methods. (Adragna et al. 2020)

  • Recognize the limitations of relying solely on statistical data and incorporate subjective factors and causal models (often implemented as Bayesian networks) to enhance the validity and applicability of your findings. (Fenton and Neil 2018)

  • Focus on developing a comprehensive mathematical model that integrates various aspects of scientific discovery, including hypothesis formation, replication, publication bias, and variations in research quality, to enhance the reliability of research findings. (McElreath and Smaldino 2015)

  • Aim to conduct rigorous experiments or quasi-experiments whenever possible, as these approaches tend to provide stronger inferences than large-scale regression analyses. (Beck 2010)

  • Recognize the inherent limitations and advantages of case study methods compared to non-case study methods, and appreciate the value of combining both approaches to enhance the validity and robustness of findings. (GERRING 2004)

  • Carefully evaluate the relationship between counterfactual scenarios and estimated conditional probabilities, recognizing the potential impact of random factors and measurement errors on causal inferences. (Sekhon 2004)

  • Consider controlling for multiple factors when selecting stimuli, including imagery, concreteness, frequency, and orthography, to ensure accurate representation of the target population and reduce potential biases. (Friendly et al. 1982)

  • Be aware of the potential for inflated replication power estimates caused by the non-linear nature of the replication probability function, particularly when dealing with low-powered original studies. (Armitage, McPherson, and Rowe 1969)

  • Carefully consider the unique ethical, methodological, and data validity challenges posed by internet-based research, including subject recruitment, informed consent, subject anonymity, data security, and generalizability. (NA?)

Applications In Economics

  • Carefully justify the parallel trends assumption for the specific functional form chosen for the analysis, especially in settings where treatment is not (as-if) randomly assigned, as the sensitivity of parallel trends to functional form depends on the underlying structure of the population. (Roth and Sant’Anna 2023)

  • Consider conducting randomized experiments to investigate the impact of e-filing adoption on tax compliance costs, tax payments, and bribe payments, particularly in developing countries where traditional in-person tax submission processes may create opportunities for corruption. (Okunogbe and Pouliquen 2022)

  • Carefully consider the trade-offs between control, context, and representativeness when selecting the appropriate type of experiment (or sequence of complementary experiments) for your studies, taking into account the specific goals and requirements of your investigation. (Palm-Forster and Messer 2021)

  • Utilise an event study and Regression Discontinuity Design (RDD) framework to enable the estimation of causal effects when studying the impact of EIP-1559 on blockchain characteristics, whilst controlling for confounding factors like price volatility, network instability, and time trends. (Ante 2021)

  • Be aware of the potential for p-hacking and publication bias in your analysis, particularly when using certain methods such as Instrumental Variables (IV) and Difference-in-Differences (DID), and consider employing Randomized Control Trials (RCT) or Regression Discontinuity Design (RDD) instead, as these methods seem to produce more reliable results. (Brodeur, Cook, and Heyes 2020)

  • Consider using alternative loan demand controls, such as industry-location-size-time fixed effects, to improve the accuracy of your estimates when analyzing supply-related banking shocks, especially when dealing with samples containing a high percentage of single-bank firms. (Jakovljević, Degryse, and Ongena 2020)

  • Utilise the ‘Marginal Value of Public Funds’ (MVPF) framework to map empirical estimates of causal effects of a public expenditure or tax change to welfare analysis of that policy change. (Finkelstein and Hendren 2020)
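
A minimal sketch of the MVPF arithmetic: beneficiaries' willingness to pay divided by the net cost to the government (the program outlay plus any fiscal externalities). The function and numbers below are illustrative, not taken from the paper.

```python
def mvpf(willingness_to_pay: float, mechanical_cost: float,
         fiscal_externality: float) -> float:
    """Marginal Value of Public Funds: WTP per dollar of net government
    cost. A policy that more than pays for itself has an infinite MVPF."""
    net_cost = mechanical_cost + fiscal_externality
    return float("inf") if net_cost <= 0 else willingness_to_pay / net_cost

# Hypothetical training program: recipients value it at $120 per $100 of
# outlay, and induced earnings gains return $30 of that outlay in taxes.
print(round(mvpf(120, 100, -30), 2))  # 1.71 dollars of benefit per net dollar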

  • Consider utilizing factorial randomized control trials alongside indirect survey techniques like endorsement and randomized response experiments to accurately measure sensitive topics like combatant support in wartime settings. (LYALL, ZHOU, and IMAI 2019)

  • Consider conducting meta-analyses of randomized controlled trials (RCTs) to assess the effectiveness of nudges in improving tax compliance, taking into account the heterogeneous nature of nudge designs and the potential for publication bias. (Antinyan and Asatryan 2019)

  • Focus on the concavity or convexity of policy functions when studying distributional comparative statics, as this is crucial for understanding how changes in exogenous distributions affect endogenous distributions in models with optimizing agents. (Jensen 2017)

  • Consider the unique features of cumulative discovery within the academic research community, focusing on the role of intellectual freedom and exploration in driving innovation, and evaluating the impact of openness on both the level and composition of follow-on research. (Murray et al. 2016)

  • Carefully consider the impact of search frictions on participation decisions in skilled labor markets, as these frictions can lead to acceptance-constrained equilibria where matching concerns, rather than investment costs, deter individuals from investing and participating. (Bidner, Roger, and Moses 2016)

  • Utilize the Synthetic Control Method (SCM) to mitigate endogeneity issues and provide more accurate estimates of the causal effects of universities on regional economic development. (Bonander et al. 2016)
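
At its core, the SCM chooses nonnegative donor weights that sum to one and best reproduce the treated unit's pre-treatment trajectory. A minimal sketch with simulated data (the arrays are hypothetical):

```python
# Sketch of the core SCM step: nonnegative donor weights summing to one
# that best reproduce the treated unit's pre-treatment outcomes.
# Y0 (pre-periods x donors) and y1 are simulated placeholders.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T0, J = 20, 15                         # pre-treatment periods, donor units
Y0 = rng.normal(size=(T0, J)).cumsum(axis=0)
true_w = np.array([0.6, 0.4] + [0.0] * (J - 2))
y1 = Y0 @ true_w + rng.normal(0, 0.05, T0)

res = minimize(lambda w: np.mean((y1 - Y0 @ w) ** 2),
               x0=np.full(J, 1 / J), method="SLSQP",
               bounds=[(0, 1)] * J,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
print(np.round(res.x, 2))              # weight concentrates on donors 0 and 1
```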

  • Utilise a difference-in-differences propensity score matching approach to evaluate the causal effect of foreign acquisitions on wages, allowing for significant heterogeneity in the post-acquisition wage effect depending on the nationality of the foreign acquirer and the skill group of workers. (Girma and Görg 2016)

  • Utilise a combination of Vector Autoregressive (VAR) analysis and High Frequency Identification (HFI) methods when studying the relationship between monetary policy and economic activity. This combined approach helps to overcome issues related to simultaneity and endogeneity, providing a more robust understanding of the dynamics involved. (Gertler and Karadi 2015)
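
A stylized sketch of the external-instrument logic behind this combination: fit a VAR, then use a high-frequency surprise series as an instrument for the policy-equation residual. This is a simplification of the full proxy-SVAR procedure, and every series below is a simulated placeholder, not the paper's data.

```python
# External-instrument sketch: regress the policy residual on the surprise
# series (first stage), then regress the output residual on the fitted
# policy shock (second stage), as in two-stage least squares.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
n = 200
shock = rng.normal(size=n)                       # latent policy shock
data = pd.DataFrame({
    "rate": shock + rng.normal(0, 0.3, n),
    "output": -0.5 * shock + rng.normal(0, 1, n),
})
surprise = shock + rng.normal(0, 0.5, n)         # high-frequency instrument

resid = pd.DataFrame(np.asarray(VAR(data).fit(2).resid), columns=data.columns)
first = sm.OLS(resid["rate"], sm.add_constant(surprise[2:])).fit()
second = sm.OLS(resid["output"], sm.add_constant(first.fittedvalues)).fit()
print(round(second.params.iloc[1], 2))           # near -0.5 under this DGP
```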

  • Pool synthetic control estimates in settings with recurring treatment and variable treatment intensity, converting the estimates to elasticities by scaling them by the size of the minimum wage changes, and then aggregating these elasticities across events. (Dube and Zipperer 2015)

  • Utilize conjoint analysis to enable the decomposition of composite treatment effects in survey experiments, allowing for the simultaneous estimation of the causal effects of multiple treatment components. (Hainmueller, Hopkins, and Yamamoto 2014)
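
In practice, average marginal component effects (AMCEs) can be estimated by regressing the choice outcome on dummies for the randomized attribute levels, clustering by respondent. A minimal sketch with hypothetical attributes and simulated responses:

```python
# Sketch: AMCEs as OLS coefficients on randomly assigned attribute levels,
# with standard errors clustered by respondent. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "respondent": rng.integers(0, 500, n),
    "education": rng.choice(["hs", "college"], n),
    "experience": rng.choice(["low", "high"], n),
})
df["chosen"] = (0.3 + 0.15 * (df["education"] == "college")
                + 0.05 * (df["experience"] == "high")
                + rng.normal(0, 0.3, n) > 0.5).astype(int)

fit = smf.ols("chosen ~ C(education, Treatment('hs')) "
              "+ C(experience, Treatment('low'))", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["respondent"]})
print(fit.params.round(3))   # each coefficient estimates an AMCE vs. baseline
```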

  • Consider the potential impact of future rent expectations on behavior, as demonstrated by the authors’ finding that the “golden goose” effect led to a reduction in corruption by approximately 64% in the context of India’s employment guarantee scheme. (Niehaus and Sukhtankar 2013)

  • Utilize fixed effects models when analyzing longitudinal data to effectively control for time-invariant confounding, thereby improving causal estimation. (Gunasekara et al. 2013)
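
A small simulation sketch of why this works: a time-invariant confounder biases pooled OLS, while adding unit fixed effects recovers the true effect (all variable names are illustrative):

```python
# Sketch: unit fixed effects absorb time-invariant confounders in panel
# data. "ability" shifts both treatment and outcome; pooled OLS is biased,
# the within (fixed effects) estimator is not.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_units, n_periods = 200, 5
unit = np.repeat(np.arange(n_units), n_periods)
ability = np.repeat(rng.normal(size=n_units), n_periods)  # time-invariant
x = 0.8 * ability + rng.normal(size=unit.size)
y = 1.0 * x + 2.0 * ability + rng.normal(size=unit.size)
df = pd.DataFrame({"unit": unit, "x": x, "y": y})

pooled = smf.ols("y ~ x", data=df).fit()
within = smf.ols("y ~ x + C(unit)", data=df).fit()
print(round(pooled.params["x"], 2), "(biased upward)")
print(round(within.params["x"], 2), "(close to the true effect of 1.0)")
```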

  • Use regime-switching models to capture the varying effects of fiscal policy across different stages of the business cycle, allowing for more accurate estimation of fiscal multipliers. (Auerbach and Gorodnichenko 2012)
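
One hedged way to implement this in Python is a two-regime Markov-switching regression, letting the coefficient on a spending shock differ by regime; the series below are simulated stand-ins, not actual fiscal data.

```python
# Sketch: a two-regime Markov-switching regression lets the fiscal
# multiplier differ across business-cycle states. Simulated placeholders
# stand in for output growth (y) and a government-spending shock (g).
import numpy as np
from statsmodels.tsa.regime_switching.markov_regression import MarkovRegression

rng = np.random.default_rng(5)
n = 400
regime = (np.sin(np.linspace(0, 12, n)) > 0).astype(int)  # stylized cycle
g = rng.normal(size=n)                                    # spending shock
y = np.where(regime == 1, 1.5, 0.5) * g + rng.normal(0, 0.5, n)

mod = MarkovRegression(y, k_regimes=2, exog=g, switching_variance=False)
res = mod.fit()
print(np.round(np.asarray(res.params), 2))  # regime-specific slopes on g
```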

  • Carefully consider the use of real incentives versus hypothetical choice in experiments, as real incentives tend to result in stronger aversion and less noise in responses. (Abdellaoui et al. 2011)

  • Consider utilizing online labor markets for conducting experiments, as they provide a large, diverse subject pool, enable randomized controlled trials, and can achieve both internal and external validity comparable to or even superior to traditional methods, while reducing costs and time. (Horton, Rand, and Zeckhauser 2010)

  • Utilise the concept of ‘w-quasi-supermodularity’ within a ‘prelattice’ structure to ensure the pathwise monotonicity of solutions in constrained optimisation problems, even in situations involving nonlinear constraints. (Shirai 2010)

  • Focus on the monotonic behavior of the Argmaximum as a function of the two variables: a preference relation, and a subset on which it is maximized, using the pre-order on the space of all preference relations and the Veinott order on the power set of the universal lattice to establish the monotonicity theorem for quasi-supermodular preference relations and sublattices. (Neyapti 2010)

  • Carefully consider and control for potential confounding factors when conducting behavioral experiments, including ensuring anonymity, providing clear instructions, and collecting comprehensive economic and demographic data. (Henrich et al. 2006)

  • Consider using Bayesian approaches to address potential misspecifications and identification problems in structural empirical modeling, particularly when working with complex models such as DSGE models. (Lubik and Schorfheide 2005)

  • Utilise the concept of ‘w-quasi-supermodularity’ within a ‘prelattice’ structure to ensure the pathwise monotonicity of solution sets in constrained optimisation problems, even in situations involving nonlinear constraints. (“Knowledge-Based Intelligent Information and Engineering Systems” 2004)

  • Conduct cross-cultural studies using diverse samples from various societies to explore the influence of economic and social factors on human behavior, particularly in relation to cooperation, sharing, and punishment. (Henrich et al. 2001)

  • Consider relaxing the monotonicity assumption in payoff functions, as doing so allows for a broader range of applications and more accurate predictions in various fields including economics, finance, and decision theory. (Hau 2001)

  • Utilise a combination of lattice-theoretic methods and differential techniques to establish strict monotonicity in comparative statics, thereby extending the previously established order-theoretic conclusions. (Edlin and Shannon 1998)

  • Recognize firms as “interactors” in the evolutionary process, serving as vehicles for habits and routines that function as “replicators”, thereby contributing to the development of a multiple-level evolutionary theory that incorporates various socio-economic levels. (NA?)

  • Adopt a common nomenclature for describing stated preference elicitation approaches to reduce confusion and increase clarity in communicating research findings. (NA?)

  • Consider utilizing online labor markets for conducting experiments due to their ability to provide quick, affordable, and diverse subject pools, while maintaining internal and external validity comparable to traditional laboratory and field experiments. (NA?)

  • Carefully evaluate the trade-offs between convenience and data quality when choosing between Amazon Mechanical Turk (MTurk) and online research panels for participant recruitment, considering factors such as participant diversity, naivety, and the need for additional screening measures. (NA?)

Applications In Epidemiology

  • Use a Marginal Structural Model (MSM) with an inverse probability weighting (IPW) approach when analyzing the impact of vaccination on COVID-19 disease severity and need for Intensive Care Unit (ICU) admission among hospitalized patients. (Belayneh et al. 2023)
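
A minimal sketch of the MSM/IPW recipe with simulated data: fit a treatment (vaccination) model, build stabilized weights, then fit a weighted outcome model. Variable names and coefficients are hypothetical, not from the study.

```python
# Sketch: marginal structural model via stabilized inverse probability
# weights. Age confounds both vaccination and ICU admission here.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 4000
df = pd.DataFrame({"age": rng.normal(60, 10, n)})
p_vax = 1 / (1 + np.exp(-(-3 + 0.05 * df["age"])))        # confounded uptake
df["vaccinated"] = rng.binomial(1, p_vax)
p_icu = 1 / (1 + np.exp(-(-4 + 0.06 * df["age"] - 1.0 * df["vaccinated"])))
df["icu"] = rng.binomial(1, p_icu)

ps = smf.logit("vaccinated ~ age", data=df).fit(disp=0).predict()
num = df["vaccinated"].mean()                             # stabilization
df["w"] = np.where(df["vaccinated"] == 1, num / ps, (1 - num) / (1 - ps))

msm = smf.glm("icu ~ vaccinated", data=df,
              family=sm.families.Binomial(),
              freq_weights=np.asarray(df["w"])).fit()
print(msm.params.round(2))   # marginal log-odds ratio of ICU admission
```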

  • Adopt a flexible continuum approach to defining experimental Acute Lung Injury (ALI), allowing for varying degrees of focus on different domains of lung injury, while ensuring that at least three out of four domains are documented to qualify as “experimental ALI”. (Kulkarni et al. 2022)

  • Ensure proper randomization and stratification procedures to achieve balance in your control and treatment groups, while also considering potential confounding factors like age and gender in your analysis. (Abaluck et al. 2022)

  • Consider using multiple methods, including observational studies and randomized controlled trials, to investigate the effectiveness of COVID-19 vaccines in reducing asymptomatic viral carriage and transmission. (Abbas et al. 2021)

  • Utilize Mendelian Randomization techniques to establish causal relationships between genetically determined levels of High-Density Lipoprotein Cholesterol (HDL-C) and the risk of hospitalization for infectious diseases. (Trinder et al. 2020)
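
The simplest Mendelian randomization estimator is the Wald ratio: the instrument's effect on the outcome divided by its effect on the exposure. A sketch with a simulated allele count standing in for a genetic score (illustrative only):

```python
# Sketch of a Wald-ratio MR estimate. The genetic instrument affects the
# outcome only through the exposure, while "confounder" biases naive OLS.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 10000
score = rng.binomial(2, 0.3, n)                 # allele count, 0/1/2
confounder = rng.normal(size=n)
hdl = 0.2 * score + confounder + rng.normal(size=n)
risk = -0.3 * hdl + confounder + rng.normal(size=n)

bx = sm.OLS(hdl, sm.add_constant(score)).fit().params[1]   # gene -> exposure
by = sm.OLS(risk, sm.add_constant(score)).fit().params[1]  # gene -> outcome
print(round(by / bx, 2))   # Wald ratio, near the true causal effect of -0.3
```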

  • Utilize a two-step process when studying the causal impact of epidemic-induced lockdowns on health and macroeconomic outcomes: first, estimate an epidemiological model with time-varying parameters to capture changes in individual behavior and shifts in transmission and clinical outcomes; then, use the output of this model to estimate Structural Vector Autoregression (SVAR) and Local Projection (LP) models to quantify those effects. (Plagborg-Møller and Wolf 2020)

  • Consider incorporating the interconnectedness of people’s social networks into your epidemiological models, as this can lead to more accurate predictions of disease spread. (Wu and McGoogan 2020)

  • Clearly distinguish between risk prediction studies, which aim to identify individuals at higher risk of developing diseases through statistical models, and explanatory studies, which seek to understand the underlying causal mechanisms behind disease development. This distinction is essential for accurate interpretation and application of results, reducing confusion and improving the efficiency of biomedical research. (Schooling and Jones 2018)

  • Avoid conditioning on a non-collider variable, as doing so might introduce selection bias, especially when the exposure has a non-null effect on the outcome. (Hernán 2017)
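
A quick simulation illustrates the point: conditioning on a variable that is a descendant of the outcome, which is not a collider, distorts the exposure-outcome association whenever the exposure effect is non-null. All quantities below are simulated for illustration.

```python
# Sketch: selection bias without a collider. Selection depends on the
# outcome, so restricting to the selected sample attenuates the estimate.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 50000
a = rng.binomial(1, 0.5, n)                      # exposure
y = 1.0 * a + rng.normal(size=n)                 # true effect = 1.0
s = rng.binomial(1, 1 / (1 + np.exp(-2 * y)))    # selection depends on y

full = sm.OLS(y, sm.add_constant(a)).fit()
sel = sm.OLS(y[s == 1], sm.add_constant(a[s == 1])).fit()
print(round(full.params[1], 2))   # ~1.0, unbiased in the full sample
print(round(sel.params[1], 2))    # attenuated among the selected
```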

  • Adopt a comprehensive yet non-binding approach to registration, which includes specifying all aspects of the analysis plan, to reduce the likelihood of fishing and improve transparency in scientific research. (Humphreys, Sierra, and Windt 2013)

  • Consider the importance of testing the effectiveness of different types of risk information (avoidance vs reduction) on teenage sexual behaviour, particularly in areas with high HIV prevalence, using randomised field experiments to ensure robust identification of the impact of each type of information. (Dupas 2011)

  • Carefully consider the purpose of your cross-sectional study before selecting a multivariate model, as the choice depends on whether they seek to estimate the magnitude of a condition in a population or infer causal relationships. (Reichenheim and Coutinho 2010)

  • Utilize randomization inference based on Fisher’s exact test to accurately capture the causal effect of being listed on the first ballot page in the 2003 California gubernatorial recall election, given the unique randomization procedure employed by California law. (Daniel E. Ho and Imai 2006)
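
The general randomization-inference recipe: recompute the test statistic under many re-draws of the assignment mechanism and compare the observed statistic to that distribution under the sharp null of no effect. A generic sketch assuming simple complete randomization (not California's specific procedure):

```python
# Sketch: permutation-based randomization inference for a difference in
# means under the sharp null of no treatment effect for any unit.
import numpy as np

rng = np.random.default_rng(9)
y = rng.normal(size=60)
treat = rng.permutation(np.array([1] * 30 + [0] * 30))
y = y + 0.8 * treat                               # true effect of 0.8

obs = y[treat == 1].mean() - y[treat == 0].mean()
diffs = []
for _ in range(5000):                             # re-draw the assignment
    t = rng.permutation(treat)
    diffs.append(y[t == 1].mean() - y[t == 0].mean())
p_value = np.mean(np.abs(diffs) >= abs(obs))
print(round(obs, 2), round(p_value, 4))
```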

  • Aim to achieve a clear definition of the causal effect of interest in order to avoid the pitfalls of untestable predictions and ill-defined counterfactuals, particularly when working with observational data. (Hernán 2005)

  • Employ a 3-way fixed-effects analysis when examining the impact of new drugs on years of life lost (YLL) across various diseases and countries, controlling for the average decline in YLL rate within each country and disease. (Hausman 2001)

  • Carefully consider and address potential sources of bias, such as information bias and selection bias, in non-experimental studies, especially when evaluating small relative risk increments, as even minor biases can significantly impact the interpretation of results. (“The Racial Record of Johns Hopkins University” 1999)

  • Adopt an explanation-oriented, experimental approach to studying expertise development, focusing on testing specific hypotheses about the nature of cognitive structures responsible for expert performance and investigating apparent anomalies reported in the literature. (NA?)

  • Consider combining hypothesis-driven pathway-based approaches with ‘agnostic’ Genome Wide Association Studies (GWAS) to effectively study gene-environment interactions and potentially identify novel genes acting synergistically with other factors. (NA?)

  • Develop a scoring system called “vigiGrade” to evaluate the completeness of individual case safety reports in pharmacovigilance databases, taking into consideration various dimensions such as time-to-onset, indication, outcome, sex, age, dose, country, primary reporter, report type, and comments, and assigning appropriate penalties for missing or incomplete information. (NA?)

  • Utilize the ROSES (RepOrting standards for Systematic Evidence Syntheses) framework instead of PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) for systematic reviews and maps in the field of conservation and environmental management, due to its tailoring to environmental systematic reviews, higher standards of reporting, clearer conduct standards, reduced emphasis on quantitative synthesis, and accommodation of other types of synthesis. (NA?)

  • Utilize machine learning techniques to reduce bias in estimating life-years lost due to pollution exposure and to systematically quantify treatment effect heterogeneity. (NA?)

Applications In Social Sciences

  • Pay close attention to potential acquiescence bias when conducting surveys on political beliefs, as it can significantly inflate estimates of conspiratorial beliefs and political misperceptions, especially among more ideologically inclined respondents. (Hill and Roberts 2023)

  • Consider employing an instrumental variable (IV) methodology to establish causality between social media usage and trust in the European Union, while controlling for potential biases like endogeneity and omitted variable bias. (Moland and Michailidou 2023)

  • Avoid using the replication rate as a measure for detecting selective publication, since it is insensitive to the degree of selective publication on insignificant results and is affected by issues with common power calculations in replication studies. (Vu 2022)

  • Avoid making simplistic assumptions about the separation of race from other variables in regression models, and instead utilise advanced causal mediation techniques to better capture the complexity of racialised health disparities. (Graetz, Boen, and Esposito 2022)

  • Consider utilizing natural experiments, such as a surge in stop and search operations following a high-profile crime, to study the causal relationship between police interventions and crime reduction, while controlling for potential confounding factors. (Braakmann 2022)

  • Avoid p-hacking and publication bias by carefully selecting and reporting your statistical models, ensuring adequate statistical power, and considering pre-registration to enhance transparency and credibility. (Brodeur, Cook, and Heyes 2022)

  • Avoid overstating the strength of your findings based solely on statistical significance, especially when dealing with complex datasets and potential confounding factors. (Gelman 2022)

  • Employ nonparametric sensitivity analysis to evaluate the robustness of empirical evidence for the democratic peace hypothesis, as it allows for direct examination of the influence of unobserved confounders without making specific assumptions about the regression model. (Imai and Lo 2021)

  • Utilize an instrumental variable approach when studying the effect of mental health on social capital to mitigate potential issues arising from reciprocal causation. (Lebenbaum, Laporte, and Oliveira 2021)

  • Consider the potential benefits and pitfalls of apologies in business settings, taking into account factors like the cost of the apology, the severity of the issue, and the stage of the customer-business relationship. (Halperin et al. 2021)

  • Carefully consider potential confounds in experimental designs, especially when studying politically motivated reasoning, as traditional paradigms like Outcome Switching and Party Cues often violate the excludability assumption, leading to biased causal inferences. (Tappin, Pennycook, and Rand 2020)

  • Carefully distinguish between armed and unarmed suspects when examining racial disparities in police use-of-force, and apply appropriate crime rate adjustment benchmarks to each subpopulation to avoid introducing bias. (Ross, Winterhalder, and McElreath 2020)

  • Consider using list experiments to accurately measure non-compliance with health guidelines, particularly in situations where social desirability bias may lead to overreporting of compliant behavior. (Andersen 2020)
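
The standard item-count estimator behind a list experiment is just a difference in mean list counts between respondents who receive the sensitive item and those who do not. A simulated sketch (the prevalence and list length are hypothetical):

```python
# Sketch: list-experiment (item count) estimator of a sensitive behavior.
import numpy as np

rng = np.random.default_rng(10)
n = 2000
noncompliant = rng.binomial(1, 0.30, n)           # true prevalence 30%
control_items = rng.binomial(3, 0.5, n)           # count over 3 benign items
treat = rng.binomial(1, 0.5, n)
reported = control_items + treat * noncompliant   # treated add sensitive item

estimate = reported[treat == 1].mean() - reported[treat == 0].mean()
print(round(estimate, 3))                         # near 0.30
```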

  • Employ empirical validation techniques to verify the accuracy of simulation methods used in legislative redistricting simulations, ensuring that the generated samples accurately represent the entire range of potential redistricting plans. (Fifield et al. 2020)

  • Carefully consider the potential impact of publication bias, social desirability bias, and hypothetical bias on your experimental results, especially when comparing field and survey experiments. (INCERTI 2020)

  • Utilize large-scale randomized experiments and natural experiments whenever feasible to assess the electoral effects of programmatic policies, as demonstrated by the authors’ examination of Seguro Popular de Salud and Progresa. (Imai, King, and Rivera 2019)

  • Utilize the DICEU approach to study Council politics, leveraging public videos of Council deliberations as a novel data source, due to its demonstrated face, convergent, and predictive validity. (Wratil and Hobolt 2019)

  • Consider applying the Synthetic Control Method to assess the impact of population control policies, comparing the treated population to a constructed synthetic control population that shares similar features during pre-intervention periods. (Gietel-Basten, Han, and Cheng 2019)

  • Avoid drawing conclusions about subgroup differences in preferences solely based on differences in conditional AMCEs, as these can be misleading due to the influence of the reference category used in regression analysis. (Leeper, Hobolt, and Tilley 2019)

  • Utilize channel positions as an instrument for exposure to media bias, demonstrating that a one-standard-deviation decrease in Fox News’s channel position is associated with an increase of approximately 2.5 minutes per week in time spent watching Fox News, leading to a 0.3 percentage point increase in the Republican presidential candidate’s vote share among affected viewers. (Martin and Yurukoglu 2017)

  • Utilise a disaggregated approach to data collection, focusing on village-level violence predictions over variable spatial and temporal windows, and incorporate contextual information alongside prior violence to enhance predictive accuracy. (Hirose, Imai, and Lyall 2017)

  • Consider the potential influence of historical context and changing societal expectations on the relationship between natural disasters and voter responses to relief efforts. (Heersink, Peterson, and Jenkins 2017)

  • Carefully assess and address the possibility of information equivalence (IE) violations in survey experiments, as failing to do so can lead to biased estimates of the causal effect of interest. (Bansak, Hainmueller, and Yamamoto 2017)

  • Use the Synthetic Control Method (SCM) instead of traditional approaches to create optimal control groups for case-studies, reducing bias and improving accuracy in estimating the impact of specific events on labor market outcomes. (Autor, Manning, and Smith 2016)

  • Consider the “indirect lobbying” channel, whereby firms attempt to gain favor with politicians by directing business towards firms controlled by those politicians, as a distinct mechanism of conflict of interest alongside the more widely studied “direct lobbying” and “businessman-politician” channels. (DellaVigna et al. 2016)

  • Conduct validation studies to compare the accuracy of various survey methodologies for measuring sensitive attitudes and behaviors, such as direct questioning, list experiments, endorsement experiments, and randomized response, against known ground truth data whenever possible. (Rosenfeld, Imai, and Shapiro 2015)

  • Utilise the randomized response technique to increase the accuracy of data collection in surveys concerning sensitive subjects, where participants might otherwise be reluctant to disclose accurate information due to fear of judgement or retribution. (Blair, Imai, and Zhou 2015)
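
A sketch of one common variant, the forced-response design: respondents answer truthfully with probability p and otherwise give a forced "yes" or "no" with equal probability, so the observed "yes" rate can be inverted to estimate prevalence. The parameters below are illustrative.

```python
# Sketch: forced-response randomized response estimator.
import numpy as np

rng = np.random.default_rng(11)
n, p, prevalence = 5000, 0.7, 0.2
truth = rng.binomial(1, prevalence, n)
mode = rng.choice(["truth", "yes", "no"], n, p=[p, (1 - p) / 2, (1 - p) / 2])
answer = np.where(mode == "truth", truth, (mode == "yes").astype(int))

lam = answer.mean()                      # observed "yes" rate
pi_hat = (lam - (1 - p) / 2) / p         # invert the design probabilities
print(round(pi_hat, 3))                  # near the true prevalence of 0.2
```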

  • Utilise the maximum likelihood (ML) estimator instead of the simpler two-step estimator when incorporating predicted responses from list experiments into regression models. (Imai, Park, and Greene 2015)

  • Avoid testing null hypotheses in social sciences since a nil effect size rarely occurs, and instead focus on estimating the magnitude of effects and drawing appropriate inferences based on the specific context and available data. (Lokshin 2015)

  • Consider using a strategic probit with partial observability (SPPO) estimator when dealing with outcome-specific data that lacks information on individual player decisions, as it enables better handling of strategic interactions and reduces bias compared to traditional and split-sample binary choice models. (Nieman 2015)

  • Validate your measurements of sensitive concepts through multiple survey instruments, such as list and endorsement experiments, and utilise statistical tests and multivariate regression models to compare and combine the results, thereby improving the accuracy and credibility of your findings. (Blair, Imai, and Lyall 2014)

  • Utilise a difference-in-differences approach when examining the impact of recentralisation on public services, using carefully chosen control and treatment groups to minimise potential confounding factors. (MALESKY, NGUYEN, and TRAN 2014)

  • Carefully consider the specific dimension of transparency they aim to study, develop precise measures for that dimension, and control for alternative information transmission mechanisms to ensure accurate identification of the relationship between transparency and accountability. (Hollyer, Rosendorff, and Vreeland 2014)

  • Combine design-based inference with process tracing to improve your ability to build and test theories about civil war onset and dynamics, utilizing counterfactual observations, elaborate theory, and qualitative evidence on treatment assignment to facilitate drawing causal inferences. (Lyall 2014)

  • Recognize the multiple dimensions of uncertainty, including perceptual expectancy, surprise, and subjective probability, and carefully consider how these factors might impact your experimental designs and results. (Sloman 2014)

  • Avoid overestimating the frequency of substate conflict contagion by ensuring that your definition of contagion includes a requirement for a causal link between the first conflict and the subsequent conflict onset. (Black 2013)

  • Carefully consider the potential impact of age and cultural background on children’s prosocial behaviour, particularly in contexts involving varying degrees of personal cost, and utilize rigorous statistical modelling techniques to account for these factors. (House et al. 2013)

  • Consider the role of observational learning in shaping consumer behaviour, especially in markets with a large number of products and limited consumer knowledge, as it can lead to herd behaviour and influence the demand for search goods. (Hendricks, Sorensen, and Wiseman 2012)

  • Carefully evaluate the internal and external validity of your chosen subject pool, considering factors like demographics, participation frequency, and attention levels, before drawing conclusions from your experiments. (Berinsky, Huber, and Lenz 2012)

  • Consider employing a “stepped wedge experimental design” when conducting a large-scale evaluation of a public policy like Seguro Popular, which allows for randomization within the phased rollout of the national program, making it politically feasible and ethical. (Wirtz et al. 2012)

  • Leverage advanced data processing techniques, such as pre-processing and supplementing government records with commercial data, to achieve more accurate survey validation and reduce misreporting biases in political surveys. (Ansolabehere and Hersh 2012)

  • Carefully consider the impact of misclassification on your analyses, particularly when dealing with complex coding schemes like the CMP, and explore strategies to minimize its effects. (Mikhaylov, Laver, and Benoit 2012)

  • Use a difference-in-differences estimator to identify the causal effect of the one-child policy on sex ratio imbalance in China, taking advantage of the exogenous differential treatment between the Han and minorities under the policy. (Li, Yi, and Zhang 2011)

  • Leverage online labor markets to conduct experiments, as they provide a large, diverse, and accessible subject pool, enable randomized controlled trials, and facilitate causal inference through individual-specific payments and communication restrictions. (Horton, Rand, and Zeckhauser 2010)

  • Change current practice and ask self-assessment questions immediately after the vignette battery, as doing so leads to a stronger relationship between the vignette-corrected responses and related independent variables. (Hopkins and King 2010)

  • Avoid conflating survey responses with philosophical intuitions, and instead recognize that survey responses are complex behaviors influenced by multiple factors beyond the targeted intuition, requiring careful consideration of survey methodology and context to accurately capture the intended intuition. (Cullen 2010)

  • Establish measurement invariance through confirmatory factor analysis before attempting cross-national comparisons of constructs like nationalism and constructive patriotism. (Davidov 2009)

  • Carefully consider the assumptions of the twin method, such as equal environments and sampling, and ensure adequate statistical power to differentiate genetic and environmental influences on political behavior. (Medland and Hatemi 2009)

  • Distinguish between conformity and non-conformity by examining the disproportionality of individuals’ responses to the frequency of a behavior in a social group, as conformity leads to behavioral homogeneity within the group whereas non-conformity increases variation within groups. (EFFERSON et al. 2008)

  • Utilise randomised natural experiments whenever feasible, as they offer valuable opportunities to make reliable causal inferences in real-world contexts. (Daniel E. Ho and Imai 2008)

  • Employ multiple models to analyze your data, comparing them using the Akaike Information Criterion (AIC) to select the model that best balances goodness of fit with parsimony. (Efferson et al. 2007)
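
A minimal sketch of AIC-based model comparison: fit nested candidate models and prefer the one with the lowest AIC, which penalizes extra parameters. The data and formulas are illustrative.

```python
# Sketch: rank competing models by AIC (fit vs. parsimony trade-off).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(12)
n = 300
df = pd.DataFrame({"x": rng.normal(size=n)})
df["y"] = 1 + 2 * df["x"] + rng.normal(size=n)     # truth is linear

for formula in ("y ~ x", "y ~ x + I(x**2)", "y ~ x + I(x**2) + I(x**3)"):
    fit = smf.ols(formula, data=df).fit()
    print(f"{formula:30s} AIC = {fit.aic:.1f}")
# The linear model should attain the lowest AIC despite the richer fits.
```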

  • Carefully evaluate and select anchoring vignettes to effectively correct for response-category differential item functioning (DIF) in survey research, thereby improving the validity and cross-cultural comparability of measurements. (King and Wand 2007)

  • Consider utilizing experimental techniques to enhance causal inference and guide theoretical development, recognizing the limitations of current applications and seeking opportunities to expand your use across broader areas of political science. (DRUCKMAN et al. 2006)

  • Evaluate set-theoretic relationships, particularly those involving fuzzy sets, using the proposed measures of ‘consistency’ and ‘coverage’, which respectively assess the degree to which a subset relation has been approximated and the empirical relevance of a consistent subset. (Ragin 2006)

  • Consider conducting cross-cultural studies using multiple experimental games to explore variations in human behavior across diverse societies, taking into account factors like culture, economic systems, and social structures. (Henrich et al. 2005)

  • Exercise caution in specifying regression models, distinguishing between complementary and competing explanatory variables, and ensuring that theoretical relationships hold both spatially and temporally. (Oneal and Russett 2005)

  • Consider multiple approaches, including Bayesian model averaging, to ensure robustness and minimize potential bias in analyzing complex datasets like the one involving the 2000 US Presidential Election. (Imai and King 2004)

  • Conduct cross-cultural studies using multiple experimental games in various societal settings to understand the influence of economic, cultural, and social factors on human behavior. (Henrich et al. 2001)

  • Carefully consider the inclusion of fixed effects and the expansion of the time frame in your analyses to ensure accurate representation of the relationship between democracy, economic interdependence, and peace. (Oneal and Russett 2001)

  • Use Bayesian simulation to incorporate prior beliefs about the dimensions underlying the proposal space in roll call analysis, enabling them to better understand the substantive content of the recovered dimensions and improve model checking. (Jackman 2001)

  • Consider the potential impact of social contagion on memory, particularly when studying false memories, and utilize methods that account for this phenomenon, such as the collaborative recall test employed in the study. (Roediger, Meade, and Bergman 2001)

  • Avoid selection bias in your study design, particularly when using case-control designs, and ensure proper prior correction to obtain accurate predictions and causal inferences. (King and Zeng 2001)
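
For the logit case, King and Zeng's prior correction adjusts the intercept using the known population prevalence tau; a sketch with simulated case-control data follows (the correction formula is standard, the data are illustrative).

```python
# Sketch: prior correction for case-control (choice-based) sampling. Fit an
# ordinary logit, then shift the intercept to reflect the true prevalence.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(13)
tau = 0.01                                   # population prevalence of events
n_cases, n_controls = 500, 500               # case-control sample, ybar = 0.5
x = np.concatenate([rng.normal(1.0, 1, n_cases), rng.normal(0, 1, n_controls)])
y = np.concatenate([np.ones(n_cases), np.zeros(n_controls)])

fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
ybar = y.mean()
b0_corrected = fit.params[0] - np.log(((1 - tau) / tau) * (ybar / (1 - ybar)))
print(round(fit.params[0], 2), "->", round(b0_corrected, 2))
```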

  • Avoid collapsing multi-party electoral systems into a pseudo-two-party contest, as doing so leads to bias and information loss. Instead, they should develop multiparty statistical models tailored specifically to the unique features of these systems. (Katz and King 1999)

  • Prioritize developing predictive models for political conflicts, as doing so allows for continuous improvement through cross-validation, enhances the accessibility of research findings to policymakers and the general public, and ultimately leads to better theory building and explanations. (Kaye 1997)

  • Aim to maximize leverage - explaining as much as possible with as little as possible - while minimizing bias and reporting estimates of uncertainty. (King, Keohane, and Verba 1995)

  • Consider the role of conceptual metaphors in shaping human thinking and language, as these metaphors provide a framework for understanding complex phenomena through analogies to simpler, more familiar experiences. (Lakoff 1993)

  • Consider the impact of language familiarity on voice identification tasks, as it significantly influences the accuracy of recognizing voices. (Goggin et al. 1991)

  • Avoid using biased or inconsistent methods such as sophomore surge and retirement slump when estimating incumbency advantage in congressional elections, and instead opt for an unbiased estimator based on a simple linear regression model. (Gelman and King 1990)

  • Differentiate between decoding-level (DL) matches and comprehension-level (CL) matches when comparing the performance of dyslexic children to that of garden-variety poor readers or younger reading-level controls, as this distinction impacts the interpretation of results and informs the validity of conclusions drawn. (Stanovich 1988)

  • Consider the potential impact of context on analogical transfer, as context can play a significant role in facilitating the retrieval of relevant problem-solving schemas. (Spencer and Weisberg 1986)

  • Utilize standardized sentence-completion norms to investigate the effects of sentence contexts on word processing, thereby improving comparisons across experiments. (NA?)

  • Carefully consider the nature of interactions between different auditory dimensions (such as timbre, pitch, and loudness) when conducting experiments, as these interactions can impact the validity and reliability of findings. (NA?)

  • Carefully control for the potential impact of content effects on human reasoning performance, especially when studying conditional reasoning involving causal relationships, as the number of alternative causes and disabling conditions can significantly affect the acceptability of argument conclusions. (NA?)

  • Manipulate the locations of chunks in your study designs to understand the nature of chunking in chess perception. (NA?)

  • Carefully consider the impact of gender priming on lexical access, particularly in richly inflected languages like Italian, and utilize multiple methods such as word repetition, gender monitoring, and grammaticality judgement to better understand the underlying cognitive processes involved. (NA?)

  • Carefully consider the impact of gender priming on lexical access, particularly in richly inflected languages like Italian, and utilize multiple methods to disentangle the relative contributions of facilitation and inhibition. (NA?)

  • Carefully control and manipulate the position of the target in visual search tasks to investigate the impact of position priming on attentional deployment and response times. (NA?)

  • Consider the unique characteristics of individual word types (abstract, concrete, and emotion) rather than combining them, as doing so could potentially mask important differences and affect the validity of conclusions drawn about concreteness effects and word type differences. (NA?)

  • Carefully define and measure response variables, use objective criteria for reinforcing desired behaviors, and ensure that reinforcement contingencies are precisely specified to effectively manipulate and observe the impact of reinforcement on behavioral variability. (NA?)

  • Carefully consider the choice of experimental paradigm and theoretical framework when studying sensorimotor synchronization (SMS), as different approaches may be better suited to explaining specific aspects of this complex phenomenon. (NA?)

  • Carefully distinguish between two types of visual perspective taking (VPT): one that updates the viewer’s imagined perspective, and one that traces a line of sight, as these involve different computational processes and lead to varying response times based on the angle between the participant and the agent, and the distance between the agent and the object. (NA?)

  • Consider adopting a Bayesian perspective when studying covariation assessment, as it provides a more comprehensive understanding of participants’ behavior by taking into account their prior beliefs and the rarity of events. (NA?)

  • Consider adopting a participatory sense-making approach to studying social cognition, focusing on the dynamic coordination and interaction processes between individuals, rather than solely on individual cognitive mechanisms. (NA?)

  • Focus on identifying and analyzing ‘evidence-based kernels’, defined as indivisible procedures shown through experimental evaluation to produce reliable effects on behaviour, as a way to develop efficient and effective interventions across various domains. (NA?)

  • Carefully consider the multimodal nature of object properties when selecting stimuli for experiments investigating modality-specific conceptual processing, rather than relying solely on assumptions about unimodal representations. (NA?)

  • Carefully manipulate and measure individual differences in inhibitory control to understand its potential impact on adults’ occasional failure to use perspective information to inhibit perspective-inappropriate interpretations during online language processing. (NA?)

  • Carefully distinguish between divergence and polarization when studying public opinion dynamics, as Bayesian models can accommodate divergence but not polarization. (NA?)

  • Consider incorporating multiple perspectives from various scientific fields, such as psychology, neuroscience, machine learning, and education, to develop a comprehensive understanding of human learning and cognition. (NA?)

  • Consider employing Extreme Bound Analysis (EBA) to analyze the robustness of previous conflicting findings regarding the effects of institutional arrangements on coalition formation, especially when dealing with country-level constants. (NA?)

  • Consider employing a split-population duration model to study irregular leadership changes, which helps distinguish between stable and unstable countries and evaluate the hazard of an ILC more accurately. (NA?)

  • Avoid confusing exploratory (hypothesis generation) with confirmatory (hypothesis testing) modes of data analysis, as doing so increases the likelihood of false positive findings. (NA?)

  • Prioritise causal identification and design-based inference methods, particularly leveraging experiments or natural experiments, to ensure robust causal inferences in quantitative political science. (NA?)

  • Consider utilizing internet surveys for political research, particularly when exploring relationships among variables, given the increasing difficulties associated with traditional telephone surveys and the improving representativeness of internet user populations. (NA?)

  • Utilize a multirater item response model to effectively combine subjective ratings based on scholarly and journalistic expertise with objective data on agency characteristics, thereby providing a principled structure for estimating agency preferences. (NA?)

  • Leverage advanced data processing techniques and commercial data providers to enhance the reliability and accuracy of survey response validation, especially in areas where traditional methods have proven inadequate. (NA?)

  • Conduct coding experiments to assess the reliability of human input processes in content analysis projects like the Comparative Manifesto Project (CMP), as misclassifications due to inconsistent application of coding schemes can lead to significant biases in the estimation of policy positions. (NA?)

  • Utilize experimental methods to investigate the impact of party cues on political attitudes and candidate preferences, particularly focusing on the moderating effect of exposure to policy-relevant information. (NA?)

  • Employ a mixed-method approach, combining vector autoregression (VAR) with Granger causality tests, to examine the causal dynamics between media coverage and public support for political parties, while considering the impact of elections on both variables. (NA?)

  • Carefully consider and investigate the heterogeneity of treatment effects across different contexts and populations, as this can help identify the underlying mechanisms driving the effectiveness of interventions and inform the scaling up of successful programs. (NA?)

  • Align your chosen phenomenological approach (transcendental vs. hermeneutic) with your underlying philosophical assumptions regarding the nature of human experience and knowledge. (NA?)

References

Abadie, Alberto. 2000. “Semiparametric Estimation of Instrumental Variable Models for Causal Effects,” September. https://doi.org/10.3386/t0260.
———. 2021. “Using Synthetic Controls: Feasibility, Data Requirements, and Methodological Aspects.” Journal of Economic Literature 59 (June). https://doi.org/10.1257/jel.20191450.
Abaluck, Jason, Laura H. Kwong, Ashley Styczynski, Ashraful Haque, Md. Alamgir Kabir, Ellen Bates-Jefferys, Emily Crawford, et al. 2022. “Impact of Community Masking on COVID-19: A Cluster-Randomized Trial in Bangladesh.” Science 375 (January). https://doi.org/10.1126/science.abi9069.
Abbas, Mohamed, Tomás Robalo Nunes, Romain Martischang, Walter Zingg, Anne Iten, Didier Pittet, and Stephan Harbarth. 2021. “Nosocomial Transmission and Outbreaks of Coronavirus Disease 2019: The Need to Protect Both Patients and Healthcare Workers.” Antimicrobial Resistance & Infection Control 10 (January). https://doi.org/10.1186/s13756-020-00875-7.
Abdellaoui, Mohammed, Aurélien Baillon, Laetitia Placido, and Peter P Wakker. 2011. “The Rich Domain of Uncertainty: Source Functions and Their Experimental Implementation.” American Economic Review 101 (April). https://doi.org/10.1257/aer.101.2.695.
Acharya, Avidit, Matthew Blackwell, and Maya Sen. 2016. “Explaining Causal Findings Without Bias: Detecting and Assessing Direct Effects.” American Political Science Review 110 (August). https://doi.org/10.1017/s0003055416000216.
Acharya, Avidit, Matthew Blackwell, and Maya Sen. 2018. “Analyzing Causal Mechanisms in Survey Experiments.” Political Analysis 26 (August). https://doi.org/10.1017/pan.2018.19.
Adhikari, Bibek, and James Alm. 2016. “Evaluating the Economic Effects of Flat Tax Reforms Using Synthetic Control Methods.” Southern Economic Journal 83 (August). https://doi.org/10.1002/soej.12152.
Adragna, Robert, Elliot Creager, David Madras, and Richard Zemel. 2020. “Fairness and Robustness in Invariant Learning: A Case Study in Toxicity Classification.” arXiv. https://doi.org/10.48550/ARXIV.2011.06485.
“Advances in Knowledge Discovery and Data Mining.” 2002. Lecture Notes in Computer Science. https://doi.org/10.1007/3-540-47887-6.
Alatas, Vivi, Abhijit Banerjee, Rema Hanna, Benjamin A Olken, and Julia Tobias. 2012. “Targeting the Poor: Evidence from a Field Experiment in Indonesia.” American Economic Review 102 (June). https://doi.org/10.1257/aer.102.4.1206.
Altonji, Joseph G., Ching-I Huang, and Christopher R. Taber. 2015. “Estimating the Cream Skimming Effect of School Choice.” Journal of Political Economy 123 (April). https://doi.org/10.1086/679497.
Andersen, Martin. 2020. “Early Evidence on Social Distancing in Response to COVID-19 in the United States.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3569368.
Andreas, Holger, and Mario Günther. 2024. “A Regularity Theory of Causation.” Pacific Philosophical Quarterly, January. https://doi.org/10.1111/papq.12447.
Angrist, Joshua, and Michal Kolesár. 2021. “One Instrument to Rule Them All: The Bias and Coverage of Just-ID IV.” arXiv. https://doi.org/10.48550/ARXIV.2110.10556.
Ankel-Peters, Jörg, and Christoph M. Schmidt. 2023. “Rural Electrification, the Credibility Revolution, and the Limits of Evidence-Based Policy.” https://doi.org/10.4419/96973220.
Ansolabehere, Stephen, and Eitan Hersh. 2012. “Validation: What Big Data Reveal about Survey Misreporting and the Real Electorate.” Political Analysis 20. https://doi.org/10.1093/pan/mps023.
Ante, Lennart. 2021. “Smart Contracts on the Blockchain – a Bibliometric Analysis and Review.” Telematics and Informatics 57 (March). https://doi.org/10.1016/j.tele.2020.101519.
Antinyan, Armenak, and Zareh Asatryan. 2019. “Nudging for Tax Compliance: A Meta-Analysis.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3500744.
Arai, Yoichi, Taisuke Otsu, and Myung Hwan Seo. 2021. “Regression Discontinuity Design with Potentially Many Covariates.” arXiv. https://doi.org/10.48550/ARXIV.2109.08351.
Arkhangelsky, Dmitry, and David Hirshberg. 2023. “Large-Sample Properties of the Synthetic Control Method Under Selection on Unobservables.” arXiv. https://doi.org/10.48550/ARXIV.2311.13575.
Arkhangelsky, Dmitry, and Guido W. Imbens. 2019. “Doubly Robust Identification for Causal Panel Data Models.” arXiv. https://doi.org/10.48550/ARXIV.1909.09412.
Armitage, P., C. K. McPherson, and B. C. Rowe. 1969. “Repeated Significance Tests on Accumulating Data.” Journal of the Royal Statistical Society. Series A (General) 132. https://doi.org/10.2307/2343787.
Aronow, Peter M., Jonathon Baron, and Lauren Pinson. 2019. “A Note on Dropping Experimental Subjects Who Fail a Manipulation Check.” Political Analysis 27 (May). https://doi.org/10.1017/pan.2019.5.
Ashby, F. Gregory, Sarah Queller, and Patricia M. Berretty. 1999. “On the Dominance of Unidimensional Rules in Unsupervised Categorization.” Perception & Psychophysics 61 (August). https://doi.org/10.3758/bf03207622.
Athey, Susan, and Guido Imbens. 2016. “The Econometrics of Randomized Experiments,” July. http://arxiv.org/abs/1607.00698v1.
Auerbach, Alan J, and Yuriy Gorodnichenko. 2012. “Measuring the Output Responses to Fiscal Policy.” American Economic Journal: Economic Policy 4 (May). https://doi.org/10.1257/pol.4.2.1.
Autor, David H., Alan Manning, and Christopher L. Smith. 2016. “The Contribution of the Minimum Wage to US Wage Inequality over Three Decades: A Reassessment.” American Economic Journal: Applied Economics 8 (January). https://doi.org/10.1257/app.20140073.
Bang, Heejung, and James M. Robins. 2005. “Doubly Robust Estimation in Missing Data and Causal Inference Models.” Biometrics 61 (December). https://doi.org/10.1111/j.1541-0420.2005.00377.x.
Bansak, Kirk, Jens Hainmueller, and Teppei Yamamoto. 2017. “Beyond the Breaking Point? Survey Satisficing in Conjoint Experiments.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2959146.
Bareinboim, Elias, and Judea Pearl. 2016. “Causal Inference and the Data-Fusion Problem.” Proceedings of the National Academy of Sciences 113 (July). https://doi.org/10.1073/pnas.1510507113.
Battistin, Erich, and Enrico Rettore. 2008. “Ineligibles and Eligible Non-Participants as a Double Comparison Group in Regression-Discontinuity Designs.” Journal of Econometrics 142 (February). https://doi.org/10.1016/j.jeconom.2007.05.006.
Beck, Nathaniel. 2010. “Causal Process ‘Observation’: Oxymoron or (Fine) Old Wine.” Political Analysis 18. https://doi.org/10.1093/pan/mpq023.
Belayneh, Eskedar Kebede, Tigist Workneh Leulseged, Blen Solomon Teklu, Bersabel Hilawi Tewodros, Muluken Zeleke Megiso, Edengenet Solomon Weldesenbet, Mefthe Fikru Berhanu, Yohannes Shiferaw Shaweno, and Kirubel Tesfaye Hailu. 2023. “A Causal Inference of the Effect of Vaccination on COVID-19 Disease Severity and Need for Intensive Care Unit Admission Among Hospitalized Patients in an African Setting,” August. https://doi.org/10.1101/2023.08.22.23294414.
Bellemare, Marc F., Takaaki Masaki, and Thomas B. Pepinsky. 2017. “Lagged Explanatory Variables and the Estimation of Causal Effect.” The Journal of Politics 79 (July). https://doi.org/10.1086/690946.
Ben-Michael, Eli, Avi Feller, and Jesse Rothstein. 2021. “Synthetic Controls with Staggered Adoption.” Journal of the Royal Statistical Society Series B: Statistical Methodology 84 (December). https://doi.org/10.1111/rssb.12448.
Berinsky, Adam J., Gregory A. Huber, and Gabriel S. Lenz. 2012. “Evaluating Online Labor Markets for Experimental Research: Amazon.com’s Mechanical Turk.” Political Analysis 20. https://doi.org/10.1093/pan/mpr057.
Berry, Steven T, and Giovanni Compiani. 2022. “An Instrumental Variable Approach to Dynamic Models.” The Review of Economic Studies 90 (September). https://doi.org/10.1093/restud/rdac061.
Betz, Timm, Scott J. Cook, and Florian M. Hollenbach. 2019. “Spatial Interdependence and Instrumental Variable Models.” Political Science Research and Methods 8 (January). https://doi.org/10.1017/psrm.2018.61.
Bica, Ioana, Ahmed M. Alaa, Craig Lambert, and Mihaela van der Schaar. 2020. “From Real‐world Patient Data to Individualized Treatment Effects Using Machine Learning: Current and Future Methods to Address Underlying Challenges.” Clinical Pharmacology & Therapeutics 109 (June). https://doi.org/10.1002/cpt.1907.
Bidner, Chris, Guillaume Roger, and Jessica Moses. 2016. “Investing in Skill and Searching for Coworkers: Endogenous Participation in a Matching Market.” American Economic Journal: Microeconomics 8 (February). https://doi.org/10.1257/mic.20140110.
Black, Nathan. 2013. “When Have Violent Civil Conflicts Spread? Introducing a Dataset of Substate Conflict Contagion.” Journal of Peace Research 50 (August). https://doi.org/10.1177/0022343313493634.
Blackwell, Matthew, and Adam N. Glynn. 2018. “How to Make Causal Inferences with Time-Series Cross-Sectional Data Under Selection on Observables.” American Political Science Review 112 (August). https://doi.org/10.1017/s0003055418000357.
Blair, Graeme, and Kosuke Imai. 2012. “Statistical Analysis of List Experiments.” Political Analysis 20. https://doi.org/10.1093/pan/mpr048.
Blair, Graeme, Kosuke Imai, and Jason Lyall. 2014. “Comparing and Combining List and Endorsement Experiments: Evidence from Afghanistan.” American Journal of Political Science 58 (February). https://doi.org/10.1111/ajps.12086.
Blair, Graeme, Kosuke Imai, and Yang-Yang Zhou. 2015. “Design and Analysis of the Randomized Response Technique.” Journal of the American Statistical Association 110 (July). https://doi.org/10.1080/01621459.2015.1050028.
Bonander, Carl, Niklas Jakobsson, Federico Podestà, and Mikael Svensson. 2016. “Universities as Engines for Regional Growth? Using the Synthetic Control Method to Analyze the Effects of Research Universities.” Regional Science and Urban Economics 60 (September). https://doi.org/10.1016/j.regsciurbeco.2016.07.008.
Botosaru, Irene, and Bruno Ferman. 2019. “On the Role of Covariates in the Synthetic Control Method.” The Econometrics Journal 22 (January). https://doi.org/10.1093/ectj/utz001.
Bottmer, Lea, Guido W. Imbens, Jann Spiess, and Merrill Warnick. 2023. “A Design-Based Perspective on Synthetic Control Methods.” Journal of Business & Economic Statistics, August. https://doi.org/10.1080/07350015.2023.2238788.
Bouttell, Janet, Peter Craig, James Lewsey, Mark Robinson, and Frank Popham. 2018. “Synthetic Control Methodology as a Tool for Evaluating Population-Level Health Interventions.” Journal of Epidemiology and Community Health 72 (April). https://doi.org/10.1136/jech-2017-210106.
Braakmann, Nils. 2022. “Does Stop and Search Reduce Crime? Evidence from Street-Level Data and a Surge in Operations Following a High-Profile Crime.” Journal of the Royal Statistical Society Series A: Statistics in Society 185 (April). https://doi.org/10.1111/rssa.12839.
Branas, Charles C., Rose A. Cheney, John M. MacDonald, Vicky W. Tam, Tara D. Jackson, and Thomas R. Ten Have. 2011. “A Difference-in-Differences Analysis of Health, Safety, and Greening Vacant Urban Space.” American Journal of Epidemiology 174 (November). https://doi.org/10.1093/aje/kwr273.
Brodeur, Abel, Scott Carrell, David Figlio, and Lester Lusher. 2023. “Unpacking p-Hacking and Publication Bias.” American Economic Review 113 (November). https://doi.org/10.1257/aer.20210795.
Brodeur, Abel, Nikolai Cook, Jonathan Hartley, and Anthony Heyes. 2022. “Do Pre-Registration and Pre-Analysis Plans Reduce p-Hacking and Publication Bias?” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4180594.
Brodeur, Abel, Nikolai Cook, and Anthony Heyes. 2020. “Methods Matter: P-Hacking and Publication Bias in Causal Analysis in Economics.” American Economic Review 110 (November). https://doi.org/10.1257/aer.20190687.
———. 2022. “We Need to Talk about Mechanical Turk: What 22,989 Hypothesis Tests Tell Us about Publication Bias and p-Hacking in Online Experiments.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4188289.
Brodeur, Abel, Nikolai Cook, and Carina Neisser. 2024. “P-Hacking, Data Type and Data-Sharing Policy.” The Economic Journal, January. https://doi.org/10.1093/ej/uead104.
Brookhart, M. Alan, and Sebastian Schneeweiss. 2007. “Preference-Based Instrumental Variable Methods for the Estimation of Treatment Effects: Assessing Validity and Interpreting Results.” The International Journal of Biostatistics 3 (January). https://doi.org/10.2202/1557-4679.1072.
Burger, Rulof P., and Zoë M. McLaren. 2017. “An Econometric Method for Estimating Population Parameters from Non‐random Samples: An Application to Clinical Case Finding.” Health Economics 26 (August). https://doi.org/10.1002/hec.3547.
Butts, Kyle. 2021. “Geographic Difference-in-Discontinuities.” Applied Economics Letters 30 (November). https://doi.org/10.1080/13504851.2021.2005236.
Cai, Bing, Dylan S. Small, and Thomas R. Ten Have. 2011. “Two‐stage Instrumental Variable Methods for Estimating the Causal Odds Ratio: Analysis of Bias.” Statistics in Medicine 30 (April). https://doi.org/10.1002/sim.4241.
Callaway, Brantly, and Tong Li. 2020. “Evaluating Policies Early in a Pandemic: Bounding Policy Effects with Nonrandomly Missing Data.” arXiv. https://doi.org/10.48550/ARXIV.2005.09605.
Chamon, Marcos, Márcio Garcia, and Laura Souza. 2017. “FX Interventions in Brazil: A Synthetic Control Approach.” Journal of International Economics 108 (September). https://doi.org/10.1016/j.jinteco.2017.05.005.
Chan, A.-W. 2004. “Outcome Reporting Bias in Randomized Trials Funded by the Canadian Institutes of Health Research.” Canadian Medical Association Journal 171 (September). https://doi.org/10.1503/cmaj.1041086.
Christian, Paul, and Christopher B. Barrett. 2017. “Revisiting the Effect of Food Aid on Conflict: A Methodological Caution,” August. https://doi.org/10.1596/1813-9450-8171.
Cinelli, Carlos, and Chad Hazlett. 2019. “Making Sense of Sensitivity: Extending Omitted Variable Bias.” Journal of the Royal Statistical Society Series B: Statistical Methodology 82 (December). https://doi.org/10.1111/rssb.12348.
Cirone, Alexandra, and Thomas B. Pepinsky. 2022. “Historical Persistence.” Annual Review of Political Science 25 (May). https://doi.org/10.1146/annurev-polisci-051120-104325.
Cole, Matthew A., Robert J R Elliott, and Bowen Liu. 2020. “The Impact of the Wuhan Covid-19 Lockdown on Air Pollution and Health: A Machine Learning and Augmented Synthetic Control Approach.” Environmental and Resource Economics 76 (August). https://doi.org/10.1007/s10640-020-00483-4.
Collier, Paul, Anke Hoeffler, and Måns Söderbom. 2004. “On the Duration of Civil War.” Journal of Peace Research 41 (May). https://doi.org/10.1177/0022343304043769.
Courville, Aaron C., Nathaniel D. Daw, and David S. Touretzky. 2006. “Bayesian Theories of Conditioning in a Changing World.” Trends in Cognitive Sciences 10 (July). https://doi.org/10.1016/j.tics.2006.05.004.
Cui, Yifan, Hongming Pu, Xu Shi, Wang Miao, and Eric Tchetgen Tchetgen. 2020. “Semiparametric Proximal Causal Inference.” arXiv. https://doi.org/10.48550/ARXIV.2011.08411.
———. 2023. “Semiparametric Proximal Causal Inference.” Journal of the American Statistical Association, April. https://doi.org/10.1080/01621459.2023.2191817.
Cullen, Simon. 2010. “Survey-Driven Romanticism.” Review of Philosophy and Psychology 1 (January). https://doi.org/10.1007/s13164-009-0016-1.
D’Haultfœuille, Xavier, Ao Wang, Philippe Février, and Lionel Wilner. 2022. “Estimating the Gains (and Losses) of Revenue Management.” arXiv. https://doi.org/10.48550/ARXIV.2206.04424.
Davidov, Eldad. 2009. “Measurement Equivalence of Nationalism and Constructive Patriotism in the ISSP: 34 Countries in a Comparative Perspective.” Political Analysis 17. https://doi.org/10.1093/pan/mpn014.
Dawid, A. P. 2000. “Causal Inference Without Counterfactuals.” Journal of the American Statistical Association 95 (June). https://doi.org/10.1080/01621459.2000.10474210.
Deaton, Angus, and Nancy Cartwright. 2018. “Understanding and Misunderstanding Randomized Controlled Trials.” Social Science & Medicine 210 (August). https://doi.org/10.1016/j.socscimed.2017.12.005.
Decker, Christian, and Marco Ottaviani. 2023. “Preregistration and Credibility of Clinical Trials,” May. https://doi.org/10.1101/2023.05.22.23290326.
Deffner, Dominik, Julia M. Rohrer, and Richard McElreath. 2022. “A Causal Framework for Cross-Cultural Generalizability.” Advances in Methods and Practices in Psychological Science 5 (July). https://doi.org/10.1177/25152459221106366.
DellaVigna, Stefano, Ruben Durante, Brian Knight, and Eliana La Ferrara. 2016. “Market-Based Lobbying: Evidence from Advertising Spending in Italy.” American Economic Journal: Applied Economics 8 (January). https://doi.org/10.1257/app.20150042.
Druckman, James N., Donald P. Green, James H. Kuklinski, and Arthur Lupia. 2006. “The Growth and Development of Experimental Research in Political Science.” American Political Science Review 100 (November). https://doi.org/10.1017/s0003055406062514.
Duarte, Guilherme, Noam Finkelstein, Dean Knox, Jonathan Mummolo, and Ilya Shpitser. 2021. “An Automated Approach to Causal Inference in Discrete Settings.” arXiv. https://doi.org/10.48550/ARXIV.2109.13471.
Dube, Arindrajit, and Ben Zipperer. 2015. “Pooling Multiple Case Studies Using Synthetic Controls: An Application to Minimum Wage Policies.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2589786.
Dupas, Pascaline. 2011. “Do Teenagers Respond to HIV Risk Information? Evidence from a Field Experiment in Kenya.” American Economic Journal: Applied Economics 3 (January). https://doi.org/10.1257/app.3.1.1.
duPont, William, and Ilan Noy. 2015. “What Happened to Kobe? A Reassessment of the Impact of the 1995 Earthquake in Japan.” Economic Development and Cultural Change 63 (July). https://doi.org/10.1086/681129.
“Editorial Statement on Negative Findings.” 2015. Health Economics 24 (March). https://doi.org/10.1002/hec.3172.
Edlin, Aaron S., and Chris Shannon. 1998. “Strict Monotonicity in Comparative Statics.” Journal of Economic Theory 81 (July). https://doi.org/10.1006/jeth.1998.2405.
Efferson, Charles, Peter J. Richerson, Richard McElreath, Mark Lubell, Ed Edsten, Timothy M. Waring, Brian Paciotti, and William Baum. 2007. “Learning, Productivity, and Noise: An Experimental Study of Cultural Transmission on the Bolivian Altiplano.” Evolution and Human Behavior 28 (January). https://doi.org/10.1016/j.evolhumbehav.2006.05.005.
Efferson, C., R. Lalive, P. Richerson, R. McElreath, and M. Lubell. 2008. “Conformists and Mavericks: The Empirics of Frequency-Dependent Cultural Transmission.” Evolution and Human Behavior 29 (January). https://doi.org/10.1016/j.evolhumbehav.2007.08.003.
Egami, Naoki, and Kosuke Imai. 2018. “Causal Interaction in Factorial Experiments: Application to Conjoint Analysis.” Journal of the American Statistical Association 114 (August). https://doi.org/10.1080/01621459.2018.1476246.
Feenstra, Robert C., and Robert Inklaar. 2013. “Penn World Table 8.0.” https://doi.org/10.15141/S5159X.
Fenton, Norman, and Martin Neil. 2018. “Risk Assessment and Decision Analysis with Bayesian Networks,” September. https://doi.org/10.1201/b21982.
Fifield, Benjamin, Kosuke Imai, Jun Kawahara, and Christopher T. Kenny. 2020. “The Essential Role of Empirical Validation in Legislative Redistricting Simulation.” Statistics and Public Policy 7 (January). https://doi.org/10.1080/2330443x.2020.1791773.
Finkelstein, Amy, and Nathaniel Hendren. 2020. “Welfare Analysis Meets Causal Inference.” Journal of Economic Perspectives 34 (November). https://doi.org/10.1257/jep.34.4.146.
Fithian, William, Dennis Sun, and Jonathan Taylor. 2014. “Optimal Inference After Model Selection.” arXiv. https://doi.org/10.48550/ARXIV.1410.2597.
Friendly, Michael, Patricia E. Franklin, David Hoffman, and David C. Rubin. 1982. “The Toronto Word Pool: Norms for Imagery, Concreteness, Orthographic Variables, and Grammatical Usage for 1,080 Words.” Behavior Research Methods & Instrumentation 14 (September). https://doi.org/10.3758/bf03203275.
Fu, Anqi, Balasubramanian Narasimhan, and Stephen Boyd. 2020. “CVXR: An R Package for Disciplined Convex Optimization.” Journal of Statistical Software 94. https://doi.org/10.18637/jss.v094.i14.
Galiani, Sebastian, and Brian Quistorff. 2017. “The Synth_runner Package: Utilities to Automate Synthetic Control Estimation Using Synth.” The Stata Journal: Promoting Communications on Statistics and Stata 17 (December). https://doi.org/10.1177/1536867x1801700404.
Gangl, Markus. 2010. “Causal Inference in Sociological Research.” Annual Review of Sociology 36 (June). https://doi.org/10.1146/annurev.soc.012809.102702.
Gebharter, Alexander. 2017. “Causal Nets, Interventionism, and Mechanisms.” https://doi.org/10.1007/978-3-319-49908-6.
Gelman, Andrew. 2022. “Criticism as Asynchronous Collaboration: An Example from Social Science Research.” Stat 11 (June). https://doi.org/10.1002/sta4.464.
Gelman, Andrew, and Gary King. 1990. “Estimating Incumbency Advantage Without Bias.” American Journal of Political Science 34 (November). https://doi.org/10.2307/2111475.
Gerring, John. 2004. “What Is a Case Study and What Is It Good For?” American Political Science Review 98 (May). https://doi.org/10.1017/s0003055404001182.
Gertler, Mark, and Peter Karadi. 2015. “Monetary Policy Surprises, Credit Costs, and Economic Activity.” American Economic Journal: Macroeconomics 7 (January). https://doi.org/10.1257/mac.20130329.
Gietel-Basten, Stuart, Xuehui Han, and Yuan Cheng. 2019. “Assessing the Impact of the ‘One-Child Policy’ in China: A Synthetic Control Approach.” PLOS ONE 14 (November). https://doi.org/10.1371/journal.pone.0220170.
Girma, Sourafel, and Holger Görg. 2016. “Evaluating the Foreign Ownership Wage Premium Using a Difference-in-Differences Matching Approach.” World Scientific Studies in International Economics, July. https://doi.org/10.1142/9789814749237_0002.
Glass, Thomas A., Steven N. Goodman, Miguel A. Hernán, and Jonathan M. Samet. 2013. “Causal Inference in Public Health.” Annual Review of Public Health 34 (March). https://doi.org/10.1146/annurev-publhealth-031811-124606.
Gluud, Lise Lotte. 2006. “Bias in Clinical Intervention Research.” American Journal of Epidemiology 163 (January). https://doi.org/10.1093/aje/kwj069.
Goggin, Judith P., Charles P. Thompson, Gerhard Strube, and Liza R. Simental. 1991. “The Role of Language Familiarity in Voice Identification.” Memory & Cognition 19 (September). https://doi.org/10.3758/bf03199567.
Graetz, Nick, Courtney E. Boen, and Michael H. Esposito. 2022. “Structural Racism and Quantitative Causal Inference: A Life Course Mediation Framework for Decomposing Racial Health Disparities.” Journal of Health and Social Behavior 63 (January). https://doi.org/10.1177/00221465211066108.
Grier, Kevin, and Norman Maynard. 2016. “The Economic Consequences of Hugo Chavez: A Synthetic Control Analysis.” Journal of Economic Behavior & Organization 125 (May). https://doi.org/10.1016/j.jebo.2015.12.011.
Groenwold, Rolf H H, David B Nelson, Kristin L Nichol, Arno W Hoes, and Eelko Hak. 2009. “Sensitivity Analyses to Estimate the Potential Impact of Unmeasured Confounding in Causal Research.” International Journal of Epidemiology 39 (November). https://doi.org/10.1093/ije/dyp332.
Gründler, Klaus, and Tommy Krieger. 2022. “Should We Care (More) about Data Aggregation?” European Economic Review 142 (February). https://doi.org/10.1016/j.euroecorev.2021.104010.
Gunasekara, Fiona Imlach, Ken Richardson, Kristie Carter, and Tony Blakely. 2013. “Fixed Effects Analysis of Repeated Measures Data.” International Journal of Epidemiology 43 (December). https://doi.org/10.1093/ije/dyt221.
Hainmueller, Jens, Daniel J. Hopkins, and Teppei Yamamoto. 2014. “Causal Inference in Conjoint Analysis: Understanding Multidimensional Choices via Stated Preference Experiments.” Political Analysis 22. https://doi.org/10.1093/pan/mpt024.
Halperin, Basil, Benjamin Ho, John A List, and Ian Muir. 2021. “Toward an Understanding of the Economics of Apologies: Evidence from a Large-Scale Natural Field Experiment.” The Economic Journal 132 (July). https://doi.org/10.1093/ej/ueab062.
“Handbook of Causal Analysis for Social Research.” 2013. Handbooks of Sociology and Social Research. https://doi.org/10.1007/978-94-007-6094-3.
Hartman, Erin, and F. Daniel Hidalgo. 2018. “An Equivalence Approach to Balance and Placebo Tests.” American Journal of Political Science 62 (September). https://doi.org/10.1111/ajps.12387.
Hau, Arthur. 2001. “A General Theorem on the Comparative Statics of Changes in Risk.” The Geneva Papers on Risk and Insurance Theory 26 (June). https://doi.org/10.1023/a:1011260207279.
Hausman, Jerry. 2001. “Mismeasured Variables in Econometric Analysis: Problems from the Right and Problems from the Left.” Journal of Economic Perspectives 15 (November). https://doi.org/10.1257/jep.15.4.57.
Heckman, James. 1999. “Causal Parameters and Policy Analysis in Economics: A Twentieth Century Retrospective,” September. https://doi.org/10.3386/w7333.
Heersink, Boris, Brenton D. Peterson, and Jeffery A. Jenkins. 2017. “Disasters and Elections: Estimating the Net Effect of Damage and Relief in Historical Perspective.” Political Analysis 25 (April). https://doi.org/10.1017/pan.2017.7.
Hendricks, Kenneth, Alan Sorensen, and Thomas Wiseman. 2012. “Observational Learning and Demand for Search Goods.” American Economic Journal: Microeconomics 4 (February). https://doi.org/10.1257/mic.4.1.1.
Henrich, Joseph, Robert Boyd, Samuel Bowles, Colin Camerer, Ernst Fehr, Herbert Gintis, and Richard McElreath. 2001. “In Search of Homo Economicus: Behavioral Experiments in 15 Small-Scale Societies.” American Economic Review 91 (May). https://doi.org/10.1257/aer.91.2.73.
Henrich, Joseph, Robert Boyd, Samuel Bowles, Colin Camerer, Ernst Fehr, Herbert Gintis, Richard McElreath, et al. 2005. “‘Economic Man’ in Cross-Cultural Perspective: Behavioral Experiments in 15 Small-Scale Societies.” Behavioral and Brain Sciences 28 (December). https://doi.org/10.1017/s0140525x05000142.
Henrich, Joseph, Richard McElreath, Abigail Barr, Jean Ensminger, Clark Barrett, Alexander Bolyanatz, Juan Camilo Cardenas, et al. 2006. “Costly Punishment Across Human Societies.” Science 312 (June). https://doi.org/10.1126/science.1127333.
Hernán, Miguel A. 2005. “Invited Commentary: Hypothetical Interventions to Define Causal Effects—Afterthought or Prerequisite?” American Journal of Epidemiology 162 (October). https://doi.org/10.1093/aje/kwi255.
———. 2017. “Invited Commentary: Selection Bias Without Colliders.” American Journal of Epidemiology 185 (May). https://doi.org/10.1093/aje/kwx077.
Hill, Seth J., and Margaret E. Roberts. 2023. “Acquiescence Bias Inflates Estimates of Conspiratorial Beliefs and Political Misperceptions.” Political Analysis 31 (January). https://doi.org/10.1017/pan.2022.28.
Hirose, Kentaro, Kosuke Imai, and Jason Lyall. 2017. “Can Civilian Attitudes Predict Insurgent Violence? Ideology and Insurgent Tactical Choice in Civil War.” Journal of Peace Research 54 (January). https://doi.org/10.1177/0022343316675909.
Ho, Daniel E., and Kosuke Imai. 2008. “Estimating Causal Effects of Ballot Order from a Randomized Natural Experiment.” Public Opinion Quarterly 72. https://doi.org/10.1093/poq/nfn018.
Ho, Daniel E., and Kosuke Imai. 2006. “Randomization Inference with Natural Experiments.” Journal of the American Statistical Association 101 (September). https://doi.org/10.1198/016214505000001258.
Höfler, M. 2005. “Causal Inference Based on Counterfactuals.” BMC Medical Research Methodology 5 (September). https://doi.org/10.1186/1471-2288-5-28.
Hogan, Joseph W, and Tony Lancaster. 2004. “Instrumental Variables and Inverse Probability Weighting for Causal Inference from Longitudinal Observational Studies.” Statistical Methods in Medical Research 13 (February). https://doi.org/10.1191/0962280204sm351ra.
Hollyer, James R., B. Peter Rosendorff, and James Raymond Vreeland. 2014. “Measuring Transparency.” Political Analysis 22. https://doi.org/10.1093/pan/mpu001.
Hopkins, D. J., and G. King. 2010. “Improving Anchoring Vignettes: Designing Surveys to Correct Interpersonal Incomparability.” Public Opinion Quarterly 74 (March). https://doi.org/10.1093/poq/nfq011.
Horton, John J., David G. Rand, and Richard J. Zeckhauser. 2010. “The Online Laboratory: Conducting Experiments in a Real Labor Market.” arXiv. https://doi.org/10.48550/ARXIV.1004.2931.
House, Bailey R., Joan B. Silk, Joseph Henrich, H. Clark Barrett, Brooke A. Scelza, Adam H. Boyette, Barry S. Hewlett, Richard McElreath, and Stephen Laurence. 2013. “Ontogeny of Prosocial Behavior Across Diverse Societies.” Proceedings of the National Academy of Sciences 110 (August). https://doi.org/10.1073/pnas.1221217110.
Hudgens, Michael G., and M. Elizabeth Halloran. 2008. “Toward Causal Inference with Interference.” Journal of the American Statistical Association 103 (June). https://doi.org/10.1198/016214508000000292.
Humphreys, Macartan, Raul Sanchez de la Sierra, and Peter van der Windt. 2013. “Fishing, Commitment, and Communication: A Proposal for Comprehensive Nonbinding Research Registration.” Political Analysis 21. https://doi.org/10.1093/pan/mps021.
Iacus, Stefano M., Gary King, and Giuseppe Porro. 2011. “Multivariate Matching Methods That Are Monotonic Imbalance Bounding.” Journal of the American Statistical Association 106 (March). https://doi.org/10.1198/jasa.2011.tm09599.
Imai, Kosuke. 2008. “Statistical Analysis of Randomized Experiments with Non-Ignorable Missing Binary Outcomes: An Application to a Voting Experiment.” Journal of the Royal Statistical Society Series C: Applied Statistics 58 (December). https://doi.org/10.1111/j.1467-9876.2008.00637.x.
Imai, Kosuke, and David A. van Dyk. 2004. “Causal Inference with General Treatment Regimes.” Journal of the American Statistical Association 99 (September). https://doi.org/10.1198/016214504000001187.
Imai, Kosuke, and Gary King. 2004. “Did Illegal Overseas Absentee Ballots Decide the 2000 U.S. Presidential Election?” Perspectives on Politics 2 (September). https://doi.org/10.1017/s1537592704040332.
Imai, Kosuke, Gary King, and Clayton Nall. 2009. “The Essential Role of Pair Matching in Cluster-Randomized Experiments, with Application to the Mexican Universal Health Insurance Evaluation.” Statistical Science 24 (February). https://doi.org/10.1214/08-sts274.
Imai, Kosuke, Gary King, and Carlos Velasco Rivera. 2019. “Replication Data for: ‘Do Nonpartisan Programmatic Policies Generate Partisan Electoral Effects? Evidence from Two Large Scale Experiments.’” https://doi.org/10.7910/DVN/70SNIS.
Imai, Kosuke, Gary King, and Elizabeth A. Stuart. 2008. “Misunderstandings Between Experimentalists and Observationalists about Causal Inference.” Journal of the Royal Statistical Society Series A: Statistics in Society 171 (March). https://doi.org/10.1111/j.1467-985x.2007.00527.x.
Imai, Kosuke, and James Lo. 2021. “Robustness of Empirical Evidence for the Democratic Peace: A Nonparametric Sensitivity Analysis.” International Organization 75. https://doi.org/10.1017/s0020818321000126.
Imai, Kosuke, Bethany Park, and Kenneth F. Greene. 2015. “Using the Predicted Responses from List Experiments as Explanatory Variables in Regression Models.” Political Analysis 23. https://doi.org/10.1093/pan/mpu017.
Imai, Kosuke, and Marc Ratkovic. 2015. “Robust Estimation of Inverse Probability Weights for Marginal Structural Models.” Journal of the American Statistical Association 110 (July). https://doi.org/10.1080/01621459.2014.956872.
Imai, Kosuke, and Aaron Strauss. 2011. “Estimation of Heterogeneous Treatment Effects from Randomized Experiments, with Application to the Optimal Planning of the Get-Out-the-Vote Campaign.” Political Analysis 19. https://doi.org/10.1093/pan/mpq035.
Imai, Kosuke, Dustin Tingley, and Teppei Yamamoto. 2012. “Experimental Designs for Identifying Causal Mechanisms.” Journal of the Royal Statistical Society Series A: Statistics in Society 176 (November). https://doi.org/10.1111/j.1467-985x.2012.01032.x.
Imai, Kosuke, and Teppei Yamamoto. 2010. “Replication Data for: Causal Inference with Differential Measurement Error: Nonparametric Identification and Sensitivity Analysis.” https://doi.org/10.7910/DVN/TZOGL9.
Incerti, Trevor. 2020. “Corruption Information and Vote Share: A Meta-Analysis and Lessons for Experimental Design.” American Political Science Review 114 (June). https://doi.org/10.1017/s000305542000012x.
“Instrumental Variable Models for Discrete Outcomes.” 2010. Econometrica 78. https://doi.org/10.3982/ecta7315.
Iwashyna, Theodore J., and Edward H. Kennedy. 2013. “Instrumental Variable Analyses. Exploiting Natural Randomness to Understand Causal Mechanisms.” Annals of the American Thoracic Society 10 (June). https://doi.org/10.1513/annalsats.201303-054fr.
Jackman, Simon. 2001. “Multidimensional Analysis of Roll Call Data via Bayesian Simulation: Identification, Estimation, Inference, and Model Checking.” Political Analysis 9 (January). https://doi.org/10.1093/polana/9.3.227.
Jakovljević, Sanja, Hans Degryse, and Steven Ongena. 2020. “Introduction to the Symposium on Contemporary Banking Research: The Use of Fixed Effects to Disentangle Loan Demand from Loan Supply.” Economic Inquiry 58 (January). https://doi.org/10.1111/ecin.12875.
Jensen, Martin Kaae. 2017. “Distributional Comparative Statics.” The Review of Economic Studies 85 (May). https://doi.org/10.1093/restud/rdx021.
Kang, Hyunseung, Yang Jiang, Qingyuan Zhao, and Dylan S. Small. 2020. “Ivmodel: An r Package for Inference and Sensitivity Analysis of Instrumental Variables Models with One Endogenous Variable.” arXiv. https://doi.org/10.48550/ARXIV.2002.08457.
Karlan, Dean, Sneha Stephen, Jonathan Zinman, Keesler Welch, and Violetta Kuzmova. 2016. “Behind the GATE Experiment: Evidence on Effects of and Rationales for Subsidized Entrepreneurship Training.” AEA Randomized Controlled Trials, July. https://doi.org/10.1257/rct.1234.
Katz, Jonathan N., and Gary King. 1999. “A Statistical Model for Multiparty Electoral Data.” American Political Science Review 93 (March). https://doi.org/10.2307/2585758.
Kaye, Dalia Dassa. 1997. “Madrid’s Forgotten Forum: The Middle East Multilaterals.” The Washington Quarterly 20 (March). https://doi.org/10.1080/01636609709550235.
Keele, Luke. 2015. “The Statistics of Causal Inference: A View from Political Methodology.” Political Analysis 23. https://doi.org/10.1093/pan/mpv007.
Keele, Luke J., and Rocío Titiunik. 2015. “Geographic Boundaries as Regression Discontinuities.” Political Analysis 23. https://doi.org/10.1093/pan/mpu014.
Keele, Luke, Randolph T. Stevenson, and Felix Elwert. 2019. “The Causal Interpretation of Estimated Associations in Regression Models.” Political Science Research and Methods 8 (July). https://doi.org/10.1017/psrm.2019.31.
Kennedy, Edward H., Sivaraman Balakrishnan, and Max G’Sell. 2018. “Sharp Instruments for Classifying Compliers and Generalizing Causal Effects.” arXiv. https://doi.org/10.48550/ARXIV.1801.03635.
King, Gary, Robert O. Keohane, and Sidney Verba. 1995. “The Importance of Research Design in Political Science.” American Political Science Review 89 (June). https://doi.org/10.2307/2082445.
King, Gary, Christopher J. L. Murray, Joshua A. Salomon, and Ajay Tandon. 2004. “Enhancing the Validity and Cross-Cultural Comparability of Measurement in Survey Research.” American Political Science Review 98 (February). https://doi.org/10.1017/s000305540400108x.
King, Gary, and Jonathan Wand. 2007. “Comparing Incomparable Survey Responses: Evaluating and Selecting Anchoring Vignettes.” Political Analysis 15. https://doi.org/10.1093/pan/mpl011.
King, Gary, and Langche Zeng. 2001. “Improving Forecasts of State Failure.” World Politics 53 (July). https://doi.org/10.1353/wp.2001.0018.
———. 2006. “The Dangers of Extreme Counterfactuals.” Political Analysis 14. https://doi.org/10.1093/pan/mpj004.
Klößner, Stefan, Ashok Kaul, Gregor Pfeifer, and Manuel Schieler. 2018. “Comparative Politics and the Synthetic Control Method Revisited: A Note on Abadie et al. (2015).” Swiss Journal of Economics and Statistics 154 (May). https://doi.org/10.1186/s41937-017-0004-9.
“Knowledge-Based Intelligent Information and Engineering Systems.” 2004. Lecture Notes in Computer Science. https://doi.org/10.1007/b100910.
Koladjo, Babagnidé François, Sylvie Escolano, and Pascale Tubert-Bitter. 2018. “Instrumental Variable Analysis in the Context of Dichotomous Outcome and Exposure with a Numerical Experiment in Pharmacoepidemiology.” BMC Medical Research Methodology 18 (June). https://doi.org/10.1186/s12874-018-0513-y.
Kulkarni, Hrishikesh S., Janet S. Lee, Julie A. Bastarache, Wolfgang M. Kuebler, Gregory P. Downey, Guillermo M. Albaiceta, William A. Altemeier, et al. 2022. “Update on the Features and Measurements of Experimental Acute Lung Injury in Animals: An Official American Thoracic Society Workshop Report.” American Journal of Respiratory Cell and Molecular Biology 66 (February). https://doi.org/10.1165/rcmb.2021-0531st.
Lakoff, George. 1993. “The Contemporary Theory of Metaphor.” Metaphor and Thought, November. https://doi.org/10.1017/cbo9781139173865.013.
Lal, Apoorva, Mac Lockhart, Yiqing Xu, and Ziwen Zu. 2023. “How Much Should We Trust Instrumental Variable Estimates in Political Science? Practical Advice Based on over 60 Replicated Studies.” arXiv. https://doi.org/10.48550/ARXIV.2303.11399.
Lange, Theis, Mette Rasmussen, and Lau Caspar Thygesen. 2013. “Assessing Natural Direct and Indirect Effects Through Multiple Pathways.” American Journal of Epidemiology 179 (November). https://doi.org/10.1093/aje/kwt270.
Lattimore, Finnian, and David Rohde. 2019. “Replacing the Do-Calculus with Bayes Rule.” arXiv. https://doi.org/10.48550/ARXIV.1906.07125.
Laubach, Zachary M., Eleanor J. Murray, Kim L. Hoke, Rebecca J. Safran, and Wei Perng. 2021. “A Biologist’s Guide to Model Selection and Causal Inference.” Proceedings of the Royal Society B: Biological Sciences 288 (January). https://doi.org/10.1098/rspb.2020.2815.
Lebenbaum, Michael, Audrey Laporte, and Claire de Oliveira. 2021. “The Effect of Mental Health on Social Capital: An Instrumental Variable Analysis.” Social Science &Amp; Medicine 272 (March). https://doi.org/10.1016/j.socscimed.2021.113693.
Leeper, Thomas J., Sara B. Hobolt, and James Tilley. 2019. “Measuring Subgroup Preferences in Conjoint Experiments.” Political Analysis 28 (August). https://doi.org/10.1017/pan.2019.30.
Li, Hongbin, Junjian Yi, and Junsen Zhang. 2011. “Estimating the Effect of the One-Child Policy on the Sex Ratio Imbalance in China: Identification Based on the Difference-in-Differences.” Demography 48 (August). https://doi.org/10.1007/s13524-011-0055-y.
Loenneker, Hannah D., Erin M. Buchanan, Ana Martinovici, Maximilian A. Primbs, Mahmoud M. Elsherif, Bradley J. Baker, Leonie A. Dudda, et al. 2024. “We Don’t Know What You Did Last Summer. On the Importance of Transparent Reporting of Reaction Time Data Pre-Processing.” Cortex 172 (March). https://doi.org/10.1016/j.cortex.2023.11.012.
Lokshin, Ilya M. 2015. “Whatever Explains Whatever: The Duhem-Quine Thesis and Conventional Quantitative Methods in Political Science.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2555496.
Lousdal, Mette Lise. 2018. “An Introduction to Instrumental Variable Assumptions, Validation and Estimation.” Emerging Themes in Epidemiology 15 (January). https://doi.org/10.1186/s12982-018-0069-7.
Lubik, Thomas, and Frank Schorfheide. 2005. “A Bayesian Look at New Open Economy Macroeconomics.” NBER Macroeconomics Annual 20 (January). https://doi.org/10.1086/ma.20.3585427.
Lunt, M., D. Solomon, K. Rothman, R. Glynn, K. Hyrich, D. P. M. Symmons, and T. Sturmer. 2009. “Different Methods of Balancing Covariates Leading to Different Effect Estimates in the Presence of Effect Modification.” American Journal of Epidemiology 169 (January). https://doi.org/10.1093/aje/kwn391.
Lyall, Jason. 2014. “Process Tracing, Causal Inference, and Civil War.” Process Tracing, November. https://doi.org/10.1017/cbo9781139858472.010.
Lyall, Jason, Yang-Yang Zhou, and Kosuke Imai. 2019. “Can Economic Assistance Shape Combatant Support in Wartime? Experimental Evidence from Afghanistan.” American Political Science Review 114 (November). https://doi.org/10.1017/s0003055419000698.
Malesky, Edmund J., Cuong Viet Nguyen, and Anh Tran. 2014. “The Impact of Recentralization on Public Services: A Difference-in-Differences Analysis of the Abolition of Elected Councils in Vietnam.” American Political Science Review 108 (February). https://doi.org/10.1017/s0003055413000580.
Marshall, John. 2016. “Coarsening Bias: How Coarse Treatment Measurement Upwardly Biases Instrumental Variable Estimates.” Political Analysis 24. https://doi.org/10.1093/pan/mpw007.
Martin, Gregory J., and Ali Yurukoglu. 2017. “Bias in Cable News: Persuasion and Polarization.” American Economic Review 107 (September). https://doi.org/10.1257/aer.20160812.
Matousek, Jindrich, Tomas Havranek, and Zuzana Irsova. 2021. “Individual Discount Rates: A Meta-Analysis of Experimental Evidence.” Experimental Economics 25 (May). https://doi.org/10.1007/s10683-021-09716-9.
May, Carl R, Tracy Finch, Luciana Ballini, Anne MacFarlane, Frances Mair, Elizabeth Murray, Shaun Treweek, and Tim Rapley. 2011. “Evaluating Complex Interventions and Health Technologies Using Normalization Process Theory: Development of a Simplified Approach and Web-Enabled Toolkit.” BMC Health Services Research 11 (September). https://doi.org/10.1186/1472-6963-11-245.
McElreath, Richard, and Paul E. Smaldino. 2015. “Replication, Communication, and the Population Dynamics of Scientific Discovery.” PLOS ONE 10 (August). https://doi.org/10.1371/journal.pone.0136088.
Medland, Sarah E., and Peter K. Hatemi. 2009. “Political Science, Biometric Theory, and Twin Studies: A Methodological Introduction.” Political Analysis 17. https://doi.org/10.1093/pan/mpn016.
Mikhaylov, Slava, Michael Laver, and Kenneth R. Benoit. 2012. “Coder Reliability and Misclassification in the Human Coding of Party Manifestos.” Political Analysis 20. https://doi.org/10.1093/pan/mpr047.
Moland, Martin, and Asimina Michailidou. 2023. “Testing Causal Inference Between Social Media News Reliance and (Dis)trust of EU Institutions with an Instrumental Variable Approach: Lessons from a Null-Hypothesis Case.” Political Studies Review, July. https://doi.org/10.1177/14789299231183574.
Montgomery, Jacob M., Brendan Nyhan, and Michelle Torres. 2018. “Replication Data for: How Conditioning on Posttreatment Variables Can Ruin Your Experiment and What to Do about It.” https://doi.org/10.7910/DVN/EZSJ1S.
Morgan, Stephen L., and Jennifer J. Todd. 2008. “A Diagnostic Routine for the Detection of Consequential Heterogeneity of Causal Effects.” Sociological Methodology 38 (July). https://doi.org/10.1111/j.1467-9531.2008.00204.x.
Murray, Fiona, Philippe Aghion, Mathias Dewatripont, Julian Kolev, and Scott Stern. 2016. “Of Mice and Academics: Examining the Effect of Openness on Innovation.” American Economic Journal: Economic Policy 8 (February). https://doi.org/10.1257/pol.20140062.
Neyapti, Bilin. 2010. “Fiscal Decentralization and Deficits: International Evidence.” European Journal of Political Economy 26 (June). https://doi.org/10.1016/j.ejpoleco.2010.01.001.
Nguyen, Quynh C., Theresa L. Osypuk, Nicole M. Schmidt, M. Maria Glymour, and Eric J. Tchetgen Tchetgen. 2015. “Practical Guidance for Conducting Mediation Analysis with Multiple Mediators Using Inverse Odds Ratio Weighting.” American Journal of Epidemiology 181 (February). https://doi.org/10.1093/aje/kwu278.
Niehaus, Paul, and Sandip Sukhtankar. 2013. “Corruption Dynamics: The Golden Goose Effect.” American Economic Journal: Economic Policy 5 (November). https://doi.org/10.1257/pol.5.4.230.
Nieman, Mark David. 2015. “Replication Data for: Statistical Analysis of Strategic Interaction with Unobserved Player Actions: Introducing a Strategic Probit with Partial Observability.” https://doi.org/10.7910/DVN/28662.
Nikolov, Plamen, and Alan Adelman. 2020. “Pension Policies, Retirement and Human Capital Depreciation in Late Adulthood.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3747450.
Noe, Thomas. 2020. “Comparing the Chosen: Selection Bias When Selection Is Competitive.” Journal of Political Economy 128 (January). https://doi.org/10.1086/704076.
Okunogbe, Oyebola, and Victor Pouliquen. 2022. “Technology, Taxation, and Corruption: Evidence from the Introduction of Electronic Tax Filing.” American Economic Journal: Economic Policy 14 (February). https://doi.org/10.1257/pol.20200123.
Oneal, John R., and Bruce Russett. 2001. “Clear and Clean: The Fixed Effects of the Liberal Peace.” International Organization 55. https://doi.org/10.1162/00208180151140649.
———. 2005. “Rule of Three, Let It Be? When More Really Is Better.” Conflict Management and Peace Science 22 (September). https://doi.org/10.1080/07388940500339209.
Ostrovsky, Michael, and Michael Schwarz. 2010. “Information Disclosure and Unraveling in Matching Markets.” American Economic Journal: Microeconomics 2 (May). https://doi.org/10.1257/mic.2.2.34.
Page, Matthew J., Jonathan A. C. Sterne, Julian P. T. Higgins, and Matthias Egger. 2020. “Investigating and Dealing with Publication Bias and Other Reporting Biases in Meta‐analyses of Health Research: A Review.” Research Synthesis Methods 12 (November). https://doi.org/10.1002/jrsm.1468.
Page, Scott E. 2006. “Path Dependence.” Quarterly Journal of Political Science 1 (January). https://doi.org/10.1561/100.00000006.
Palm-Forster, Leah H., and Kent D. Messer. 2021. “Experimental and Behavioral Economics to Inform Agri-Environmental Programs and Policies.” Handbook of Agricultural Economics. https://doi.org/10.1016/bs.hesagr.2021.10.006.
Pang, Xun, Licheng Liu, and Yiqing Xu. 2021. “A Bayesian Alternative to Synthetic Control for Comparative Case Studies.” Political Analysis 30 (July). https://doi.org/10.1017/pan.2021.22.
“Partial Identification of Probability Distributions.” 2003. Springer Series in Statistics. https://doi.org/10.1007/b97478.
Pearce, Neil, and Debbie A Lawlor. 2016. “Causal Inference—so Much More Than Statistics.” International Journal of Epidemiology 45 (December). https://doi.org/10.1093/ije/dyw328.
Pearl, J. 2011. “Invited Commentary: Understanding Bias Amplification.” American Journal of Epidemiology 174 (October). https://doi.org/10.1093/aje/kwr352.
Pearl, Judea. 2003. “Statistics and Causal Inference: A Review.” Test 12 (December). https://doi.org/10.1007/bf02595718.
———. 2010. “The Foundations of Causal Inference.” Sociological Methodology 40 (August). https://doi.org/10.1111/j.1467-9531.2010.01228.x.
———. 2013. “Linear Models: A Useful ‘Microscope’ for Causal Analysis.” Journal of Causal Inference 1 (May). https://doi.org/10.1515/jci-2013-0003.
———. 2018. “Does Obesity Shorten Life? Or Is It the Soda? On Non-Manipulable Causes.” Journal of Causal Inference 6 (August). https://doi.org/10.1515/jci-2018-2001.
Pemstein, Daniel, Stephen A. Meserve, and James Melton. 2010. “Democratic Compromise: A Latent Variable Analysis of Ten Measures of Regime Type.” Political Analysis 18. https://doi.org/10.1093/pan/mpq020.
Pfister, Niklas, Peter Bühlmann, Bernhard Schölkopf, and Jonas Peters. 2017. “Kernel-Based Tests for Joint Independence.” Journal of the Royal Statistical Society Series B: Statistical Methodology 80 (May). https://doi.org/10.1111/rssb.12235.
Pham, Thai T., and Yuanyuan Shen. 2017. “A Deep Causal Inference Approach to Measuring the Effects of Forming Group Loans in Online Non-Profit Microfinance Platform.” arXiv. https://doi.org/10.48550/ARXIV.1706.02795.
Phillips, Carl V. 2004. “Publication Bias in Situ.” BMC Medical Research Methodology 4 (August). https://doi.org/10.1186/1471-2288-4-20.
Pieters, Hannah, Daniele Curzi, Alessandro Olper, and Johan Swinnen. 2016. “Effect of Democratic Reforms on Child Mortality: A Synthetic Control Analysis.” The Lancet Global Health 4 (September). https://doi.org/10.1016/s2214-109x(16)30104-8.
Pizzi, C., B. De Stavola, F. Merletti, R. Bellocco, I. dos Santos Silva, N. Pearce, and L. Richiardi. 2010. “Sample Selection and Validity of Exposure-Disease Association Estimates in Cohort Studies.” Journal of Epidemiology & Community Health 65 (September). https://doi.org/10.1136/jech.2009.107185.
Plagborg-Møller, Mikkel, and Christian K. Wolf. 2020. “Instrumental Variable Identification of Dynamic Variance Decompositions.” arXiv. https://doi.org/10.48550/ARXIV.2011.01380.
Powell, David. 2016. “Synthetic Control Estimation Beyond Case Studies: Does the Minimum Wage Reduce Employment?” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2791789.
———. 2018. “Imperfect Synthetic Controls: Did the Massachusetts Health Care Reform Save Lives?” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3192710.
Prelec, Dražen. 2004. “A Bayesian Truth Serum for Subjective Data.” Science 306 (October). https://doi.org/10.1126/science.1102081.
Ragin, Charles C. 2006. “Set Relations in Social Research: Evaluating Their Consistency and Coverage.” Political Analysis 14. https://doi.org/10.1093/pan/mpj019.
Reichenheim, Michael E, and Evandro SF Coutinho. 2010. “Measures and Models for Causal Inference in Cross-Sectional Studies: Arguments for the Appropriateness of the Prevalence Odds Ratio and Related Logistic Regression.” BMC Medical Research Methodology 10 (July). https://doi.org/10.1186/1471-2288-10-66.
Richiardi, L., R. Bellocco, and D. Zugna. 2013. “Mediation Analysis in Epidemiology: Methods, Interpretation and Bias.” International Journal of Epidemiology 42 (September). https://doi.org/10.1093/ije/dyt127.
Roediger, Henry L., Michelle L. Meade, and Erik T. Bergman. 2001. “Social Contagion of Memory.” Psychonomic Bulletin & Review 8 (June). https://doi.org/10.3758/bf03196174.
Rohlfing, Ingo, and Christina Isabel Zuber. 2019. “Check Your Truth Conditions! Clarifying the Relationship Between Theories of Causation and Social Science Methods for Causal Inference.” Sociological Methods & Research 50 (February). https://doi.org/10.1177/0049124119826156.
Rosenfeld, Bryn, Kosuke Imai, and Jacob N. Shapiro. 2015. “An Empirical Validation Study of Popular Survey Methodologies for Sensitive Questions.” American Journal of Political Science 60 (August). https://doi.org/10.1111/ajps.12205.
Ross, Cody T., Bruce Winterhalder, and Richard McElreath. 2020. “Racial Disparities in Police Use of Deadly Force Against Unarmed Individuals Persist After Appropriately Benchmarking Shooting Data on Violent Crime Rates.” Social Psychological and Personality Science 12 (June). https://doi.org/10.1177/1948550620916071.
Roth, Jonathan, and Pedro H. C. Sant’Anna. 2023. “When Is Parallel Trends Sensitive to Functional Form?” Econometrica 91. https://doi.org/10.3982/ecta19402.
Rothman, Kenneth J., and Sander Greenland. 2005. “Causation and Causal Inference in Epidemiology.” American Journal of Public Health 95 (July). https://doi.org/10.2105/ajph.2004.059204.
Round, Jeff, Robyn Drake, Edward Kendall, Rachael Addicott, Nicky Agelopoulos, and Louise Jones. 2013. “Evaluating a Complex System-Wide Intervention Using the Difference in Differences Method: The Delivering Choice Programme.” BMJ Supportive & Palliative Care 5 (August). https://doi.org/10.1136/bmjspcare-2012-000285.
Rubin, Donald B. 2005. “Causal Inference Using Potential Outcomes.” Journal of the American Statistical Association 100 (March). https://doi.org/10.1198/016214504000001880.
Saridakis, George, Yanqing Lai, Rebeca I. Muñoz Torres, and Stephen Gourlay. 2018. “Exploring the Relationship Between Job Satisfaction and Organizational Commitment: An Instrumental Variable Approach.” The International Journal of Human Resource Management 31 (January). https://doi.org/10.1080/09585192.2017.1423100.
Scheel, Anne M., Mitchell R. M. J. Schijen, and Daniël Lakens. 2021. “An Excess of Positive Results: Comparing the Standard Psychology Literature with Registered Reports.” Advances in Methods and Practices in Psychological Science 4 (April). https://doi.org/10.1177/25152459211007467.
Schooling, C. Mary, and Heidi E. Jones. 2018. “Clarifying Questions about ‘Risk Factors’: Predictors Versus Explanation.” Emerging Themes in Epidemiology 15 (August). https://doi.org/10.1186/s12982-018-0080-z.
Schulz, Kenneth F. 1995. “Empirical Evidence of Bias.” JAMA 273 (February). https://doi.org/10.1001/jama.1995.03520290060030.
Sekhon, Jasjeet S. 2004. “Quality Meets Quantity: Case Studies, Conditional Probability, and Counterfactuals.” Perspectives on Politics 2 (June). https://doi.org/10.1017/s1537592704040150.
Shams, Ladan, and Ulrik R. Beierholm. 2010. “Causal Inference in Perception.” Trends in Cognitive Sciences 14 (September). https://doi.org/10.1016/j.tics.2010.07.001.
Sharma, Amit, and Emre Kiciman. 2020. “DoWhy: An End-to-End Library for Causal Inference.” arXiv. https://doi.org/10.48550/ARXIV.2011.04216.
Shi, Xu, Kendrick Li, Wang Miao, Mengtong Hu, and Eric Tchetgen Tchetgen. 2021. “Theory for Identification and Inference with Synthetic Controls: A Proximal Causal Inference Framework.” arXiv. https://doi.org/10.48550/ARXIV.2108.13935.
Shirai, Koji. 2010. “Monotone Comparative Statics of Characteristic Demand.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.1553547.
Sills, Erin O., Diego Herrera, A. Justin Kirkpatrick, Amintas Brandão, Rebecca Dickson, Simon Hall, Subhrendu Pattanayak, et al. 2015. “Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation.” PLOS ONE 10 (July). https://doi.org/10.1371/journal.pone.0132590.
Sloman, Steven. 2014. “Comments on Quantum Probability Theory.” Topics in Cognitive Science 6 (January). https://doi.org/10.1111/tops.12072.
Sohn, Michael B., and Hongzhe Li. 2019. “Compositional Mediation Analysis for Microbiome Studies.” The Annals of Applied Statistics 13 (March). https://doi.org/10.1214/18-aoas1210.
Spencer, R. Mason, and Robert W. Weisberg. 1986. “Context-Dependent Effects on Analogical Transfer.” Memory & Cognition 14 (September). https://doi.org/10.3758/bf03197019.
Stanovich, Keith E. 1988. “Explaining the Differences Between the Dyslexic and the Garden-Variety Poor Reader.” Journal of Learning Disabilities 21 (December). https://doi.org/10.1177/002221948802101003.
Steel, Daniel. 2004. “Social Mechanisms and Causal Inference.” Philosophy of the Social Sciences 34 (March). https://doi.org/10.1177/0048393103260775.
Stock, James H, and Francesco Trebbi. 2003. “Retrospectives: Who Invented Instrumental Variable Regression?” Journal of Economic Perspectives 17 (August). https://doi.org/10.1257/089533003769204416.
Stojmenovska, Dragana, Thijs Bol, and Thomas Leopold. 2019. “Teaching Replication to Graduate Students.” Teaching Sociology 47 (August). https://doi.org/10.1177/0092055x19867996.
Stommes, Drew, P. M. Aronow, and Fredrik Sävje. 2021. “On the Reliability of Published Findings Using the Regression Discontinuity Design in Political Science.” arXiv. https://doi.org/10.48550/ARXIV.2109.14526.
Tan, Xiaoqing, Shu Yang, Wenyu Ye, Douglas E. Faries, Ilya Lipkovich, and Zbigniew Kadziola. 2022. “Combining Doubly Robust Methods and Machine Learning for Estimating Average Treatment Effects for Observational Real-World Data.” arXiv. https://doi.org/10.48550/ARXIV.2204.10969.
Tappin, Ben M, Gordon Pennycook, and David G Rand. 2020. “Thinking Clearly about Causal Inferences of Politically Motivated Reasoning: Why Paradigmatic Study Designs Often Undermine Causal Inference.” Current Opinion in Behavioral Sciences 34 (August). https://doi.org/10.1016/j.cobeha.2020.01.003.
Taylor, Sean J., and Dean Eckles. 2017. “Randomized Experiments to Detect and Estimate Social Influence in Networks.” arXiv. https://doi.org/10.48550/ARXIV.1709.09636.
Tchetgen, Eric J Tchetgen, Andrew Ying, Yifan Cui, Xu Shi, and Wang Miao. 2020. “An Introduction to Proximal Causal Learning.” arXiv. https://doi.org/10.48550/ARXIV.2009.10982.
“The Racial Record of Johns Hopkins University.” 1999. The Journal of Blacks in Higher Education. https://doi.org/10.2307/2999371.
Thiem, Alrik. 2019. “Beyond the Facts: Limited Empirical Diversity and Causal Inference in Qualitative Comparative Analysis.” Sociological Methods & Research 51 (November). https://doi.org/10.1177/0049124119882463.
Thoemmes, Felix, and Karthika Mohan. 2015. “Graphical Representation of Missing Data Problems.” Structural Equation Modeling: A Multidisciplinary Journal 22 (January). https://doi.org/10.1080/10705511.2014.937378.
Tong, Christopher. 2019. “Statistical Inference Enables Bad Science; Statistical Thinking Enables Good Science.” The American Statistician 73 (March). https://doi.org/10.1080/00031305.2018.1518264.
Tricco, Andrea C., Jennifer Tetzlaff, Margaret Sampson, Dean Fergusson, Elise Cogo, Tanya Horsley, and David Moher. 2008. “Few Systematic Reviews Exist Documenting the Extent of Bias: A Systematic Review.” Journal of Clinical Epidemiology 61 (May). https://doi.org/10.1016/j.jclinepi.2007.10.017.
Trinder, Mark, Keith R. Walley, John H. Boyd, and Liam R. Brunham. 2020. “Causal Inference for Genetically Determined Levels of High-Density Lipoprotein Cholesterol and Risk of Infectious Disease.” Arteriosclerosis, Thrombosis, and Vascular Biology 40 (January). https://doi.org/10.1161/atvbaha.119.313381.
Tropp, Joel A. 2015. “An Introduction to Matrix Concentration Inequalities.” arXiv. https://doi.org/10.48550/ARXIV.1501.01571.
VanderWeele, T. J., and S. Vansteelandt. 2010. “Odds Ratios for Mediation Analysis for a Dichotomous Outcome.” American Journal of Epidemiology 172 (October). https://doi.org/10.1093/aje/kwq332.
Vu, Patrick. 2022. “Can the Replication Rate Tell Us about Publication Bias?” arXiv. https://doi.org/10.48550/ARXIV.2206.15023.
Wang, Zhenqian, and Jiawen Lu. 2022. “Sex-Specific Exposures and Sex-Combined Outcomes in Two-Sample Mendelian Randomization May Mislead the Causal Inference.” Arthritis Research & Therapy 24 (October). https://doi.org/10.1186/s13075-022-02922-7.
Wilcox, A. J., C. R. Weinberg, and O. Basso. 2011. “On the Pitfalls of Adjusting for Gestational Age at Birth.” American Journal of Epidemiology 174 (September). https://doi.org/10.1093/aje/kwr230.
Wirtz, Veronika J., Yared Santa-Ana-Tellez, Edson Servan-Mori, and Leticia Avila-Burgos. 2012. “Heterogeneous Effects of Health Insurance on Out-of-Pocket Expenditure on Medicines in Mexico.” Value in Health 15 (July). https://doi.org/10.1016/j.jval.2012.01.006.
Wratil, Christopher, and Sara B Hobolt. 2019. “Public Deliberations in the Council of the European Union: Introducing and Validating DICEU.” European Union Politics 20 (April). https://doi.org/10.1177/1465116519839152.
Wu, Zunyou, and Jennifer M. McGoogan. 2020. “Characteristics of and Important Lessons from the Coronavirus Disease 2019 (COVID-19) Outbreak in China.” JAMA 323 (April). https://doi.org/10.1001/jama.2020.2648.
Ye, Xi, and Greg Durrett. 2022. “The Unreliability of Explanations in Few-Shot Prompting for Textual Reasoning.” arXiv. https://doi.org/10.48550/ARXIV.2205.03401.
Ying, Andrew, Wang Miao, Xu Shi, and Eric J. Tchetgen Tchetgen. 2021. “Proximal Causal Inference for Complex Longitudinal Studies.” arXiv. https://doi.org/10.48550/ARXIV.2109.07030.
Zhao, Shandong, David A. van Dyk, and Kosuke Imai. 2020. “Propensity Score-Based Methods for Causal Inference in Observational Studies with Non-Binary Treatments.” Statistical Methods in Medical Research 29 (March). https://doi.org/10.1177/0962280219888745.
Zhu, Yaochen, Jing Ma, and Jundong Li. 2023. “Causal Inference in Recommender Systems: A Survey of Strategies for Bias Mitigation, Explanation, and Generalization.” arXiv. https://doi.org/10.48550/ARXIV.2301.00910.