Automated Syllabus of Meta-Science Papers

Built by Rex W. Douglass @RexDouglass ; Github ; LinkedIn

Papers curated by hand, summaries and taxonomy written by LLMs.

Submit paper to add for review

Addressing Challenges in Observational Studies

> Improving Estimation Techniques for Complex Data Scenarios

>> Competing Risks & Hierarchical Regression Models
  • Consider using hierarchical regression models for analyzing multiple outcomes simultaneously, as they offer increased statistical power and precision compared to traditional separate regression models while allowing for heterogeneous effects across outcomes. (David B. Richardson et al. 2015a)

  • Analyze competing risks using cause-specific hazard functions instead of latent failure times, as the former are identifiable and interpretable quantities that allow for the estimation of treatment or exposure effects on specific failure types, the study of interrelationships among failure types, and the estimation of failure rates for some causes given the removal of certain other failure types. (Tai et al. 2001)

>> Addressing Biases in Time-Dependent Covariates
  • Avoid using future data when defining covariates in a Cox model, as doing so can introduce significant bias and lead to erroneous conclusions. (Zhang et al. 2018)

  • Consider using the case-time-control method instead of the case-crossover method for estimating the effect of a dichotomous predictor on a nonrepeated event, especially when the distribution of the covariate changes over time, as it allows for the inclusion of a control for time and avoids potential biases caused by monotonic functions of time. (Allison and Christakis 2006)

>> Polynomial Approximation for Time Dependence in Binary Data
  • Consider using a simple cubic polynomial approximation to model time dependence in binary data, as it addresses the challenges of complete or quasi-complete separation and inefficiency associated with time dummies, and offers greater interpretability compared to splines. (Carter and Signorino 2010)
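The cubic-polynomial idea above amounts to replacing time dummies with three smooth regressors. A minimal sketch (the duration counter is hypothetical):

```python
# Sketch of the Carter & Signorino (2010) cubic-polynomial approach:
# instead of one dummy per time period, append t, t^2, t^3 to the
# specification for binary time-series-cross-section data.
# 'years_since_event' is a hypothetical duration counter.

def cubic_time_terms(years_since_event):
    """Return the t, t^2, t^3 regressors for each observation."""
    return [(t, t ** 2, t ** 3) for t in years_since_event]

durations = [0, 1, 2, 3, 4]           # years since the last event
X_time = cubic_time_terms(durations)  # append these columns to X
```

These three columns replace the full set of time dummies, avoiding separation problems while remaining easy to interpret.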
>> Markov Transition Model Bias Mitigation with Binary Dependent Variables
  • To avoid the biased estimates and poor confidence-interval coverage caused by recoding ongoing years of a binary dependent variable to zero, either set ongoing years to missing or use the untransformed dependent variable when estimating a first-order Markov transition model. (McGrath 2015)
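The "set ongoing years to missing" option above can be sketched directly; the binary outcome history below is hypothetical:

```python
# Sketch of McGrath's (2015) recommendation for first-order Markov
# transition models of a binary outcome (e.g., conflict): rather than
# recoding ongoing years to 0, drop them (treat as missing) so the
# model describes transitions into the state.
# 'series' is a hypothetical outcome history; 1 = event occurring.

def drop_ongoing_years(series):
    """Keep onset years and non-event years; drop years where the
    event was already ongoing (previous year also 1)."""
    kept = []
    for i, y in enumerate(series):
        if y == 1 and i > 0 and series[i - 1] == 1:
            continue  # ongoing year -> treated as missing
        kept.append((i, y))
    return kept

history = [0, 0, 1, 1, 1, 0, 1]
print(drop_ongoing_years(history))
# onset years (index 2 and 6) are kept, ongoing years (3 and 4) dropped
```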

> Avoiding Bias in Estimating Causal Effects

>> Avoiding Bias from Controlling Post-Treatment or Intermediate Variables
  • Carefully consider the causal relationships between variables and avoid adjusting for variables that are descendants of an intermediate variable, as doing so can introduce bias into estimates of causal effects. (Howards et al. 2012)

  • Avoid controlling for post-treatment variables that are affected by the treatment, as doing so can introduce bias and distort estimates of the treatment effect. (Sartwell and Stark 1991)

>> Collider Variable Adjustment Caution in Preterm Birth Research
  • Be cautious when adjusting for gestational age in studies examining the relationship between preterm birth and infant health outcomes, as gestational age can act as a collider variable leading to biased estimates of causal effects. (Wilcox, Weinberg, and Basso 2011)
>> Integer Programming Techniques for Nonbipartite Matching
  • Consider using integer programming techniques for nonbipartite matching in observational studies, as it provides greater flexibility compared to traditional network optimization techniques, allowing for fine and near-fine balance for several nominal variables, optimal subset matching, and forcing balance on means simultaneously, ultimately leading to stronger instrumental variables and improved causal inferences. (Zubizarreta et al. 2013)
>> Difference-in-Differences with Parallel Trend Assumption Evaluation
  • Consider applying difference-in-differences methods when analyzing observational data to account for unmeasured time-invariant confounders, but carefully evaluate the assumption of parallel trends in county attributes. (Grabich et al. 2015)
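The difference-in-differences estimator above is a simple double contrast; a minimal sketch with hypothetical group means:

```python
# Minimal difference-in-differences sketch (hypothetical group means):
# DiD = (treated_post - treated_pre) - (control_post - control_pre).
# Under parallel trends (Grabich et al. 2015), the control-group change
# estimates what the treated group would have done absent treatment.

def did(treated_pre, treated_post, control_pre, control_post):
    return (treated_post - treated_pre) - (control_post - control_pre)

effect = did(treated_pre=10.0, treated_post=14.0,
             control_pre=9.0, control_post=11.0)
print(effect)  # 2.0: the treated group rose by 4, controls by 2
```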

> Improved Demographic Estimation Techniques

>> Advanced Modeling Strategies for Enhanced Prediction Accuracy
  • Consider using correlated smoothing priors for stratum-specific time effects in multivariate APC models, which allows for the sharing of information across strata and can improve the precision of estimates. (Riebler, Held, and Rue 2012)

  • Consider employing a difference-in-differences approach to remove the influence of confounding variables that affect both treatment and control groups equally, allowing for clearer observation of the effects of interest. (Preston and Wang 2006)

  • Utilize a combination of demographic models and statistical time series methods to create a rich yet parsimonious framework for forecasting mortality, while providing probabilistic confidence regions for your predictions. (R. D. Lee and Carter 1992)
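The Lee–Carter framework above decomposes log mortality as log m(x,t) ≈ a_x + b_x·k_t. A rough sketch using simple averages in place of the paper's SVD step (the rates below are made up):

```python
import math

# Rough sketch of the Lee-Carter (1992) decomposition, using simple
# averages instead of the SVD of the original paper; rates are
# hypothetical (rows = ages, columns = years).

rates = [
    [0.010, 0.009, 0.008],
    [0.020, 0.018, 0.016],
]
log_m = [[math.log(r) for r in row] for row in rates]
T = len(log_m[0])

# a_x: average log rate for each age
a = [sum(row) / T for row in log_m]
# k_t: overall mortality index (sum of age-specific deviations)
k = [sum(log_m[x][t] - a[x] for x in range(len(a))) for t in range(T)]
# b_x: how strongly each age responds to k_t (least-squares slope)
b = [sum((log_m[x][t] - a[x]) * k[t] for t in range(T)) /
     sum(kt ** 2 for kt in k) for x in range(len(a))]

# forecast k_t as a random walk with drift, as in the original method
drift = (k[-1] - k[0]) / (T - 1)
k_next = k[-1] + drift
```

With declining rates, k_t falls over time and the drift term projects the decline forward; prediction intervals then come from the time-series model for k_t.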

>> Hierarchical Models for Enhancing Demographic Data Accuracy
  • Consider leveraging clusters of areas with similar data quality to create a hierarchical structure in your compound Poisson model, requiring only prior information about the reporting probability in areas with the best data quality for model identifiability. (Oliveira et al. 2022)

  • Utilize a Bayesian integrated population model to simultaneously estimate adjustment factors for censuses, completeness of death and birth counts, and migration estimates while considering uncertainty in the data, allowing for consistent demographic estimates that align with the population dynamics model and the structure and regularities of demographic rates. (Alexander and Alkema 2018)

  • Consider using a Bayesian hierarchical model with penalized B-spline regression to estimate under-five mortality rates, as it enables the flexible capture of changes over time while addressing biases in data series through the inclusion of a multilevel model and improving spline extrapolations via logarithmic pooling of the posterior predictive distribution of country-specific changes in spline coefficients with observed changes on the global level. (Alkema and New 2014)

>> Bayesian Hierarchical Models with Optimized Prior Distributions
  • Carefully consider the choice of prior distributions in Bayesian hierarchical models, as common practices may lead to incorrect formalizations of prior knowledge and suboptimal estimates. Instead, the authors propose a novel approach that allows for the inclusion of different explanatory variables in a time series regression for each cross-section, while still borrowing strength from one regression to improve the estimation of all, and that requires fewer adjustable parameters than traditional Bayesian methods. (NA?)

> Addressing Bias and Confounding in Nonrandomized Studies

>> Addressing Misconceptions and Limitations in Traditional Statistics
  • Avoid misinterpretation of p-values by recognizing them as measures of the compatibility of data with the null hypothesis rather than as direct evidence supporting alternative hypotheses, and instead consider additional metrics like confidence intervals and effect sizes to better understand the practical significance of your findings. (Rijn et al. 2017)

  • Recognize the inherent limitations of attempting to determine the causes of effects (CoE) solely based on statistical evidence, as this often requires counterfactual reasoning and the assumption of unverifiable conditions, resulting in potentially arbitrary and uncertain conclusions. (Dawid, Musio, and Fienberg 2016)

  • Recognize the limitations of inferential statistics in nonrandomized studies, as their probabilistic interpretations assume random assignment, and instead emphasize data description and summarization or adopt more realistic probability models. (Sander Greenland 1990)
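The first recommendation above — pairing a p-value with an effect size and confidence interval — can be sketched as follows (hypothetical samples, normal approximation):

```python
from statistics import NormalDist, mean, stdev
import math

# Sketch of reporting an effect size and confidence interval alongside
# the p-value (Rijn et al. 2017): the p-value only measures
# compatibility with the null, so pair it with an interval estimate.
# The two samples below are hypothetical.

a = [5.1, 4.8, 5.6, 5.0, 5.3, 4.9]
b = [4.5, 4.7, 4.2, 4.9, 4.4, 4.6]

diff = mean(a) - mean(b)                      # effect size (mean difference)
se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
z = diff / se
p = 2 * (1 - NormalDist().cdf(abs(z)))        # normal approximation
ci = (diff - 1.96 * se, diff + 1.96 * se)     # ~95% interval

print(f"difference = {diff:.2f}, "
      f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), p = {p:.4f}")
```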

>> Unmeasured Confounding & Alternative Approaches using DAGs
  • Be aware that standard methods of estimating direct effects, such as stratification or regression adjustment, do not always provide accurate estimates, particularly when there are unmeasured confounders that affect both the intermediate variable and the outcome, and that alternative approaches like directed acyclic graphs (DAGs) can help identify and address these issues. (Stephen R. Cole and Hernán 2002)
>> Three-Way Fixed Effects Model for Longitudinal Data Analysis
  • Employ a three-way fixed effects model when analyzing longitudinal data involving multiple groups (such as countries and diseases) to accurately estimate the impact of an intervention (like new drug launches) while controlling for time-invariant group characteristics and common shocks. (Hausman 2001)
>> Instrumental Variable Analysis: Selection, Assumptions, Validation
  • Carefully consider and attempt to validate all four instrumental variable assumptions - relevance, exclusion restriction, exchangeability, and monotonicity - before using an instrument to estimate causal effects in observational studies. (Lousdal 2018)

  • Consider using instrumental variable analysis to address endogeneity issues in nonrandomized studies, which occurs when the treatment or exposure of interest is influenced by the same factors as the response variable, leading to biased estimates of the effect of the exposure. To ensure the validity of the analysis, three fundamental assumptions must be met: relevance, exogeneity, and exclusion restriction. (Bagiella et al. 2015)

  • Carefully consider and attempt to identify any potential instrument-outcome confounders when using instrumental variable analysis, as failure to do so could lead to biased estimates of causal effects. (Garabedian et al. 2014)

  • Carefully select instrumental variables (IVs) that meet the three critical assumptions of being associated with treatment assignment, having no direct association with the outcome, and not being associated with measured confounders, while acknowledging the challenges of identifying suitable IVs and the limitations of the method, particularly its reliance on the assumption of monotonicity. (Iwashyna and Kennedy 2013)

  • Consider using instrumental variable analysis in observational studies to address unmeasured confounding, as it mimics the benefits of random assignment in RCTs and may provide more accurate estimates of treatment effects. (Stel et al. 2012)

  • Consider using propensity score methods, particularly for their ability to reduce bias in estimating common measures of treatment effect, but carefully evaluate the balance achieved through these methods and understand the limitations of instrumental variables before applying them. (Austin 2006)
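With a single instrument, the IV logic in the bullets above reduces to the Wald estimator; a minimal sketch with hypothetical data:

```python
# Minimal instrumental-variable (Wald) estimator sketch: with one
# instrument z, the effect of x on y is cov(z, y) / cov(z, x).
# Relevance requires cov(z, x) != 0; the exclusion restriction and
# exchangeability cannot be verified from the data alone.
# All numbers are hypothetical.

def cov(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((p - mu) * (q - mv) for p, q in zip(u, v)) / (len(u) - 1)

z = [0, 0, 1, 1, 0, 1, 1, 0]                   # instrument (e.g., encouragement)
x = [1.0, 1.2, 2.1, 2.4, 0.9, 2.0, 2.2, 1.1]   # received treatment intensity
y = [3.0, 3.1, 5.2, 5.6, 2.9, 5.0, 5.3, 3.2]   # outcome

beta_iv = cov(z, y) / cov(z, x)
print(round(beta_iv, 2))
```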

>> Assumptions in Preference-Based IV Methods
  • Carefully consider the assumptions of exclusion restriction and monotonicity when using preference-based instrumental variable methods, as violations of these assumptions can lead to biased estimates of treatment effects due to unobserved confounding or treatment effect heterogeneity. (Brookhart and Schneeweiss 2007)
>> Instrumental Variable Approaches and Their Limitations
  • Be aware that the two-stage predictor substitution (2SPS) method for estimating the causal odds ratio using instrumental variable logistic regression produces asymptotically biased estimates when there is no unmeasured confounding, and this bias increases with increasing unmeasured confounding. (Cai, Small, and Have 2011)

  • Carefully choose the appropriate instrumental variable method for your specific study design and data type, considering potential issues such as model misspecification, violation of distributional assumptions, and the interpretability of results, particularly when working with dichotomous treatments and outcomes. (Rassen et al. 2008)

>> Differentiating Prediction vs Explanation Goals in Biomedical Research
  • Clearly distinguish between prediction and explanation objectives in observational biomedical studies, as they require different approaches, interpretations, and levels of evidence, and conflating them can result in misleading conclusions and wasted resources. (Schooling and Jones 2018)
>> Variance Stabilization Transformations for Improving Power
  • Consider using variance stabilizing transformations (VSTs) to improve statistical power and reduce bias when analyzing non-normal data, as demonstrated through various examples such as Poisson models, binomial tests, and chi-squared statistics. (NA?)

> Bias Control Techniques in Experimental & Observational Research

>> Bias Mitigation Strategies in Randomized Trials
  • Critically appraise the control of bias in individual trials, as the influence of different components like adequate randomization, blinding, and follow-up cannot be predicted. (Gluud 2006)

  • Ensure proper allocation concealment in your studies, as inadequate or unclear concealment can lead to biased and exaggerated estimates of treatment effects. (Schulz 1995)

>> Quantifying Residual Biases via Sensitivity Analysis
  • Carefully consider the possibility of bias in non-experimental studies, particularly when evaluating small associations, but even large associations may be affected by bias, and techniques like sensitivity analysis can help quantify the impact of residual biases and inform judgement about causality. (“The Racial Record of Johns Hopkins University” 1999)
>> Propensity Score Balancing Method Selection
  • Carefully consider the choice of propensity score balancing method when working with observational studies, as different methods such as stratification, weighting, and matching can yield significantly different effect estimates even when effectively reducing covariate imbalances. (Lunt et al. 2009)
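Whichever balancing method is chosen, the achieved balance should be checked; a common diagnostic is the standardized mean difference, sketched here with hypothetical data:

```python
from statistics import mean, stdev

# Balance-check sketch for propensity-score methods: the standardized
# mean difference (SMD) of a covariate between treated and control
# units; values below ~0.1 are often read as adequate balance.
# Data are hypothetical.

def smd(treated, control):
    pooled_sd = ((stdev(treated) ** 2 + stdev(control) ** 2) / 2) ** 0.5
    return (mean(treated) - mean(control)) / pooled_sd

age_treated = [61, 64, 59, 66, 62]
age_control = [52, 55, 50, 57, 54]
print(round(smd(age_treated, age_control), 2))  # large SMD -> imbalance
```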

> Addressing Confounding, Measurement Error, and Model Selection

>> Qualitative vs Quantitative Analysis Tradeoffs in Dose-Response
  • Carefully weigh the tradeoff between the simplicity and robustness of qualitative analysis against its potential loss of efficiency compared to quantitative analysis, particularly when studying dose-response relations in case-control studies. (Zhao and Kolonel 1992)
>> Improving Estimation Techniques for Interaction Effects and Standardization
  • Consider using a marginal structural binomial regression model to estimate standardized risk or prevalence ratios and differences, as this approach addresses issues related to model convergence and allows for the evaluation of departures from additivity in the joint effects of two exposures. (David B. Richardson et al. 2015b)

  • Carefully consider the underlying assumptions and goals of your study before selecting a statistical model, particularly when using cross-sectional data for causal inference, as the choice of model can significantly impact the accuracy of estimates for causal parameters such as the Incidence Density Ratio (IDR) or the Cumulative Incidence Ratio (CIR). (Reichenheim and Coutinho 2010)

  • Be cautious when interpreting interaction effects in case-control studies, as the fundamental interaction parameter cannot be directly estimated, leading to potential biases in commonly used surrogate measures like RERI and AP. Instead, the use of the synergy index (S) is recommended, as it is less prone to variation across strata defined by covariates and can be tested for significance using a linear odds model. (Skrondal 2003)

>> Addressing Measurement Error
  • Carefully account for potential sources of residual and unmeasured confounding, especially when dealing with multiple confounders, as even small amounts of measurement error or omitted variables can significantly bias exposure effect estimates. (Fewell, Smith, and Sterne 2007)

  • Carefully consider the potential for measurement error in your studies, particularly when dealing with strong confounders, as even moderate levels of error can significantly distort the observed relationships between variables. (Marshall and Hastrup 1996)

>> Addressing Confounding via Instrumental Variables & Vibration of Effects
  • Quantify the vibration of effects (VoE) when estimating observational associations, which refers to the degree of instability in the estimated association across various model specifications. Large VoE indicates caution in making claims about observational associations, suggesting that the choice of model specification significantly impacts the results. (Patel, Burford, and Ioannidis 2015)

  • Consider using instrumental variable (IV) methods to address potential confounding in observational studies, particularly when dealing with non-compliance in randomized trials, as IV methods can provide estimates of the causal effect of treatment receipt among compliant individuals, which may differ from intention-to-treat estimates. (Sander Greenland 2000)
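The vibration-of-effects idea above — re-estimating an association under multiple specifications and reporting the spread — can be sketched as follows (hypothetical data; single-covariate adjustment done by residualizing both variables on it, the Frisch–Waugh device):

```python
# Vibration-of-effects sketch (Patel, Burford, and Ioannidis 2015):
# estimate the exposure-outcome slope under several specifications and
# report the spread.  All data are hypothetical.

def slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def residuals(x, y):
    b, mx, my = slope(x, y), sum(x) / len(x), sum(y) / len(y)
    return [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]

exposure = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
covar    = [2.0, 1.0, 4.0, 3.0, 6.0, 5.0]
outcome  = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]

estimates = {
    "unadjusted": slope(exposure, outcome),
    "adjusted for covar": slope(residuals(covar, exposure),
                                residuals(covar, outcome)),
}
vibration = max(estimates.values()) - min(estimates.values())
```

A large `vibration` relative to the estimates themselves signals that the conclusion hinges on the specification choice.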

>> Assumptions and Interpretations in Estimating Exposure Effects
  • Carefully differentiate and explicitly state the type of causal effect (i.e., total, direct, or indirect) being estimated for each variable included in a statistical model, especially when presenting effect estimates for secondary risk factors alongside the primary exposure effect estimate in a Table 2 format, as failure to do so can lead to misinterpretation and confusion. (Westreich and Greenland 2013)

  • Carefully consider the assumptions of consistency, exchangeability, positivity, and no model misspecification when using inverse probability weighting to estimate exposure effects, as failure to meet these assumptions can lead to biased estimates. (S. R. Cole and Hernan 2008)
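A minimal inverse-probability-weighting sketch, with hypothetical, pre-computed propensity scores (positivity requires every score to lie strictly between 0 and 1):

```python
# Inverse-probability-weighting sketch (S. R. Cole and Hernan 2008):
# weight treated units by 1/p and untreated units by 1/(1-p), where p
# is the propensity score.  Scores here are hypothetical and
# pre-computed; in practice they come from a fitted model.

treat   = [1, 1, 0, 0, 1, 0]
outcome = [8.0, 7.5, 5.0, 5.5, 9.0, 4.5]
pscore  = [0.8, 0.6, 0.3, 0.4, 0.7, 0.2]

w = [1 / p if t == 1 else 1 / (1 - p) for t, p in zip(treat, pscore)]

mean_treated = (sum(wi * yi for wi, yi, ti in zip(w, outcome, treat) if ti)
                / sum(wi for wi, ti in zip(w, treat) if ti))
mean_control = (sum(wi * yi for wi, yi, ti in zip(w, outcome, treat) if not ti)
                / sum(wi for wi, ti in zip(w, treat) if not ti))
ipw_effect = mean_treated - mean_control
```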

>> Propensity Score Methods: Advantages & Artifactual Effect Modification
  • Carefully consider the choice of propensity score estimation method in case-control and case-cohort studies, as certain methods (such as the subcohort, weighted case-control, and control methods) can induce artifactual effect modification of the odds ratio by propensity score, while others (such as the unweighted case-control and modeled control methods) do not exhibit this issue. (Mansson et al. 2007)

  • Consider using propensity score methods in observational studies where there are few events relative to the number of confounders, as these methods can provide more precise estimates and reduce bias compared to traditional logistic regression, particularly when the association between the exposure and outcome is strong. (Cepeda 2003)

>> Optimal Strategies for Selecting and Conditioning on Covariates
  • Carefully consider the timing of covariate measurements and aim to control for covariates in the wave prior to the primary exposure of interest, in order to minimize the risk of inadvertently controlling for mediators instead of confounders. (VanderWeele 2019)

  • Prioritize conditioning on outcome-related covariates rather than exposure-related ones, as they tend to produce lower-bias estimates, especially in the presence of unmeasured confounders. (Pearl 2011)

  • Prioritize selecting confounders based on their relationship with the exposure, rather than their direct association with the outcome, in order to improve the accuracy and precision of causal effect estimates in observational studies. (Vansteelandt, Bekaert, and Claeskens 2010)

  • Avoid using significance testing for confounder selection, as it tends to delete important confounders in small studies and fails to account for selection effects on subsequent tests and confidence intervals. Instead, researchers should consider using modern adjustment techniques, such as shrinkage estimation and exposure modeling, to control for multiple confounders simultaneously, or employ equivalence testing with strict tolerance levels to ensure that the deletion of a confounder will not introduce significant bias. (S. Greenland 2007)

>> Addressing Bias from Measurement Error & Model Selection
  • Avoid controlling for colliders in regression models, as doing so can introduce negative bias known as M bias, even when there is no direct causal relationship between the collider and the exposure or outcome. (Liu et al. 2012)

  • Prioritize minimizing unmeasured confounding when selecting variables for adjustment, even if it means potentially conditioning on instrumental variables, as the increase in error due to conditioning on IVs is usually small compared to the total estimation error. (Myers et al. 2011)

  • Carefully consider the potential for selection bias when using restricted source populations in cohort studies, particularly when the exposure and risk factor are strongly associated with selection and the unmeasured risk factor is associated with the disease hazard ratio, as this can lead to significant bias in the estimated log odds ratio for the exposure-disease association. (Pizzi et al. 2010)

  • Include variables related to the outcome, regardless of their relationship to the exposure, in propensity score models to reduce variance and improve accuracy in estimating exposure effects. (Brookhart et al. 2006)

  • Carefully account for the impact of measurement error, especially when dealing with exposure variables constrained by a lower limit, as it can introduce significant bias in estimates of exposure-disease associations and alter your interpretation. (D. B. Richardson 2003)

>> Sensitivity Analyses for Unmeasured Confounding
  • Perform sensitivity analyses to evaluate the robustness of your findings to potential unmeasured confounding variables, particularly when measured confounders have already been controlled for in the statistical analysis. (Groenwold et al. 2009)
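One widely used sensitivity-analysis quantity for unmeasured confounding is the E-value (VanderWeele and Ding's measure, shown here as an illustration rather than the cited paper's method): the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need with both exposure and outcome to fully explain away an observed risk ratio.

```python
import math

# E-value sketch: minimum confounder strength (risk-ratio scale) with
# both exposure and outcome needed to explain away an observed RR.
# Illustrative only; not the specific method of the cited paper.

def e_value(rr):
    if rr < 1:          # for protective effects, invert first
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

print(round(e_value(2.0), 2))  # 3.41: a fairly strong confounder needed
```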
>> Confounder Selection Strategies and Adjustment Trade-Offs
  • Use varying cutoff values when applying the change-in-estimate criterion for confounder selection depending on the effect size of the exposure-outcome relationship, sample size, SD of the regression error, and exposure-confounder correlation, rather than relying solely on the commonly used 10% cutoff. (P. H. Lee 2014a)

  • Carefully consider the trade-offs involved in adjusting for potential confounders, particularly when empirical and theoretical criteria yield contradictory results, as unnecessary adjustments can increase the risk of bias and reduce statistical power, while failure to adjust for true confounders can lead to biased estimates of exposure-outcome associations. (P. H. Lee 2014b)
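The change-in-estimate criterion above compares a crude exposure effect with a confounder-adjusted one. A minimal sketch, using a simple average of stratum-specific differences as the adjusted estimate (data are hypothetical):

```python
# Change-in-estimate sketch (P. H. Lee 2014a): percent change between
# crude and confounder-adjusted exposure effects; Lee argues the
# cutoff (often 10%) should depend on study characteristics.
# Data and the simple stratified adjustment are illustrative only.

def avg(v):
    return sum(v) / len(v)

# outcome by exposure within two confounder strata (unbalanced sizes)
strata = {
    "low":  {"exposed": [5.0, 6.0],
             "unexposed": [4.0, 4.5, 4.2, 4.3]},
    "high": {"exposed": [9.0, 8.5, 9.5, 9.2],
             "unexposed": [7.5, 8.0]},
}

crude = (avg([y for s in strata.values() for y in s["exposed"]])
         - avg([y for s in strata.values() for y in s["unexposed"]]))
adjusted = avg([avg(s["exposed"]) - avg(s["unexposed"])
                for s in strata.values()])
pct_change = abs(adjusted - crude) / abs(crude) * 100
```

Here the unbalanced strata make the crude contrast overstate the within-stratum differences, so the percent change is large.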

> Improving Estimation Techniques under Complex Data Structures

>> Robust Variance Estimators for Clustered Correlated Data
  • Employ the robust between-cluster variance estimator for analyzing cluster-correlated data, as it provides an unbiased estimate of the variance of a linear statistic even in cases of heteroscedasticity and complex dependence structures within clusters, provided that observations between clusters are uncorrelated. (Williams 2000)
>> Incorporating Unexposed Clusters in LSDV Analysis for SW-CRTs
  • Include unexposed clusters in your fixed effects least squares dummy variable (LSDV) analysis of stepped-wedge cluster randomized trials (SW-CRTs) because doing so improves the precision of the intervention effect estimator, even if the assumptions of constant residual variance and period effects are violated. (Hussey and Hughes 2007)
>> Fixed vs Random Effect Models in Meta Analysis
  • Carefully consider whether your meta-analysis requires a fixed-effect or random-effects model, taking into account the assumption of homogeneous versus varying true effect sizes across studies, and recognizing that these choices impact the calculation of pooled estimates, study weights, and confidence intervals. (Dettori, Norvell, and Chapman 2022)
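The fixed-effect versus random-effects choice above can be sketched with inverse-variance pooling; the DerSimonian–Laird tau² shown here is one common between-study variance estimator (an assumption of this sketch, not necessarily the cited paper's choice), and the effect sizes are hypothetical:

```python
# Fixed- vs random-effects pooling sketch (Dettori, Norvell, and
# Chapman 2022): inverse-variance weights for the fixed-effect model,
# then DerSimonian-Laird tau^2 for the random-effects weights.
# Effect sizes and within-study variances are hypothetical.

effects   = [0.60, 0.05, 0.55, 0.10]
variances = [0.02, 0.03, 0.05, 0.04]

w_fe = [1 / v for v in variances]
pooled_fe = sum(w * e for w, e in zip(w_fe, effects)) / sum(w_fe)

# DerSimonian-Laird estimate of between-study variance tau^2
q = sum(w * (e - pooled_fe) ** 2 for w, e in zip(w_fe, effects))
df = len(effects) - 1
c = sum(w_fe) - sum(w ** 2 for w in w_fe) / sum(w_fe)
tau2 = max(0.0, (q - df) / c)

w_re = [1 / (v + tau2) for v in variances]
pooled_re = sum(w * e for w, e in zip(w_re, effects)) / sum(w_re)
```

A positive tau² widens the random-effects weights toward equality, so the pooled estimate moves toward the unweighted mean and its interval widens.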
>> Variance Control for Consistent Common Mean Estimators
  • Ensure the variance of your estimates does not grow too quickly relative to the sample size, specifically that the sum of the inverse of the variances must tend towards infinity as the sample size grows, in order to guarantee the consistency of the common mean estimator used in fixed effects meta-analysis. (Taketomi and Emura 2023)

> Improving Analysis Techniques for Reliable Results

>> Utilizing Bayesian Hierarchical Models for Multi-Level Data
  • Use Bayesian Hierarchical Models (BHMs) to analyze data from complex structures, such as multi-level studies, because they provide more accurate and powerful estimates by incorporating information from all levels of the hierarchy through shrinkage estimation, while also accounting for both within- and across-group variability. (NA?)
>> Measuring Inter-Observer Agreement using Multi-Dimensional Contingency Tables
  • Employ a unified approach to evaluating observer agreement for categorical data by expressing the degree of agreement among observers as functions of observed proportions derived from underlying multi-dimensional contingency tables, which can then be used to construct test statistics for relevant hypotheses regarding inter-observer bias and agreement on individual subject classifications. (Bangdiwala 2017)
>> Limitations of Uniform Distribution Assumptions in Baseline Analyses
  • Use caution when interpreting baseline p-values derived from rounded summary statistics, as their distribution differs from the uniform distribution expected under randomization, while randomization methods, non-normality, and correlation of baseline variables do not significantly impact the distribution of baseline p-values. (Bolland et al. 2019)

  • Avoid using the uniform distribution of p-values as a check for valid randomization, especially when dealing with non-normal distributions, correlated variables, or binary data analyzed with chi-square or Fisher's exact tests. (Bland 2013)

>> Adjustment for Prognostic Covariates in Randomized Trials
  • Consider adjusting for known prognostic covariates in the analysis of randomized trials, as it can lead to significant increases in power, while the potential benefits of including a small number of possibly prognostic covariates in trials with moderate or large sample sizes outweigh the risks of decreasing power. (Egbewale, Lewis, and Sim 2014)
>> Multiple Testing & Outcome Measures Considerations
  • Carefully consider the trade-offs between Type I and Type II errors when conducting multiple outcome measure studies, and communicate these potential consequences to your readers. (Dhiman et al. 2023)

  • Carefully consider your choice of family-wise error rate (FWER) control method, such as Bonferroni correction, when conducting multiple hypothesis tests, taking into account factors like independence assumptions, test family definitions, and whether the study is confirmatory or exploratory. (Ranstam 2016)
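The Bonferroni correction mentioned above is the simplest FWER-control method; a minimal sketch with hypothetical p-values:

```python
# Bonferroni FWER-control sketch (Ranstam 2016): multiply each p-value
# by the number of tests in the family, capping at 1.  Conservative
# when tests are correlated; the p-values below are hypothetical.

def bonferroni(pvals):
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

raw = [0.001, 0.02, 0.04, 0.30]
adjusted = bonferroni(raw)
print(adjusted)  # [0.004, 0.08, 0.16, 1.0]
```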

>> Q Values for Transparent Effect Size Interpretation

Addressing Data Complexities Enhances Model Performance

> Innovations in Spatial Analysis & Non-Markov Models

>> Leveraging Spatial Correlation for Unbiased Density Estimates
  • Embrace spatial correlation in count data as informative about individual distribution and density, rather than viewing it as an inferential obstacle. (Chandler and Royle 2013)

  • Incorporate spatial information into your capture-recapture models to improve the accuracy of density estimates, as traditional methods that ignore spatial structure can lead to biased results. (Borchers and Efford 2008)

>> Non-Markov Transition, Multinomial Birth, Reverse Capture Analysis, and Bayesian Survival
  • Consider using Bayesian approaches for analyzing animal survival data, particularly for band-return and open population recapture models, as they offer a convenient framework for model-averaging and incorporating uncertainty due to model selection into the inference process. (S. P. Brooks, Catchpole, and Morgan 2000)

  • Consider analyzing capture-mark-recapture data in reverse order to investigate recruitment and population growth rate, rather than solely focusing on survival analysis. (Pradel 1996)

  • Consider models that allow for non-Markovian transitions, as they can better account for dependencies on previous states and improve the accuracy of estimates compared to assuming Markovian transitions. (Brownie et al. 1993)

  • Consider using a generalized Jolly-Seber model that represents births through a multinomial distribution from a super-population, allowing for easier numerical optimization and the ability to impose constraints on model parameters. (Coltheart et al. 1993)

>> Laplace Approximation Boosts Efficiency in Mixed Survival Models
  • Consider using Laplace approximation for Bayesian inference in mixed survival models, particularly when dealing with complex models or large datasets, as it provides a computationally efficient alternative to algebraic integration or Monte Carlo simulations while maintaining accuracy. (Ducrocq and Casella 1996)

> Addressing Overdispersion, Zero Inflation, and Imperfect Detection

>> Addressing Overdispersion in Binomial Data
  • Carefully evaluate and address overdispersion in binomial data, as failure to do so can result in biased parameter estimates and incorrect conclusions, and potential solutions include using quasi-likelihood estimation, explicit modeling of sources of extra-binomial variation, or incorporating observation-level random effects. (Harrison 2015)

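A quick overdispersion diagnostic for the binomial setting above is the ratio of the Pearson chi-square statistic to its residual degrees of freedom (counts below are hypothetical, with an intercept-only fitted probability):

```python
# Overdispersion check sketch for binomial data (Harrison 2015):
# compare the Pearson chi-square statistic to its residual degrees of
# freedom; a ratio well above 1 suggests extra-binomial variation.
# Each group has n trials with a common fitted probability p_hat
# (intercept-only model); the counts are hypothetical.

successes = [2, 9, 1, 8, 3, 7]
n = 10
p_hat = sum(successes) / (n * len(successes))

pearson = sum((y - n * p_hat) ** 2 / (n * p_hat * (1 - p_hat))
              for y in successes)
df = len(successes) - 1            # one fitted parameter
dispersion = pearson / df
print(round(dispersion, 2))        # well above 1 -> overdispersed
```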

>> Addressing Count Data Challenges
  • Consider using a hierarchical Bayesian modeling approach, specifically an N-mixture model, to estimate abundance from temporally replicated counts of organisms in closed populations, as it allows for the explicit incorporation of detection probabilities and avoids issues related to sparse data and multiple comparisons. (Camp et al. 2023)

  • Feel comfortable using fewer than five levels of a random effects term in a mixed-effects model if you are primarily interested in estimating fixed effects parameters, as long as you are mindful of potential issues related to singular fits and reduced precision. (Gomes 2022)

  • Carefully consider the presence of excess zeros in count data, as these can lead to biased parameter estimates if ignored or treated as simple overdispersion, and that zero-inflated GLMs provide a useful framework for addressing this issue. (M. E. Brooks et al. 2017)

  • Carefully consider the impact of imperfect detection and zero inflation on your count data, and choose analytical methods accordingly, such as distance sampling or hierarchical (N-mixture) models, to ensure accurate estimation of population size. (Dénes, Silveira, and Beissinger 2015)

  • Consider using the proposed Poisson-link model instead of the traditional delta-model for analyzing biomass sampling data with many zeros, as it addresses three significant issues with the latter: difficulties in interpreting covariates, the assumption of independence between model components, and biologically implausible forms when removing covariates. (NA?)
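A quick diagnostic for the excess-zeros problem discussed above: compare the observed fraction of zeros with the fraction a Poisson model of the same mean would predict, exp(−λ). Counts below are hypothetical:

```python
import math

# Zero-inflation check sketch (M. E. Brooks et al. 2017): compare the
# observed zero fraction with the Poisson prediction exp(-lambda) at
# the same mean.  A large gap suggests a zero-inflated model.
# The counts are hypothetical.

counts = [0, 0, 0, 0, 0, 0, 1, 2, 3, 5]
lam = sum(counts) / len(counts)

observed_zero_frac = counts.count(0) / len(counts)
poisson_zero_frac = math.exp(-lam)
print(round(observed_zero_frac, 2), round(poisson_zero_frac, 2))
```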

>> Multiplicative Error Terms Boost Ecological Realism in Mixing Models
  • Consider using a multiplicative error term (Model 4) in your mixing models, as it allows for more flexibility in fitting narrow consumer data and provides an estimate of consumption rate, making it more ecologically realistic than assuming all variation in consumer tracer values is due to unexplained deviations from the mean (Model 2) or that consumers perfectly integrate or specialize in their feeding behavior (Models 1 and 3). (Stock and Semmens 2016)

> Transformations & Models Optimize Analysis of Biological Distributions

>> Optimal Data Transformation Techniques for Specific Analyses
  • Avoid using the arcsine transformation for analyzing binomial or non-binomial proportions in favor of logistic regression for binomial data and the logit transformation for non-binomial data, as these approaches offer improved interpretability, accuracy, and power. (Warton and Hui 2011)

  • Consider adding a constant of 0.5 to your data points before applying a logarithmic transformation to address heteroscedasticity in ANOVA tests of population abundance, as this approach better approximates a continuous distribution and leads to improved statistical power compared to traditional methods. (Yamamura 1999)
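
A quick simulation of the idea, assuming overdispersed (negative binomial) counts whose variance grows with the mean, as is typical for abundance data; all parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two overdispersed count samples with the same shape but different means.
low = rng.negative_binomial(2, 2 / 7, 5000)    # mean ~5
high = rng.negative_binomial(2, 2 / 52, 5000)  # mean ~50

raw_ratio = high.var() / low.var()             # strongly heteroscedastic
t_low = np.log(low + 0.5)                      # +0.5 keeps zero counts finite
t_high = np.log(high + 0.5)
trans_ratio = t_high.var() / t_low.var()       # variances far more comparable
```

The +0.5 constant both handles zeros, which a bare logarithm cannot, and brings the group variances onto a comparable scale before ANOVA.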

>> Rank-Abundance Plots Preserve Original Species Abundance Data
  • Avoid using logarithmic transformations when studying species abundance distributions, as they can introduce artificial internal modes; instead, use rank-abundance plots, which preserve the original data and provide a clearer representation of the distribution. (Nekola et al. 2008)
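
Rank-abundance (Whittaker plot) coordinates are just the sorted raw abundances paired with their ranks; the ten-species community below is made up for illustration.

```python
import numpy as np

# Rank-abundance coordinates from raw abundances: rank 1 = most abundant.
abundances = np.array([120, 65, 30, 30, 12, 7, 3, 2, 1, 1])
sorted_abund = np.sort(abundances)[::-1]          # descending abundance
ranks = np.arange(1, abundances.size + 1)
rank_abundance = list(zip(ranks, sorted_abund))
# Plot ranks against sorted_abund (log y-axis) without binning or
# transforming the abundances themselves.
```

No binning or log-transforming of the data occurs, so no artificial modes can be introduced; only the display axis is logarithmic.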
>> Error distributions impact power-law analyses
  • Carefully consider the error distribution when choosing between linear regression on log-transformed data (LR) and nonlinear regression (NLR) for analyzing biological power-laws, as the choice of method affects the accuracy of parameter estimates and confidence intervals. (Packard, Birchard, and Boardman 2010)
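
The two fitting strategies can be sketched side by side for y = a·x^b. LR on log-log data assumes multiplicative (lognormal) error; NLR on the arithmetic scale assumes additive normal error. The true parameters and noise level below are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Simulate a power law with multiplicative (lognormal) noise.
a_true, b_true = 2.0, 0.75
x = np.linspace(1, 100, 200)
y = a_true * x**b_true * rng.lognormal(0, 0.1, x.size)

# LR: ordinary least squares on the log-transformed values
b_lr, log_a_lr = np.polyfit(np.log(x), np.log(y), 1)
a_lr = np.exp(log_a_lr)

# NLR: nonlinear least squares on the original arithmetic scale
(a_nlr, b_nlr), _ = curve_fit(lambda x, a, b: a * x**b, x, y, p0=[1.0, 1.0])
```

Here the data were generated with multiplicative error, so LR matches the error structure; had the noise been additive, NLR would be the better-specified choice. That is the paper's point: match the method to the error distribution.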

  • Consider a wider range of statistical models beyond the traditional allometric method and standard nonlinear regression, and validate your chosen model through graphical analysis on the original arithmetic scale. (NA?)

> Optimizing Analysis Techniques for Robust Inference

>> Simplifying Analyses & Planning Comparisons Boost Research Quality
  • Prioritize planned comparisons over unplanned ones, and choose the appropriate multiple comparisons test based on the specific characteristics of your data and research questions, such as sample size, number of groups, and whether the data is parametric or non-parametric. (Midway et al. 2020)
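
A sketch of the planned-comparison logic: two pre-specified contrasts against a control, with the Bonferroni adjustment sized to the two planned tests rather than all possible pairs. Group means, spreads, and sample sizes are simulated for illustration.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)

# Simulated groups: a control plus two treatments.
control = rng.normal(10.0, 2.0, 30)
treat_a = rng.normal(13.0, 2.0, 30)   # clear effect
treat_b = rng.normal(10.1, 2.0, 30)   # negligible effect

# Only the two PLANNED contrasts are tested, so alpha is split two ways.
planned = {"A vs control": ttest_ind(treat_a, control).pvalue,
           "B vs control": ttest_ind(treat_b, control).pvalue}
alpha_adj = 0.05 / len(planned)                  # Bonferroni: 0.05 / 2
significant = {name: p < alpha_adj for name, p in planned.items()}
```

Had all three pairwise tests been run unplanned, the adjusted threshold would shrink to 0.05/3, costing power for no gain in the questions actually asked.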

  • Avoid unnecessary complexity in your data analysis by focusing on the key experimental or observational units in a study and using a simple, specialized framework instead of a very general one, as this leads to clearer explanations, fewer computational mistakes, and greater consistency across different analysts. (Qian and Shen 2007)

>> Improving Wildlife Count Accuracy via Robust Statistics
  • Carefully consider and address potential sources of error in your wildlife counts, such as availability bias, detection bias, and miscounting, by employing robust statistical methods and validated field sampling techniques. (Elphick 2008)
>> Simulation-Based Approaches for Non-Nested Models Comparison
  • Consider using the likelihood ratio test (LRT) for comparing both nested and non-nested statistical models, as modern computational power allows for simulation-based approaches to overcome previous difficulties in obtaining the distribution of the LRT statistic under the null hypothesis for non-nested models. (Lewis, Butler, and Gilbert 2010)
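
The simulation-based approach can be sketched with a classic non-nested pair, gamma (null) versus lognormal (alternative); the data, sample size, and replicate count below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Parametric bootstrap of the LRT statistic for a non-nested comparison.
def lrt_stat(y):
    """2 * (lognormal loglik - gamma loglik), each at its own MLE."""
    a, _, scale_g = stats.gamma.fit(y, floc=0)
    s, _, scale_l = stats.lognorm.fit(y, floc=0)
    ll_gamma = stats.gamma.logpdf(y, a, 0, scale_g).sum()
    ll_lnorm = stats.lognorm.logpdf(y, s, 0, scale_l).sum()
    return 2 * (ll_lnorm - ll_gamma)

y_obs = rng.gamma(2.0, 1.5, 150)       # data generated under the null
t_obs = lrt_stat(y_obs)

# Build the null distribution of the statistic by refitting to data drawn
# from the fitted null (gamma) model, then compare t_obs against it.
a0, _, scale0 = stats.gamma.fit(y_obs, floc=0)
t_null = np.array([lrt_stat(rng.gamma(a0, scale0, y_obs.size))
                   for _ in range(100)])
p_boot = np.mean(t_null >= t_obs)      # bootstrap p-value
```

Because the models are not nested, no chi-squared reference distribution applies; the bootstrap replaces it, which is exactly the computational workaround the cited paper advocates.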
>> Balancing Null Hypothesis Testing with Alternative Approaches
  • Abandon the use of p-values and null hypothesis significance testing in favor of information-theoretic approaches that enable the computation of post-data quantities such as model likelihoods and evidence ratios, allowing for formal inferences to be made based on all the models in an a priori set while avoiding conditioning on the null hypothesis. (Burnham and Anderson 2014)

  • Avoid dogmatically choosing either P values, confidence intervals, or information-theoretic criteria as your primary statistical tool, and instead select the most appropriate metric based on the specific details of each individual application. (Murtaugh 2014)

  • Carefully choose null models and corresponding metrics to ensure they accurately capture the desired properties of the null hypothesis, while balancing the need for sufficient constraints to maintain statistical power and minimize Type II errors. (Gotelli and Ulrich 2011)
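
The information-theoretic quantities mentioned in the first bullet above, Akaike weights and evidence ratios, reduce to a few lines given an a priori model set; the AIC values below are illustrative.

```python
import numpy as np

# Akaike weights and an evidence ratio for an a-priori model set.
aic = np.array([102.3, 104.1, 110.8])            # models M1, M2, M3
delta = aic - aic.min()                          # AIC differences
rel_lik = np.exp(-0.5 * delta)                   # relative likelihoods
weights = rel_lik / rel_lik.sum()                # Akaike model weights
evidence_ratio_12 = weights[0] / weights[1]      # support for M1 over M2
```

Nothing here conditions on a null hypothesis: every model in the set is weighted by its evidence, and the ratio states how many times better supported one model is than another.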

> Integrating Multiple Datasets for Demographic Analysis

>> Integrated Population Models Estimate Life History Parameters
  • Consider employing Integrated Population Models (IPMs) to estimate life history demographic rates and population abundance using multiple data sets, resolving discrepancies among individual analyses and providing insights into the contributions of life stages or environmental factors to population trends. (Zipkin, Inouye, and Beissinger 2019)

> Statistical Techniques Tailored for Ecological and Spatial Studies

>> Binomial Analysis & Autocorrelation Models for Skewed Data
  • Consider using a symmetric power link function when analyzing binomial data, as it offers greater flexibility in handling skewness compared to traditional link functions such as logit, probit, and cloglog. (Jiang et al. 2013)
>> Statistical Models Adapted for Ecological Research
  • Consider using hierarchical Bayesian models for analyzing multivariate abundance data in ecology because they allow for the integration of multiple ecological processes, provide a clear data-generating process and likelihood function, enable straightforward detection of assumptions made in the analysis, and offer more accurate predictions and comparisons of models. (Hui 2016)

  • Consider using generalized linear models (GLMs) and generalized additive models (GAMs) in ecological studies, as these models offer greater flexibility in handling non-normal error structures and non-constant variance compared to traditional linear models, allowing for more accurate representation of ecological relationships. (Guisan, Edwards, and Hastie 2002)
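
A minimal sketch of why the link function matters: a Poisson GLM with a log link fit by direct maximum likelihood on simulated counts. In practice one would use a packaged GLM routine; the data and coefficients here are invented.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)

# Simulated count response whose mean depends log-linearly on a covariate --
# the non-normal error and non-constant variance a linear model cannot handle.
x = rng.uniform(0, 2, 500)
y = rng.poisson(np.exp(0.5 + 1.2 * x))           # true beta = (0.5, 1.2)

def negloglik(beta):
    eta = beta[0] + beta[1] * x                  # linear predictor
    return -(y * eta - np.exp(eta)).sum()        # Poisson loglik, up to const

fit = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
beta0_hat, beta1_hat = fit.x
```

The log link keeps fitted means positive and lets the variance grow with the mean, both of which ordinary least squares on raw counts would violate.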

>> Bayesian Data Augmentation Overcomes Computational Limitations in Biogeography
  • Consider using a Bayesian data-augmentation approach to overcome computational limitations in analyzing large numbers of geographic areas in historical biogeography studies. (Landis et al. 2013)
>> Addressing Unbalanced Data & Extreme Events in Regression Models
  • Leverage a combination of marked point processes and extreme-value theory to accurately model the distribution of large wildfires, while borrowing strength from the estimation of nonextreme wildfires to improve the prediction of larger fires and account for changes in extreme fire activity. (Koh et al. 2023)

  • Carefully consider the impact of unbalanced data on the statistical properties of logistic regression models, particularly in terms of bias and variance, as well as on the prediction capabilities of the model, and take appropriate measures to address any potential issues. (Salas-Eljatib et al. 2018)

> Species Distribution Models: Presence-Only vs Presence-Absence Data

>> Species Distribution Models: Presence-Only Data Challenges & Solutions
  • Avoid using Maxent for species distribution modeling due to its reliance on poorly defined indices, and instead utilize formal model-based inference methods that allow for direct estimation of occurrence probabilities from presence-only data under the assumptions of random sampling and constant probability of species detection. (Royle et al. 2012)

  • Carefully select and validate default settings for species distribution models like Maxent, particularly when dealing with presence-only data, to ensure optimal predictive accuracy without requiring extensive parameter tuning for each species or dataset. (Phillips and Dudík 2008)

> Enhancing Estimation and Comparison Techniques Across Disciplines

>> Two-Parameter Models Simplify Tree Height-Diameter Relationship
  • Consider using two-parameter models in a limited form, specifically Näslund's equation, for estimating the relationship between tree height and diameter due to its simplicity, statistical significance, and superior performance compared to more complex models. (Dubenok et al. 2023)
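
Näslund's equation, h = 1.3 + d²/(a + b·d)², can be fit with ordinary nonlinear least squares; the tree data and parameter values below are simulated for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

# Naslund's two-parameter height-diameter equation.
def naslund(d, a, b):
    return 1.3 + d**2 / (a + b * d)**2       # 1.3 m = breast height

d = rng.uniform(5, 50, 300)                              # diameters, cm
h = naslund(d, 1.8, 0.17) + rng.normal(0, 0.5, d.size)   # heights, m

(a_hat, b_hat), _ = curve_fit(naslund, d, h, p0=[1.0, 0.2])
asymptote = 1.3 + 1 / b_hat**2    # height approached at very large diameters
```

Both parameters have direct interpretations: b controls the asymptotic height, a the steepness of the rise, which is part of the appeal over more complex forms.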

> Considerations for Appropriate Analysis Techniques

>> Species Distribution Models: Selecting Suitable Pseudo-Absences
  • Carefully consider the choice of pseudo-absence points in species distribution models, adhering to guidelines such as limiting the spatial extent to conditions within the species' ecological tolerance, not excluding pseudo-absence points from known occurrence areas, and ensuring that the training area reflects the space accessible to the species. (NA?)
>> Bayesian Inference & Scale Mismatch in Multisource Data
  • Carefully consider the potential impact of scale mismatch and spatiotemporal variability when analyzing data from multiple sources, as these factors can lead to inconsistencies in inferences about population trends. (Saunders et al. 2019)

  • Consider employing Bayesian statistical methods in conjunction with the BACIPS (Before-After Control-Impact Paired Series) design to improve the interpretability and accuracy of your findings, especially when communicating results to non-technical stakeholders. (NA?)

References

Alexander, Monica, and Leontine Alkema. 2018. “Global Estimation of Neonatal Mortality Using a Bayesian Hierarchical Splines Regression Model.” Demographic Research 38 (January). https://doi.org/10.4054/demres.2018.38.15.
Alkema, Leontine, and Jin Rou New. 2014. “Global Estimation of Child Mortality Using a Bayesian b-Spline Bias-Reduction Model.” The Annals of Applied Statistics 8 (December). https://doi.org/10.1214/14-aoas768.
Allison, Paul D., and Nicholas A. Christakis. 2006. “Fixed-Effects Methods for the Analysis of Nonrepeated Events.” Sociological Methodology 36 (August). https://doi.org/10.1111/j.1467-9531.2006.00177.x.
Austin, Peter C. 2006. “The Performance of Different Propensity Score Methods for Estimating Marginal Odds Ratios.” Statistics in Medicine 26 (December). https://doi.org/10.1002/sim.2781.
Bagiella, Emilia, Tara Karamlou, Helena Chang, and John Spivack. 2015. “Instrumental Variable Methods in Clinical Research.” The Journal of Thoracic and Cardiovascular Surgery 150 (October). https://doi.org/10.1016/j.jtcvs.2015.07.056.
Bangdiwala, Shrikant I. 2017. “Graphical Aids for Visualizing and Interpreting Patterns in Departures from Agreement in Ordinal Categorical Observer Agreement Data.” Journal of Biopharmaceutical Statistics 27 (February). https://doi.org/10.1080/10543406.2016.1273941.
Bland, Martin. 2013. “Do Baseline p-Values Follow a Uniform Distribution in Randomised Trials?” PLoS ONE 8 (October). https://doi.org/10.1371/journal.pone.0076010.
Bolland, Mark J., Greg D. Gamble, Alison Avenell, and Andrew Grey. 2019. “Rounding, but Not Randomization Method, Non-Normality, or Correlation, Affected Baseline p-Value Distributions in Randomized Trials.” Journal of Clinical Epidemiology 110 (June). https://doi.org/10.1016/j.jclinepi.2019.03.001.
Borchers, D. L., and M. G. Efford. 2008. “Spatially Explicit Maximum Likelihood Methods for Capture–Recapture Studies.” Biometrics 64 (June). https://doi.org/10.1111/j.1541-0420.2007.00927.x.
Brookhart, M. Alan, and Sebastian Schneeweiss. 2007. “Preference-Based Instrumental Variable Methods for the Estimation of Treatment Effects: Assessing Validity and Interpreting Results.” The International Journal of Biostatistics 3 (January). https://doi.org/10.2202/1557-4679.1072.
Brookhart, M. Alan, Sebastian Schneeweiss, Kenneth J. Rothman, Robert J. Glynn, Jerry Avorn, and Til Stürmer. 2006. “Variable Selection for Propensity Score Models.” American Journal of Epidemiology 163 (April). https://doi.org/10.1093/aje/kwj149.
Brooks, Mollie E., Kasper Kristensen, Koen J. van Benthem, Arni Magnusson, Casper W. Berg, Anders Nielsen, Hans J. Skaug, Martin Mächler, and Benjamin M. Bolker. 2017. “Modeling Zero-Inflated Count Data with glmmTMB,” May. https://doi.org/10.1101/132753.
Brooks, S. P., E. A. Catchpole, and B. J. T. Morgan. 2000. “Bayesian Animal Survival Estimation.” Statistical Science 15 (November). https://doi.org/10.1214/ss/1009213003.
Brownie, C., J. E. Hines, J. D. Nichols, K. H. Pollock, and J. B. Hestbeck. 1993. “Capture-Recapture Studies for Multiple Strata Including Non-Markovian Transitions.” Biometrics 49 (December). https://doi.org/10.2307/2532259.
Burnham, K. P., and D. R. Anderson. 2014. “p Values Are Only an Index to Evidence: 20th- vs. 21st-Century Statistical Science.” Ecology 95 (March). https://doi.org/10.1890/13-1066.1.
Cai, Bing, Dylan S. Small, and Thomas R. Ten Have. 2011. “Two‐stage Instrumental Variable Methods for Estimating the Causal Odds Ratio: Analysis of Bias.” Statistics in Medicine 30 (April). https://doi.org/10.1002/sim.4241.
Camp, Richard J., Chauncey K. Asing, Paul C. Banko, Lainie Berry, Kevin W. Brinck, Chris Farmer, and Ayesha S. Genz. 2023. “Evaluation of Replicate Sampling Using Hierarchical Spatial Modeling of Population Surveys Accounting for Imperfect Detectability.” Wildlife Society Bulletin 47 (July). https://doi.org/10.1002/wsb.1471.
Carter, David B., and Curtis S. Signorino. 2010. “Back to the Future: Modeling Time Dependence in Binary Data.” Political Analysis 18. https://doi.org/10.1093/pan/mpq013.
Cepeda, M. S. 2003. “Comparison of Logistic Regression Versus Propensity Score When the Number of Events Is Low and There Are Multiple Confounders.” American Journal of Epidemiology 158 (August). https://doi.org/10.1093/aje/kwg115.
Chandler, Richard B., and J. Andrew Royle. 2013. “Spatially Explicit Models for Inference about Density in Unmarked or Partially Marked Populations.” The Annals of Applied Statistics 7 (June). https://doi.org/10.1214/12-aoas610.
Cole, S. R., and M. A. Hernan. 2008. “Constructing Inverse Probability Weights for Marginal Structural Models.” American Journal of Epidemiology 168 (July). https://doi.org/10.1093/aje/kwn164.
Cole, Stephen R, and Miguel A Hernán. 2002. “Fallibility in Estimating Direct Effects.” International Journal of Epidemiology 31 (February). https://doi.org/10.1093/ije/31.1.163.
Coltheart, Max, Brent Curtis, Paul Atkins, and Micheal Haller. 1993. “Models of Reading Aloud: Dual-Route and Parallel-Distributed-Processing Approaches.” Psychological Review 100 (October). https://doi.org/10.1037/0033-295x.100.4.589.
Dawid, A. Philip, Monica Musio, and Stephen E. Fienberg. 2016. “From Statistical Evidence to Evidence of Causality.” Bayesian Analysis 11 (September). https://doi.org/10.1214/15-ba968.
Dénes, Francisco V., Luís Fábio Silveira, and Steven R. Beissinger. 2015. “Estimating Abundance of Unmarked Animal Populations: Accounting for Imperfect Detection and Other Sources of Zero Inflation.” Methods in Ecology and Evolution 6 (January). https://doi.org/10.1111/2041-210x.12333.
Dettori, Joseph R., Daniel C. Norvell, and Jens R. Chapman. 2022. “Fixed-Effect Vs Random-Effects Models for Meta-Analysis: 3 Points to Consider.” Global Spine Journal 12 (June). https://doi.org/10.1177/21925682221110527.
Dhiman, Paula, Jie Ma, Cathy Qi, Garrett Bullock, Jamie C Sergeant, Richard D Riley, and Gary S Collins. 2023. “Sample Size Requirements Are Not Being Considered in Studies Developing Prediction Models for Binary Outcomes: A Systematic Review.” BMC Medical Research Methodology 23 (August). https://doi.org/10.1186/s12874-023-02008-1.
Dubenok, N N, A V Lebedev, V V Gostev, A V Gemonov, and V M Gradusov. 2023. “Height-Diameter Fixed Effects Models for the Pine in European Russia.” IOP Conference Series: Earth and Environmental Science 1154 (March). https://doi.org/10.1088/1755-1315/1154/1/012025.
Ducrocq, V, and G Casella. 1996. “A Bayesian Analysis of Mixed Survival Models.” Genetics Selection Evolution 28 (December). https://doi.org/10.1186/1297-9686-28-6-505.
Egbewale, Bolaji E, Martyn Lewis, and Julius Sim. 2014. “Bias, Precision and Statistical Power of Analysis of Covariance in the Analysis of Randomized Trials with Baseline Imbalance: A Simulation Study.” BMC Medical Research Methodology 14 (April). https://doi.org/10.1186/1471-2288-14-49.
Elphick, Chris S. 2008. “How You Count Counts: The Importance of Methods Research in Applied Ecology.” Journal of Applied Ecology 45 (August). https://doi.org/10.1111/j.1365-2664.2008.01545.x.
Fewell, Z., G. Davey Smith, and J. A. C. Sterne. 2007. “The Impact of Residual and Unmeasured Confounding in Epidemiologic Studies: A Simulation Study.” American Journal of Epidemiology 166 (June). https://doi.org/10.1093/aje/kwm165.
Garabedian, Laura Faden, Paula Chu, Sengwee Toh, Alan M. Zaslavsky, and Stephen B. Soumerai. 2014. “Potential Bias of Instrumental Variable Analyses for Observational Comparative Effectiveness Research.” Annals of Internal Medicine 161 (July). https://doi.org/10.7326/m13-1887.
Gluud, Lise Lotte. 2006. “Bias in Clinical Intervention Research.” American Journal of Epidemiology 163 (January). https://doi.org/10.1093/aje/kwj069.
Gomes, Dylan G. E. 2022. “Should I Use Fixed Effects or Random Effects When I Have Fewer Than Five Levels of a Grouping Factor in a Mixed-Effects Model?” PeerJ 10 (January). https://doi.org/10.7717/peerj.12794.
Gotelli, Nicholas J., and Werner Ulrich. 2011. “Statistical Challenges in Null Model Analysis.” Oikos 121 (November). https://doi.org/10.1111/j.1600-0706.2011.20301.x.
Grabich, Shannon C., Whitney R. Robinson, Stephanie M. Engel, Charles E. Konrad, David B. Richardson, and Jennifer A. Horney. 2015. “County-Level Hurricane Exposure and Birth Rates: Application of Difference-in-Differences Analysis for Confounding Control.” Emerging Themes in Epidemiology 12 (December). https://doi.org/10.1186/s12982-015-0042-7.
Greenland, S. 2007. “Invited Commentary: Variable Selection Versus Shrinkage in the Control of Multiple Confounders.” American Journal of Epidemiology 167 (December). https://doi.org/10.1093/aje/kwm355.
Greenland, Sander. 1990. “Randomization, Statistics, and Causal Inference.” Epidemiology 1 (November). https://doi.org/10.1097/00001648-199011000-00003.
———. 2000. “An Introduction to Instrumental Variables for Epidemiologists.” International Journal of Epidemiology 29 (August). https://doi.org/10.1093/ije/29.4.722.
Groenwold, Rolf H H, David B Nelson, Kristin L Nichol, Arno W Hoes, and Eelko Hak. 2009. “Sensitivity Analyses to Estimate the Potential Impact of Unmeasured Confounding in Causal Research.” International Journal of Epidemiology 39 (November). https://doi.org/10.1093/ije/dyp332.
Guisan, Antoine, Thomas C Edwards, and Trevor Hastie. 2002. “Generalized Linear and Generalized Additive Models in Studies of Species Distributions: Setting the Scene.” Ecological Modelling 157 (November). https://doi.org/10.1016/s0304-3800(02)00204-1.
Harrison, Xavier A. 2015. “A Comparison of Observation-Level Random Effect and Beta-Binomial Models for Modelling Overdispersion in Binomial Data in Ecology & Evolution.” PeerJ 3 (July). https://doi.org/10.7717/peerj.1114.
Hausman, Jerry. 2001. “Mismeasured Variables in Econometric Analysis: Problems from the Right and Problems from the Left.” Journal of Economic Perspectives 15 (November). https://doi.org/10.1257/jep.15.4.57.
Howards, P. P., E. F. Schisterman, C. Poole, J. S. Kaufman, and C. R. Weinberg. 2012. “"Toward a Clearer Definition of Confounding" Revisited with Directed Acyclic Graphs.” American Journal of Epidemiology 176 (August). https://doi.org/10.1093/aje/kws127.
Hui, Francis K. C. 2016. “boral – Bayesian Ordination and Regression Analysis of Multivariate Abundance Data in R.” Methods in Ecology and Evolution 7 (January). https://doi.org/10.1111/2041-210x.12514.
Hussey, Michael A., and James P. Hughes. 2007. “Design and Analysis of Stepped Wedge Cluster Randomized Trials.” Contemporary Clinical Trials 28 (February). https://doi.org/10.1016/j.cct.2006.05.007.
Iwashyna, Theodore J., and Edward H. Kennedy. 2013. “Instrumental Variable Analyses. Exploiting Natural Randomness to Understand Causal Mechanisms.” Annals of the American Thoracic Society 10 (June). https://doi.org/10.1513/annalsats.201303-054fr.
Jiang, Xun, Dipak K. Dey, Rachel Prunier, Adam M. Wilson, and Kent E. Holsinger. 2013. “A New Class of Flexible Link Functions with Application to Species Co-Occurrence in Cape Floristic Region.” The Annals of Applied Statistics 7 (December). https://doi.org/10.1214/13-aoas663.
Koh, Jonathan, François Pimont, Jean-Luc Dupuy, and Thomas Opitz. 2023. “Spatiotemporal Wildfire Modeling Through Point Processes with Moderate and Extreme Marks.” The Annals of Applied Statistics 17 (March). https://doi.org/10.1214/22-aoas1642.
Landis, Michael J., Nicholas J. Matzke, Brian R. Moore, and John P. Huelsenbeck. 2013. “Bayesian Analysis of Biogeography When the Number of Areas Is Large.” Systematic Biology 62 (July). https://doi.org/10.1093/sysbio/syt040.
Lee, Paul H. 2014a. “Is a Cutoff of 10% Appropriate for the Change-in-Estimate Criterion of Confounder Identification?” Journal of Epidemiology 24. https://doi.org/10.2188/jea.je20130062.
———. 2014b. “Should We Adjust for a Confounder If Empirical and Theoretical Criteria Yield Contradictory Results? A Simulation Study.” Scientific Reports 4 (August). https://doi.org/10.1038/srep06085.
Lee, Ronald D., and Lawrence R. Carter. 1992. “Modeling and Forecasting U.S. Mortality.” Journal of the American Statistical Association 87 (September). https://doi.org/10.1080/01621459.1992.10475265.
Lewis, Fraser, Adam Butler, and Lucy Gilbert. 2010. “A Unified Approach to Model Selection Using the Likelihood Ratio Test.” Methods in Ecology and Evolution 2 (August). https://doi.org/10.1111/j.2041-210x.2010.00063.x.
Liu, Wei, M. Alan Brookhart, Sebastian Schneeweiss, Xiaojuan Mi, and Soko Setoguchi. 2012. “Implications of m Bias in Epidemiologic Studies: A Simulation Study.” American Journal of Epidemiology 176 (October). https://doi.org/10.1093/aje/kws165.
Lousdal, Mette Lise. 2018. “An Introduction to Instrumental Variable Assumptions, Validation and Estimation.” Emerging Themes in Epidemiology 15 (January). https://doi.org/10.1186/s12982-018-0069-7.
Lunt, M., D. Solomon, K. Rothman, R. Glynn, K. Hyrich, D. P. M. Symmons, and T. Sturmer. 2009. “Different Methods of Balancing Covariates Leading to Different Effect Estimates in the Presence of Effect Modification.” American Journal of Epidemiology 169 (January). https://doi.org/10.1093/aje/kwn391.
Mansson, R., M. M. Joffe, W. Sun, and S. Hennessy. 2007. “On the Estimation and Use of Propensity Scores in Case-Control and Case-Cohort Studies.” American Journal of Epidemiology 166 (May). https://doi.org/10.1093/aje/kwm069.
Marshall, J. R., and J. L. Hastrup. 1996. “Mismeasurement and the Resonance of Strong Confounders: Uncorrelated Errors.” American Journal of Epidemiology 143 (May). https://doi.org/10.1093/oxfordjournals.aje.a008671.
McGrath, Liam F. 2015. “Estimating Onsets of Binary Events in Panel Data.” Political Analysis 23. https://doi.org/10.1093/pan/mpv019.
Midway, Stephen, Matthew Robertson, Shane Flinn, and Michael Kaller. 2020. “Comparing Multiple Comparisons: Practical Guidance for Choosing the Best Multiple Comparisons Test.” PeerJ 8 (December). https://doi.org/10.7717/peerj.10387.
Murtaugh, Paul A. 2014. “In Defense of p Values.” Ecology 95 (March). https://doi.org/10.1890/13-0590.1.
Myers, Jessica A., Jeremy A. Rassen, Joshua J. Gagne, Krista F. Huybrechts, Sebastian Schneeweiss, Kenneth J. Rothman, Marshall M. Joffe, and Robert J. Glynn. 2011. “Effects of Adjusting for Instrumental Variables on Bias and Precision of Effect Estimates.” American Journal of Epidemiology 174 (October). https://doi.org/10.1093/aje/kwr364.
Nekola, Jeffrey C., Arnošt L. Šizling, Alison G. Boyer, and David Storch. 2008. “Artifactions in the Log-Transformation of Species Abundance Distributions.” Folia Geobotanica 43 (September). https://doi.org/10.1007/s12224-008-9020-y.
Oliveira, Guilherme Lopes de, Raffaele Argiento, Rosangela Helena Loschi, Renato Martins Assunção, Fabrizio Ruggeri, and Márcia D’Elia Branco. 2022. “Bias Correction in Clustered Underreported Data.” Bayesian Analysis 17 (March). https://doi.org/10.1214/20-ba1244.
Packard, Gary C., Geoffrey F. Birchard, and Thomas J. Boardman. 2010. “Fitting Statistical Models in Bivariate Allometry.” Biological Reviews 86 (October). https://doi.org/10.1111/j.1469-185x.2010.00160.x.
Patel, Chirag J., Belinda Burford, and John P. A. Ioannidis. 2015. “Assessment of Vibration of Effects Due to Model Specification Can Demonstrate the Instability of Observational Associations.” Journal of Clinical Epidemiology 68 (September). https://doi.org/10.1016/j.jclinepi.2015.05.029.
Pearl, J. 2011. “Invited Commentary: Understanding Bias Amplification.” American Journal of Epidemiology 174 (October). https://doi.org/10.1093/aje/kwr352.
Phillips, Steven J., and Miroslav Dudík. 2008. “Modeling of Species Distributions with Maxent: New Extensions and a Comprehensive Evaluation.” Ecography 31 (March). https://doi.org/10.1111/j.0906-7590.2008.5203.x.
Pizzi, C., B. De Stavola, F. Merletti, R. Bellocco, I. dos Santos Silva, N. Pearce, and L. Richiardi. 2010. “Sample Selection and Validity of Exposure-Disease Association Estimates in Cohort Studies.” Journal of Epidemiology & Community Health 65 (September). https://doi.org/10.1136/jech.2009.107185.
Pradel, R. 1996. “Utilization of Capture-Mark-Recapture for the Study of Recruitment and Population Growth Rate.” Biometrics 52 (June). https://doi.org/10.2307/2532908.
Preston, Samuel H., and Haidong Wang. 2006. “Sex Mortality Differences in the United States: The Role of Cohort Smoking Patterns.” Demography 43 (November). https://doi.org/10.1353/dem.2006.0037.
“Proceedings of Third International Conference on Sustainable Expert Systems.” 2023. Lecture Notes in Networks and Systems. https://doi.org/10.1007/978-981-19-7874-6.
Qian, Song S., and Zehao Shen. 2007. “Ecological Applications of Multilevel Analysis of Variance.” Ecology 88 (October). https://doi.org/10.1890/06-2041.1.
Ranstam, J. 2016. “Multiple p -Values and Bonferroni Correction.” Osteoarthritis and Cartilage 24 (May). https://doi.org/10.1016/j.joca.2016.01.008.
Rassen, J. A., S. Schneeweiss, R. J. Glynn, M. A. Mittleman, and M. A. Brookhart. 2008. “Instrumental Variable Analysis for Estimation of Treatment Effects with Dichotomous Outcomes.” American Journal of Epidemiology 169 (November). https://doi.org/10.1093/aje/kwn299.
Reichenheim, Michael E, and Evandro SF Coutinho. 2010. “Measures and Models for Causal Inference in Cross-Sectional Studies: Arguments for the Appropriateness of the Prevalence Odds Ratio and Related Logistic Regression.” BMC Medical Research Methodology 10 (July). https://doi.org/10.1186/1471-2288-10-66.
Richardson, D. B. 2003. “Effects of Exposure Measurement Error When an Exposure Variable Is Constrained by a Lower Limit.” American Journal of Epidemiology 157 (February). https://doi.org/10.1093/aje/kwf217.
Richardson, David B., Ghassan B. Hamra, Richard F. MacLehose, Stephen R. Cole, and Haitao Chu. 2015a. “Hierarchical Regression for Analyses of Multiple Outcomes.” American Journal of Epidemiology 182 (July). https://doi.org/10.1093/aje/kwv047.
Richardson, David B, Alan C Kinlaw, Richard F MacLehose, and Stephen R Cole. 2015b. “Standardized Binomial Models for Risk or Prevalence Ratios and Differences.” International Journal of Epidemiology 44 (July). https://doi.org/10.1093/ije/dyv137.
Riebler, Andrea, Leonhard Held, and Håvard Rue. 2012. “Estimation and Extrapolation of Time Trends in Registry Data—Borrowing Strength from Related Populations.” The Annals of Applied Statistics 6 (March). https://doi.org/10.1214/11-aoas498.
Rijn, Marieke H. C. van, Anneke Bech, Jean Bouyer, and Jan A. J. G. van den Brand. 2017. “Statistical Significance Versus Clinical Relevance.” Nephrology Dialysis Transplantation, January. https://doi.org/10.1093/ndt/gfw385.
Royle, J. Andrew, Richard B. Chandler, Charles Yackulic, and James D. Nichols. 2012. “Likelihood Analysis of Species Occurrence Probability from Presence‐only Data for Modelling Species Distributions.” Methods in Ecology and Evolution 3 (January). https://doi.org/10.1111/j.2041-210x.2011.00182.x.
Salas-Eljatib, Christian, Andres Fuentes-Ramirez, Timothy G. Gregoire, Adison Altamirano, and Valeska Yaitul. 2018. “A Study on the Effects of Unbalanced Data When Fitting Logistic Regression Models in Ecology.” Ecological Indicators 85 (February). https://doi.org/10.1016/j.ecolind.2017.10.030.
Sartwell, Philip E., and Frances Stark. 1991. “American Journal of Epidemiology: Its Evolution Since 1965.” American Journal of Epidemiology 134 (November). https://doi.org/10.1093/oxfordjournals.aje.a116004.
Saunders, Sarah P., Matthew T. Farr, Alexander D. Wright, Christie A. Bahlai, Jose W. Ribeiro, Sam Rossman, Allison L. Sussman, Todd W. Arnold, and Elise F. Zipkin. 2019. “Disentangling Data Discrepancies with Integrated Population Models.” Ecology 100 (May). https://doi.org/10.1002/ecy.2714.
Schooling, C. Mary, and Heidi E. Jones. 2018. “Clarifying Questions about ‘Risk Factors’: Predictors Versus Explanation.” Emerging Themes in Epidemiology 15 (August). https://doi.org/10.1186/s12982-018-0080-z.
Schulz, Kenneth F. 1995. “Empirical Evidence of Bias.” JAMA 273 (February). https://doi.org/10.1001/jama.1995.03520290060030.
Skrondal, A. 2003. “Interaction as Departure from Additivity in Case-Control Studies: A Cautionary Note.” American Journal of Epidemiology 158 (August). https://doi.org/10.1093/aje/kwg113.
Stel, V. S., F. W. Dekker, C. Zoccali, and K. J. Jager. 2012. “Instrumental Variable Analysis.” Nephrology Dialysis Transplantation 28 (July). https://doi.org/10.1093/ndt/gfs310.
Stock, Brian C., and Brice X. Semmens. 2016. “Unifying Error Structures in Commonly Used Biotracer Mixing Models.” Ecology 97 (September). https://doi.org/10.1002/ecy.1517.
Tai, Bee‐Choo, David Machin, Ian White, and Val Gebski. 2001. “Competing Risks Analysis of Patients with Osteosarcoma: A Comparison of Four Different Approaches.” Statistics in Medicine 20 (February). https://doi.org/10.1002/sim.711.
Taketomi, Nanami, and Takeshi Emura. 2023. “Consistency of the Estimator for the Common Mean in Fixed-Effect Meta-Analyses.” Axioms 12 (May). https://doi.org/10.3390/axioms12050503.
“The Racial Record of Johns Hopkins University.” 1999. The Journal of Blacks in Higher Education. https://doi.org/10.2307/2999371.
VanderWeele, Tyler J. 2019. “Principles of Confounder Selection.” European Journal of Epidemiology 34 (March). https://doi.org/10.1007/s10654-019-00494-6.
Vansteelandt, Stijn, Maarten Bekaert, and Gerda Claeskens. 2010. “On Model Selection and Model Misspecification in Causal Inference.” Statistical Methods in Medical Research 21 (November). https://doi.org/10.1177/0962280210387717.
Warton, David I., and Francis K. C. Hui. 2011. “The Arcsine Is Asinine: The Analysis of Proportions in Ecology.” Ecology 92 (January). https://doi.org/10.1890/10-0340.1.
Westreich, D., and S. Greenland. 2013. “The Table 2 Fallacy: Presenting and Interpreting Confounder and Modifier Coefficients.” American Journal of Epidemiology 177 (January). https://doi.org/10.1093/aje/kws412.
Wilcox, A. J., C. R. Weinberg, and O. Basso. 2011. “On the Pitfalls of Adjusting for Gestational Age at Birth.” American Journal of Epidemiology 174 (September). https://doi.org/10.1093/aje/kwr230.
Williams, Rick L. 2000. “A Note on Robust Variance Estimation for Cluster‐correlated Data.” Biometrics 56 (June). https://doi.org/10.1111/j.0006-341x.2000.00645.x.
Yamamura, Kohji. 1999. “Transformation Using (x + 0.5) to Stabilize the Variance of Populations.” Population Ecology 41 (December). https://doi.org/10.1007/s101440050026.
Zhang, Zhongheng, Jaakko Reinikainen, Kazeem Adedayo Adeleke, Marcel E. Pieterse, and Catharina G. M. Groothuis-Oudshoorn. 2018. “Time-Varying Covariates and Coefficients in Cox Regression Models.” Annals of Translational Medicine 6 (April). https://doi.org/10.21037/atm.2018.02.12.
Zhao, Lue Ping, and Laurence N. Kolonel. 1992. “Efficiency Loss from Categorizing Quantitative Exposures into Qualitative Exposures in Case-Control Studies.” American Journal of Epidemiology 136 (August). https://doi.org/10.1093/oxfordjournals.aje.a116520.
Zipkin, Elise F., Brian D. Inouye, and Steven R. Beissinger. 2019. “Innovations in Data Integration for Modeling Populations.” Ecology 100 (June). https://doi.org/10.1002/ecy.2713.
Zubizarreta, José R., Dylan S. Small, Neera K. Goyal, Scott Lorch, and Paul R. Rosenbaum. 2013. “Stronger Instruments via Integer Programming in an Observational Study of Late Preterm Birth Outcomes.” The Annals of Applied Statistics 7 (March). https://doi.org/10.1214/12-aoas582.