Scientific Coordination
Alisa Remizova
Administrative Coordination
Janina Götsche
Synthesizing Evidence: Aggregating Support for your Hypothesis across Studies
About
Location:
Cologne / Unter Sachsenhausen 6-8
General Topics:
Course Level:
Format:
Software used:
Duration:
Language:
Fees:
Students: 330 €
Academics: 495 €
Commercial: 990 €
Keywords
Additional links
Lecturer(s): Jessica Daikeler, Rebecca Kuiper
Course description
The hallmark of social and behavioral science is replication: for science to be robust, research must be replicated. Two types of replications can be distinguished: (i) exact replications, which use homogeneous study designs, meaning that the operationalization (and thus the number) of variables and the statistical model are the same as in the original study; and (ii) conceptual replications, which use heterogeneous/diverse study designs.
Exact replication can be feasible for experimental research, which offers a high degree of experimenter control, but may be hard to publish because of a presumed lack of originality. For other types of research, such as observational studies, exact replication is hardly realizable, and such studies are typically replicated conceptually. Unfortunately, statistical tools for aggregating evidence across conceptual replications are not yet commonly used, leaving researchers to rely on exact replications; some researchers even contend that meta-analysis should only be conducted on randomized controlled trials. If meta-analysis, currently the ubiquitous approach for summarizing results, were used to aggregate results from heterogeneous studies, the conclusions would be fundamentally unreliable, because incomparable estimates would be averaged. The use of different designs in the primary studies can be controlled for in a meta-regression; nevertheless, meta-regression aggregates estimates only for studies with the same design and renders a separate overall estimate for each design type in the primary studies.
In conceptual replications, the central theory/hypothesis is the shared characteristic. Evaluating hypotheses is ubiquitous in the behavioral, social, and biomedical sciences. One can use so-called 'evidence synthesis' or 'support aggregation' to combine the evidence for a hypothesis of interest from both conceptual and exact replications, which is what we will discuss in this workshop.
Day 1: Collecting Primary Studies
The first step in conducting a meta-analysis or evidence synthesis is collecting primary studies, including replication studies and other studies investigating the same phenomenon or hypothesis. On Day 1, we will cover how to systematically review and collect these studies. This includes formulating a research question, defining eligibility criteria for including and excluding studies, conducting the literature search, screening studies, and performing study coding.
Day 2: Hypothesis Evaluation
On Day 2, we will focus on evaluating hypotheses using model selection methods. The focus lies on theory-based/informative hypotheses (alternatives to the null hypothesis). For example, one can evaluate whether Medicine A is more effective (e.g., increases happiness more) than Medicine B, which in turn is more effective than a placebo (in an ANOVA model: µA > µB > µPlacebo). Alternatively, one can examine whether the number of children is a stronger predictor of happiness than income and age (in a regression model with standardized parameters: βNoC > {βInc, βAge}). Two information-theoretic criteria for evaluating such hypotheses are the GORIC and the GORICA. We will introduce you to these methods, which quantify the evidence for hypotheses.
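As a small taste of the hands-on part, the sketch below shows how such an informative hypothesis could be evaluated with the goric function from restriktor on simulated data. The data, variable names, and hypothesis are invented for illustration, and the call assumes the current goric() interface (a fitted model plus a named list of constraint strings); the workshop materials provide the authoritative examples.
# Illustration only: evaluating an informative hypothesis with restriktor::goric()
library(restriktor)
set.seed(123)
# Toy data: happiness scores for groups A, B, and Placebo
dat <- data.frame(
  group = factor(rep(c("A", "B", "Placebo"), each = 20)),
  happiness = c(rnorm(20, mean = 6), rnorm(20, mean = 5.5), rnorm(20, mean = 5))
)
# ANOVA-type model without intercept, so each coefficient is a group mean
fit <- lm(happiness ~ group - 1, data = dat)
# Informative hypothesis: mean of A > mean of B > mean of Placebo
H1 <- "groupA > groupB > groupPlacebo"
# GORIC weights quantify the support for H1 relative to its complement
result <- goric(fit, hypotheses = list(H1 = H1), comparison = "complement")
result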
Day 3: Evidence Synthesis using GORIC(A)
Day 3 will cover evidence synthesis using GORIC(A). When multiple studies, regardless of their design, examine the same central hypothesis, the evidence from these studies can be combined. This process, known as evidence synthesis or support aggregation, is akin to a meta-analysis that can also handle studies with different designs. We will demonstrate how to perform evidence synthesis using GORIC(A).
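To give an impression of the underlying idea, the hand-worked sketch below illustrates an 'added evidence' style of aggregation: per-study log-likelihood and penalty values for the central hypothesis and its complement are summed across studies, after which overall GORIC(A) values and weights are computed. All numbers are made up, and in the workshop this aggregation is done with restriktor's dedicated evidence-synthesis functionality rather than by hand.
# Hypothetical illustration of adding evidence across two studies
# Per-study log-likelihood and penalty values for H1 and its complement
loglik  <- matrix(c(-12.3, -14.1,    # study 1: H1, complement
                    -20.8, -21.0),   # study 2: H1, complement
                  nrow = 2, byrow = TRUE,
                  dimnames = list(c("study1", "study2"), c("H1", "complement")))
penalty <- matrix(c(1.5, 2.0,
                    1.5, 2.0),
                  nrow = 2, byrow = TRUE, dimnames = dimnames(loglik))
# Sum the evidence across studies ('added' approach)
sum_loglik  <- colSums(loglik)
sum_penalty <- colSums(penalty)
# Overall GORIC(A) values and weights: GORIC = -2*loglik + 2*penalty
goric_values <- -2 * sum_loglik + 2 * sum_penalty
weights <- exp(-0.5 * goric_values) / sum(exp(-0.5 * goric_values))
round(weights, 3)   # aggregated support for H1 versus its complement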
Note: This workshop does not cover traditional meta-analysis (i.e., fixed or random effects models) or Bayesian meta-analysis. Instead, it will teach you an alternative method to aggregate results from multiple studies by combining evidence for a hypothesis from various studies.
Organizational structure of the course
The workshop comprises lectures with hands-on parts and a lab meeting. During the lab meeting, participants can work on exercises provided by the lecturers, either individually or in groups. If desired, participants can also apply the learned methods to their own data or projects.
The lecturers will be available for questions, both regarding the provided exercises and the application to your own data. Depending on participants' wishes and the available time, the lecturers can also discuss one or more exercises and/or applications in a plenary fashion.
Target group
The course is suited for you if:
Learning objectives
By the end of the course, you will:
Prerequisites
The workshop itself comprises lectures with hands-on parts and a lab meeting using R.
No prior experience with R is assumed, but some familiarity with R would be very useful.
Software and hardware requirements
You will need to bring a laptop to participate successfully in this workshop. We will work with R and RStudio. Please install the latest versions of R (https://cran.r-project.org/) and RStudio (https://posit.co/download/rstudio-desktop/) before the start of the workshop.
We will work with the goric function from the R package 'restriktor' (the newest version, possibly one that is not yet available on CRAN but only on GitHub). To install and load restriktor, please use the following R code:
if (!require("restriktor")) install.packages("restriktor")
library(restriktor)
If you want to install restriktor from GitHub instead:
if (!require("devtools")) install.packages("devtools")
library(devtools)
install_github("LeonardV/restriktor")
library(restriktor)