Publications by FelixS
A comment on “We cannot afford to study effect size in the lab” from the DataColada blog
In a recent post on the DataColada blog, Uri Simonsohn wrote about why “We cannot afford to study effect size in the lab”. The central message is: if we want accurate effect size (ES) estimates, we need large sample sizes (he suggests four-digit n’s). As this is hardly possible in the lab, we have to use other research tools, like online studies...
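To see why lab-sized samples cannot pin down an effect size, here is a small R sketch (mine, not from the post) of how the width of a 95% confidence interval around an observed correlation of r = .30 shrinks with n, using the Fisher z transformation:

```r
# Illustrative sketch: width of the 95% CI around an observed r = .30
# for increasing sample sizes, via the Fisher z transformation.
ci_width <- function(r, n) {
  z  <- atanh(r)                # Fisher z transform of r
  se <- 1 / sqrt(n - 3)         # standard error of z
  ci <- tanh(z + c(-1, 1) * qnorm(.975) * se)  # back-transform to r scale
  diff(ci)                      # width of the 95% CI
}
ns <- c(50, 100, 500, 1000, 5000)
round(sapply(ns, ci_width, r = .30), 3)
# widths: ~0.51, 0.36, 0.16, 0.11, 0.05
```

The interval narrows only at a rate of roughly 1/sqrt(n), which is why four-digit samples are needed before the estimate becomes precise.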
Reanalyzing the Schnall/Johnson “cleanliness” data sets: New insights from Bayesian and robust approaches
I want to present a re-analysis of the raw data from two studies that investigated whether physical cleanliness reduces the severity of moral judgments: the original study (n = 40; Schnall, Benton, & Harvey, 2008) and a direct replication (n = 208; Johnson, Cheung, & Donnellan, 2014). Both data sets are provided on the Open Science ...
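For a flavor of the Bayesian part of such a re-analysis, here is a minimal sketch using the BayesFactor package; the group means, SDs, and variable names below are made up for illustration (the actual raw data are the ones provided online):

```r
# Minimal sketch: default Bayes factor t-test for a two-group design,
# on simulated stand-in data (NOT the Schnall/Johnson raw data).
library(BayesFactor)
set.seed(1)
clean   <- rnorm(20, mean = 5.0, sd = 1.5)  # hypothetical moral judgments, cleanliness prime
neutral <- rnorm(20, mean = 5.8, sd = 1.5)  # hypothetical moral judgments, neutral prime
ttestBF(x = clean, y = neutral)  # BF10: evidence for a difference vs. the null
```

A BF10 near 1 would mean the data are largely uninformative; values well above or below 1 quantify evidence for a difference or for the null, respectively.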
What does a Bayes factor feel like?
A Bayes factor (BF) is a statistical index that quantifies the evidence for a hypothesis, compared to an alternative hypothesis (for introductions to Bayes factors, see here, here or here). Although the BF is a continuous measure of evidence, humans love verbal labels, categories, and benchmarks. Labels give interpretations of the objective index...
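For illustration, here is a small R helper (the function and its name are my own) that maps a Bayes factor BF10 onto the verbal labels of the Jeffreys scheme as adapted by Lee and Wagenmakers:

```r
# Map a Bayes factor (BF10) to a conventional verbal evidence label.
bf_label <- function(bf) {
  cut(bf,
      breaks = c(0, 1/100, 1/30, 1/10, 1/3, 1, 3, 10, 30, 100, Inf),
      labels = c("extreme for H0", "very strong for H0", "strong for H0",
                 "moderate for H0", "anecdotal for H0", "anecdotal for H1",
                 "moderate for H1", "strong for H1", "very strong for H1",
                 "extreme for H1"))
}
bf_label(c(0.02, 0.5, 2.5, 15, 250))
# very strong for H0, anecdotal for H0, anecdotal for H1,
# strong for H1, extreme for H1
```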
A Compendium of Clean Graphs in R
[This is a guest post by Eric-Jan Wagenmakers and Quentin Gronau introducing the RGraphCompendium. Click here to see the full compendium!] Every data analyst knows that a good graph is worth a thousand words, and perhaps a hundred tables. But how should one create a good, clean graph? In R, this task is anything but easy. Many users find it almos...
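As a taste of what the compendium teaches, here is a minimal sketch (with made-up data and my own parameter choices) of one recurring idea: stripping a default base-R scatterplot down to its essentials:

```r
# A cleaner base-R scatterplot: no surrounding box, redrawn axes,
# horizontal y-axis labels. Data are simulated for illustration.
set.seed(2)
x <- rnorm(50)
y <- 0.6 * x + rnorm(50, sd = 0.8)
par(mar = c(4.5, 4.5, 1, 1))            # tighter margins
plot(x, y, pch = 21, bg = "grey", axes = FALSE,
     xlab = "Predictor", ylab = "Outcome")
axis(1)                                  # clean x axis
axis(2, las = 1)                         # y axis with horizontal labels
```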
What’s the probability that a significant p-value indicates a true effect?
If the p-value is < .05, then the probability of falsely rejecting the null hypothesis is < 5%, right? That means that at most 5% of all significant results are false positives (that’s what we control with the α rate). Well, no. As you will see in a minute, the “false discovery rate” (a.k.a. false-positive rate), which indicates the probab...
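The arithmetic behind that claim is easy to sketch: with significance level α, power 1 − β, and a prior probability π that a studied effect is real, the share of significant results that are false positives is α(1 − π) / (α(1 − π) + (1 − β)π). A few lines of R (my notation; the default power and prior below are illustrative assumptions):

```r
# Probability that a significant result is a false positive, given
# alpha, statistical power, and the prior probability that a studied
# effect is real.
fdr <- function(alpha = .05, power = .35, prior = .30) {
  fp <- alpha * (1 - prior)  # expected share of false positives
  tp <- power * prior        # expected share of true positives
  fp / (fp + tp)             # false discovery rate among significant results
}
fdr()                          # 0.25: a quarter of significant results, not 5%
fdr(power = .80, prior = .50)  # ~0.06: better, with high power and plausible effects
```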
Optional stopping does not bias parameter estimates (if done correctly)
tl;dr: Optional stopping does not bias parameter estimates from a frequentist point of view if all studies are reported (i.e., no publication bias exists) and effect sizes are appropriately meta-analytically weighted. Several recent discussions in the Psychological Methods Facebook group revolved around the question of whether an optional stopping proce...
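Here is a toy simulation of that claim (my own setup, not the post's code): every simulated study samples in batches of 10 up to n = 100 and stops as soon as p < .05. Naively averaging the study means is biased upward, while weighting each study by its sample size, as a fixed-effect meta-analysis of means would, recovers the true effect:

```r
# Optional stopping: unweighted vs. n-weighted pooling of study means.
set.seed(3)
true_mu <- 0.3
one_study <- function() {
  x <- c()
  repeat {
    x <- c(x, rnorm(10, mean = true_mu))  # collect another batch of 10
    if (t.test(x)$p.value < .05 || length(x) >= 100) break  # optional stopping
  }
  c(mean = mean(x), n = length(x))
}
res <- t(replicate(5000, one_study()))
mean(res[, "mean"])                           # biased upward (> 0.3)
weighted.mean(res[, "mean"], w = res[, "n"])  # ~0.3: essentially unbiased
```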
Introducing the p-hacker app: Train your expert p-hacking skills
[This is a guest post by Ned Bicare, PhD] Start the p-hacker app! My dear fellow scientists! “If you torture the data long enough, it will confess.” This aphorism, attributed to Ronald Coase, has sometimes been used in a disrespectful manner, as if it were wrong to do creative data analysis. In fact, the art of creative data analysis ha...
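To make the temptation concrete, here is a toy simulation (my own code, not the app's) of one classic move: testing three outcome variables plus their average and reporting whichever p-value is smallest:

```r
# False-positive rate when the "best" of several tests is reported,
# under a true null effect.
set.seed(4)
p_hack <- function(n = 20) {
  g  <- rep(0:1, each = n)              # two groups, no true difference
  dv <- replicate(3, rnorm(2 * n))      # three outcome variables
  ps <- c(apply(dv, 2, function(y) t.test(y ~ g)$p.value),
          t.test(rowMeans(dv) ~ g)$p.value)  # ...plus their average
  min(ps)                               # report the most flattering test
}
mean(replicate(10000, p_hack()) < .05)  # well above the nominal .05
```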
Correcting bias in meta-analyses: What not to do (meta-showdown Part 1)
tl;dr: Publication bias and p-hacking can dramatically inflate effect size estimates in meta-analyses. Many methods have been proposed to correct for such bias and to estimate the underlying true effect. In a large simulation study, we found out which methods do not work well under which conditions, and we give recommendations on what not to use. Estim...
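The basic inflation mechanism is easy to demonstrate with a toy simulation (my own setup, far simpler than the full simulation study): with a true standardized effect of d = 0.2 and n = 30 per group, averaging only the significant studies inflates the estimate several-fold:

```r
# Publication bias: meta-analytic mean over all studies vs. over
# significant ("published") studies only.
set.seed(5)
sim_study <- function(n = 30, d = 0.2) {
  x <- rnorm(n, mean = d)
  y <- rnorm(n)
  s_pool <- sqrt((var(x) + var(y)) / 2)    # pooled SD
  c(d_obs = (mean(x) - mean(y)) / s_pool,  # observed Cohen's d
    p     = t.test(x, y)$p.value)
}
res <- t(replicate(20000, sim_study()))
mean(res[, "d_obs"])                  # all studies: ~0.20, unbiased
mean(res[res[, "p"] < .05, "d_obs"])  # significant only: ~0.6, badly inflated
```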