Researchers rely on statistics in their work, and p-values are among the most commonly used statistical tools, in fisheries and marine conservation science as in many other fields. P-values are widely interpreted as measuring the probability that a null hypothesis is true or false: a p-value less than 0.05, for example, is often taken to mean an experiment’s findings are “significant” and the null hypothesis should therefore be assumed false.
But that interpretation is a fundamental misunderstanding of what p-values actually tell us. As data scientist Mike Hay points out in three recent blogs on OpenChannels.org (first, second, and third), a p-value is really the probability that, if the null hypothesis were true, we would obtain a result as extreme as, or more extreme than, the one our data actually produced. That is quite different from how p-values are typically understood and applied.
As evidence of that difference, Hay shows how a “significant” result under 0.05 could actually have more than a 50% chance of being incorrect(!). “Calling everything with p < 0.05 ‘significant’ is just plain wrong,” he writes. Hay works for OCTO, which also publishes MPA News.
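The arithmetic behind a claim like Hay’s can be sketched with a few lines of code. The numbers below (the fraction of tested hypotheses with a real effect, and the statistical power of a typical study) are illustrative assumptions, not figures from Hay’s blogs; they simply show how a “significant” result can be wrong more than half the time when real effects are rare and studies are underpowered.

```python
alpha = 0.05        # significance threshold
power = 0.35        # P(p < alpha | real effect) -- assumed for illustration
prior_real = 0.10   # fraction of tested hypotheses with a real effect -- assumed

# Expected rates of each outcome across many tests:
false_positives = alpha * (1 - prior_real)  # null true, but p < 0.05 anyway
true_positives = power * prior_real         # real effect, correctly detected

# Probability that a given "significant" result is actually a false positive:
false_discovery_rate = false_positives / (false_positives + true_positives)
print(f"P(finding is wrong | p < 0.05) = {false_discovery_rate:.0%}")  # → 56%
```

Under these assumptions, more than half of all “significant” findings would be false positives, even though every individual test used the conventional 0.05 threshold correctly.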
Interpreting p-values incorrectly presents a threat to the credibility and accuracy of scientific results, he writes. The fields of psychology and medical science are currently experiencing replication crises due to the use of poor statistical methods. “Fisheries and marine policy have many of the same risk factors driving the unreliability of scientific findings in those other fields, but no one has actually attempted to do any replication studies yet,” says Hay.
His blogs are a plea to marine scientists to tighten up their statistical methods. Hay recommends greater use of Bayesian statistical models, which can more easily avoid the pitfalls of p-values.
For more information:
Mike Hay, OCTO. Email: firstname.lastname@example.org