CALGARY, AB, Oct. 4, 2012 /Troy Media/ – Responding to a critic's observation that most science fiction is junk, writer Theodore Sturgeon noted that the same could be said for just about everything. Sturgeon's Revelation, also referred to as Sturgeon's Law, is that ninety per cent of everything is crud.
'Everything' includes fiction and non-fiction writing, film, television, news stories, consumer products, rules and regulations, and what we hear from industry and government. Probably no surprise there. What is a surprise is that there is a scientific basis for Sturgeon's Revelation: it holds even for formal scientific and academic research, because of how statistical significance is misused.
We've all become familiar with the language of statistical significance in the poll results peppering the news around election time. When told that '74% of Canadians are in favor of something, plus or minus 3%, nineteen times out of twenty', that's statistical significance. So is news that Obama's four-point lead is beyond the 'margin of error'. Polling is one of the few places where statistical significance calculations are applied and interpreted properly, informing us of poll accuracy.
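To make that concrete, here is a rough sketch of the arithmetic behind such a poll result. The sample size of 1,000 is my own assumption for illustration; the calculation simply shows where 'plus or minus 3%, nineteen times out of twenty' comes from.

```python
# Rough sketch of poll margin-of-error arithmetic (sample size is assumed).
import math

p_hat = 0.74   # reported proportion in favour
n = 1000       # assumed number of respondents
z = 1.96       # z-score for 95% confidence, i.e. "nineteen times out of twenty"

margin_of_error = z * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"74% plus or minus {margin_of_error:.1%}, 19 times out of 20")
# With these numbers the margin works out to roughly 2.7%,
# in line with the "plus or minus 3%" quoted in news reports.
```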
While it tells us about poll accuracy, statistical significance says nothing about the importance of those results. Is Obama's four-point lead big, insurmountable, or important? That's a question for political pundits, not statistics. Statistical significance answers 'How much?' but can't answer 'So what?'. Where research goes off the rails is in pretending otherwise.
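One way to see the distinction: with a large enough sample, even a trivial difference becomes statistically significant. The sketch below uses invented numbers of my own choosing purely to illustrate the point.

```python
# Illustration: a practically negligible difference is "statistically
# significant" once the sample is big enough. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200_000  # a very large sample in each group
group_a = rng.normal(loc=0.00, scale=1.0, size=n)
group_b = rng.normal(loc=0.02, scale=1.0, size=n)  # tiny true difference

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"p-value: {p_value:.2e}")                                      # well below 0.05
print(f"difference in means: {group_b.mean() - group_a.mean():.3f}")  # about 0.02
# The test tells us the difference is reliably detectable;
# it says nothing about whether a difference this small matters.
```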
For example, the Memorial Sloan Kettering Cancer Center recently released a study claiming to offer "the most robust evidence to date that acupuncture is a reasonable referral option." A lead researcher is quoted as saying, "The effects of acupuncture are statistically significant . . . so we conclude [they] aren't due merely to the placebo effect." Whatever you may think about acupuncture, conclusions like this are delusional.
The misuse or misinterpretation of statistical significance is "Why Most Published Research Findings Are False." The classic paper of that title by John P. A. Ioannidis notes:
“Several methodologists have pointed out that the high rate of nonreplication (lack of confirmation) of research discoveries is a consequence of the convenient, yet ill-founded strategy of claiming conclusive research findings solely on the basis of a single study assessed by formal statistical significance, typically for a p-value less than 0.05. . . . It can be proven that most claimed research findings are false.”
How often are they false? How often are statistically significant findings of no practical importance? Results vary, but in a nice symmetry with Sturgeon's Revelation, a rough estimate is about 90 per cent of the time.
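A figure in that range can be reproduced with the kind of arithmetic Ioannidis uses: the share of 'significant' findings that are actually true depends on the prior odds that a tested hypothesis is true, the study's power, and the significance threshold. The numbers below are my own illustrative assumptions, not figures from the paper.

```python
# Ioannidis-style arithmetic: how many "significant" findings are real?
# The prior and power values are illustrative assumptions, not data.
alpha = 0.05   # significance threshold (p < 0.05)
power = 0.50   # chance a study detects a real effect when one exists
prior = 0.01   # share of tested hypotheses that are actually true

true_positives = prior * power          # real effects that test "significant"
false_positives = (1 - prior) * alpha   # null effects that test "significant"

ppv = true_positives / (true_positives + false_positives)
print(f"Significant findings that are real:  {ppv:.0%}")      # about 9%
print(f"Significant findings that are false: {1 - ppv:.0%}")  # about 91%
# With these assumptions, roughly 90 per cent of statistically
# significant findings are false alarms.
```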
When next you hear or read about the latest health research claiming statistically significant results, remember: the results are probably wrong. Also, the more a study pumps its statistical significance, the greater the likelihood it's crud. Good research mentions statistical significance in passing, focusing instead on material significance.
The same is true for government policy and program evaluation research in areas such as economics, housing, education, and social and human services. If a study trumpets the statistical significance of its findings, it's probably crud.
Businesses survey customers and make product and service redesign decisions based on statistically significant findings. Over 90% of these findings are false, which explains why all those 'new & improved' products aren't.
If you work for a big company, you've probably participated in an employee engagement survey. Statistically significant increases or decreases in engagement scores often influence the distribution of rewards within an organization, but since over 90% of these results are wrong, the practice destroys employee engagement rather than enhancing it.
Other examples are detailed in The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice and Lives by Stephen Ziliak and Deirdre McCloskey. In providing a positive review, Nobel Laureate Thomas Schelling stated he couldn’t fathom why statistical significance is still used in this way.
I can. It’s because it’s so easy. Statistical insiders know using statistical significance this way is crud, but so what? There’s no cost to this lying with statistics.
Until now, perhaps. The United States Supreme Court recently rejected the claim by Matrixx Initiatives that it wasn't required to report side effects of Zicam because they lacked statistical significance. The Court sided with science, saying "medical professionals and regulators act on the basis of evidence of causation that is not statistically significant." In other words, statistical significance, or the lack of it, is not the same thing as a scientific finding and, therefore, not a defense for anything. This rightly opens the door for lawsuits against those who lie to people by selling statistical significance as practical importance.
Theodore Sturgeon was more right than he knew.