We’re advised to ‘follow the science’ but some of it is wrong, fraudulent or not even science
By Glenn Reynolds
In recent years, there have been a lot of catchphrases around science: “Follow the science!” “We believe in science!” Even “The science is settled!”
Well, sometimes it’s not settled. Sometimes it’s not even really science. But lots of people believe in it or follow it anyway. It’s a global problem.
Most recently, we learned that a widely noticed 2012 study co-authored by Dan Ariely — whom the journal Science refers to as a “superstar honesty researcher” — was based on fake data.
Ariely is indeed a superstar, and his work is highly influential. He’s written multiple New York Times bestsellers. He founded a center at Duke University. And his research has affected the policies of corporations and government institutions.
Ariely’s 2012 paper found that people were more honest when they signed a promise to be honest at the beginning of a transaction than when they signed the same promise at the end. The idea was that the early exposure to the importance of honesty set the tone. The Obama administration’s Social and Behavioral Sciences Team recommended this approach to the government. It seemed like a cheap and easy way of promoting good behavior.
The only problem is, it’s not true. Other scientists found that his work couldn’t be replicated. And a deep dive into the data Ariely used determined that it couldn’t possibly be correct. Even Ariely agrees that the criticisms are “damning” and “clear beyond doubt.”
Did Ariely commit fraud — he says no — or was the data set he got from an insurance company faked for some reason? People are looking into that, but in a way the problem is bigger. Whether or not it was Ariely’s fault, a study that influenced policy turns out to have been baseless. And scientific peer review, often defended as the gold standard for research, didn’t spot the problem.
But lots of stuff gets past peer review. Back in 2018, several hoaxers slipped works that were dubious on their face past peer review and into publication. One study, which made it into the journal Sex Roles, employed “thematic analysis of table dialogue” to determine why heterosexual men go to Hooters, a question that would seem to answer itself. Another looked at “Human reactions to rape culture and queer performativity at urban dog parks in Portland, Oregon.” And a third just scattered some modern buzzwords into translated passages from “Mein Kampf” and was published under the title “Our Struggle Is My Struggle” in a journal of feminist social work.
Meanwhile, leading names in the field of social psychology turn out to have committed research fraud to an extent that tainted the entire field. And as the Wall Street Journal reported, “One noted biostatistician has suggested that as many as half of all published findings in biomedicine are false.”
Research on “implicit bias” drives all sorts of campus and government policies on race and diversity, but the Implicit Association Test underlying it turns out to be highly dubious. In 2012, the firm Amgen set out to reproduce the results in 53 “landmark” studies in hematology and oncology. Only six of them replicated.
Indeed the term “replication crisis” is now often used to refer to a situation in which so many major and influential studies don’t produce the same results — or any results — when other researchers set out to test them. And it really is a crisis.
At one level, the problem is that billions in research money is wasted.
But really, the problem is worse: Bad research guides behavior — whether it’s government policy or drug development budgets or energy research — in the wrong direction.
Producing such research is a natural temptation, conscious or subconscious, for scientists. Success depends on funding, and funding agencies want results. So do university administrations. And all too often, both are more interested in work that produces headlines than in careful science, and headlines often drive policy.
With modern tools, it’s easy to torture a data set to produce some sort of interesting-sounding result, even if it’s not really valid. And that’s before we get as far as outright fraud.
Even more dangerous than the things we don’t know are the things we think we know that are wrong. Bad science produces things that sound important — maybe because they match our prejudices — but that are wrong. That’s not science at all, and we should neither believe in it nor follow it.
Glenn Harlan Reynolds is a professor of law at the University of Tennessee and founder of the InstaPundit.com blog.