What Does ‘The Science’ Say? It’s Getting Harder To Tell


In late January, President Joe Biden issued a statement committing his administration to restore the public’s trust by “[making] evidence-based decisions guided by the best available science and data,” without bias from political preferences. That sounds like a great idea, and in an age of constant misinformation and confusion, it’s likely to be well-received on both sides of the aisle.

But what if the scientific evidence itself is flawed? Take, for example, Biden’s plan to more than double the federal minimum wage, from $7.25 to $15 an hour. Some of the “science” on the issue says raising the minimum wage will profoundly benefit the economy and the average quality of life.

Too often, however, fact-checking stops as soon as someone cites a study published in an academic journal with a fancy-sounding name. What people don’t realize is that data conflicts all the time, and everyone — yes, even well-credentialed academics — makes mistakes.

Consider again the minimum-wage issue. Contrary to what political spokespeople would have you believe, the scientific literature on the effects of raising the minimum wage contains a substantial amount of conflicting evidence. Even on controversial and highly scrutinized topics such as the coronavirus, quality control has broken down, with widely cited studies later found wanting in rigor and reliability.

Experts aren’t infallible. Indeed, the sooner we recognize this truth and take steps toward quality control, the sooner we can mitigate the often unintentional spread of misinformation and poorly informed policy decisions.

As someone who both works in academia and writes for traditional media, I’ve been struck by the lack of public awareness about different levels of quality and data transparency in scientific journals. Just because a study has been published in a scientific journal doesn’t mean it’s conclusive or even constructive evidence to support an argument.

Today, academic journals worldwide publish more than 2.5 million articles a year, and the number of journals keeps growing. This “academic proliferation” has created new problems, including predatory pay-to-publish journals and poor quality control in peer review.

For instance, Cabell’s Blacklist lists more than 10,000 journals that target desperate academics, offering them publication based on how much they can pay, rather than the quality of their work. When you think about the way we use these journals, that’s an incredibly troubling reality. Scientific publications are cited as evidence because, supposedly, they have been thoroughly reviewed by experts and checked for any errors in their data analysis or study design.

Unfortunately, however, poor-quality “scientific” papers exist in abundance, and they’re hard to identify even for those who generally know what to look for. The main metric academics currently use to evaluate journal quality is the “impact factor,” which divides the number of citations a journal’s articles received in a given year by the number of articles the journal published over the two preceding years.

Still, the impact factor is akin to a Yelp rating: it measures quantity and popularity, not objective quality. As such, journals sometimes manipulate the figure to boost their rankings, through questionable practices such as excessive self-citation.
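
To make the arithmetic concrete, here is a minimal sketch of how a standard two-year impact factor is computed. Every number below is hypothetical, invented purely for illustration:

```python
# A toy impact-factor calculation. All figures are hypothetical.

# Citable articles a journal published in the two preceding years.
articles_2019 = 120
articles_2020 = 135

# Citations received in 2021 that point to those articles.
citations_2021 = 480

impact_factor = citations_2021 / (articles_2019 + articles_2020)
print(f"2021 impact factor: {impact_factor:.2f}")  # prints 1.88
```

Notice that nothing in this division says anything about whether the cited studies are sound; it only counts how often they are cited.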

While academics may know which journals are the “best” through personal experience writing for, submitting to, reviewing for, and reading them, the general reader is left at sea. For those who aren’t specialists in a given academic field, there’s simply no reliable way to identify which journals are reputable and trustworthy.

Additionally, even if the general reader were aware of which journals are reputable, researchers still make mistakes. Statistical analysis is often complicated and opaque. A journal may print a study one month, only to discover later that the data were flawed. The journal may print a retraction, but not before the article has been cited numerous times in the press or the evidence has been used to support a policy decision (like one of Biden’s executive orders).

To address this situation, academics need to take steps toward quality control, and journalists and policymakers must be sure to cite only high-quality research.

Many have suggested much-needed reforms to the incentive structure behind rapid publishing in academia. The pressure to maximize quantity at the expense of quality, whether for hiring, promotion, or research grant funding, is a core driver of academic proliferation. Moreover, platforms such as the Open Science Framework have set standards for data transparency, allowing anyone to check researchers’ work and verify the integrity of the analysis.
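
To illustrate what that kind of transparency enables, here is a minimal sketch of re-checking a paper’s headline number against its openly deposited data. The file name, column name, and reported value are all hypothetical:

```python
# A minimal sketch of what data transparency makes possible: anyone can
# re-derive a paper's headline statistic from the authors' shared data.
# The file name, column, and reported value below are all hypothetical.

import pandas as pd

REPORTED_MEAN = 12.4  # the effect size claimed in the (hypothetical) paper

df = pd.read_csv("shared_study_data.csv")  # data deposited alongside the paper
recomputed = df["effect_size"].mean()

if abs(recomputed - REPORTED_MEAN) > 0.05:
    print(f"Discrepancy: paper reports {REPORTED_MEAN}, data yields {recomputed:.2f}")
else:
    print("Reported statistic reproduces from the shared data.")
```

When the underlying data are public, a check like this takes minutes; when they aren’t, readers have little choice but to take the reported numbers on faith.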

Meanwhile, policymakers are starting to introduce more stringent guidelines on how to use academic research. The Environmental Protection Agency, for example, recently finalized a rule that codifies internal requirements to check for data transparency before relying on research for decision-making.

Ultimately, we should all be wary when someone claims “the science says…,” as far too many of us take facts on faith so long as they support the policies of our preferred political party.