What’s Fueling Our Growing Loss of Faith in Big Science?
By Bill Frezza
January 11, 2013 | The Skeptical Outsider Guest Column | The scientific method is arguably one of the key pillars of Western Civilization. Ironically, the power of science has become so well established that it is now taken as an article of faith by politicians and voters who wouldn’t know the difference between good science and bad if it bit them in the keister. As a result, no society in history has provided as much public, private, and corporate science funding as the United States.
Now, with runaway entitlement spending forcing politicians to search for savings amidst discretionary budget items, don’t be surprised if the tide begins to turn against the hundreds of billions of dollars Uncle Sam dishes out to scientists every year. What will legislators do when they start asking whether taxpayers are getting their money’s worth and learn that a disturbing amount of work published in leading scientific journals is not reproducible? How will proponents defend the 40-year War on Cancer’s increasingly diminishing returns? Trumpeting the latest developments in gene sequencing starts to get old as more and more time passes with only minimal clinical impact.
Think about the modern business model of Big Science – an interconnected set of interests whose tentacles extend into academia, foundations, and major corporations. Advocates of a variety of causes across numerous fields—from health care to agribusiness to energy and the environment—selectively promote scientific results produced by legions of scientists, some of whom are independent and others not. These pronouncements are generally aimed at attracting more public and private research funding, selling more goods and services, or impacting laws and regulations that control the selling of goods and services. Sound science helps policymakers and consumers make wise choices. Bad science, not so much.
Critics who question these pronouncements, concerned about the influence of bad science, are often denounced as “anti-science”—their challenge to a given prediction, policy, hypothesis, or request for funding equated with rejecting the scientific method itself. Like accusations of blasphemy, charges of being a “science denier” are difficult to defend against, especially when partisans turn to demonization and ad hominem attacks.
You would think such tactics would be beyond the pale, given that the scientific method is strengthened by a healthy skepticism that is best satisfied by the delivery of reproducible data subject to open scrutiny. But that has not been the trend. Just look at the debate on global warming.
Implicit in our faith in science is a belief that the scientist is a dispassionate observer, a seeker of truth ready to follow the data wherever it leads, and an objective scribe recording and reporting the results of properly constructed experiments. The public’s faith would be challenged if people got the idea that corruption could influence the behavior of scientists.
And yet, that is exactly what is happening.
Corruption is a complex, multifaceted phenomenon. It can be overt, as when a corrupting influence buys a predetermined scientific outcome from researchers prepared to manipulate data to promote a lie. Examples of alleged commercial corruption—some well founded and others not—include cancer studies funded by the tobacco industry, GMO safety studies funded by agribusiness interests, anti-GMO studies funded by environmental activists, pump-and-dump biotech startups reaching for that big score, and the testimonials of medical experts secretly on the payroll of peddlers of ineffective pharmaceuticals, questionable diagnostic tests, or unnecessary surgical procedures. How does one separate the good science from the bad?
But corruption can also be more subtle, a cousin of moral hazard. Moral hazard is generated when incentives (both financial and non-financial) lead some scientists to fool not just the public and their peers but themselves, all while maintaining a genuine belief in their own integrity. This type of corruption relies not on the outright manipulation of data but on cherry picking, confirmation bias, poor controls, and the willingness of legions of scientists-in-training to deliver questionable results in return for their Ph.D. diplomas, which confer acceptance into the guild and future access to their own research grants.
Circumstantial evidence is mounting that moral hazard based corruption is becoming frighteningly commonplace, as study after study points to the increasing number of irreproducible results in peer-reviewed publications. Troubling anecdotes are also emerging from behind the screen of omerta that shields Principal Investigators (PIs), peer reviewers, and thesis committees from scrutiny, including tales of foreign students told their visas will not be renewed if they don’t deliver experimental results confirming their PIs’ pet theories.
The political left vigorously promotes the idea that the profit motive is ipso facto evidence of corruption, ignoring corruption in academia and publicly funded research programs as if the renunciation of profits were ample proof of purity. Similarly, the political right often ends up making excuses for shoddy research when it’s conducted by for-profit practitioners, promoting the idea that the market can always police itself, while assuming that science produced by government-funded laboratories, used by bureaucrats to set policies that invariably demand the expenditure of large quantities of other people’s money, is corrupt.
There Is Only One Solution
In the end, the concerns of both kinds of critics can only be addressed by stepping up the quality of science, even at the cost of reducing its quantity. This is especially important in an era of tightened public science funding and reduced pharmaceutical R&D budgets. Nothing would be lost and much would be gained if second- and third-rate scientists producing irreproducible research had to seek a living elsewhere.
Stepping up the quality of science requires a dramatic change in the haphazard way in which Big Science deals with accountability. Too many peer reviewers are ready to give their colleagues a pass, aware that they must pass through the same filter to publish. Too many excuses are made about the “complexity” of modern experiments that cause them to fail when independent third parties attempt to repeat them. If it’s not repeatable, it’s not science!
Yes, experiments really are getting more complex, generating enormous quantities of data. But these data are only as good as the last calibration of the myriad pieces of equipment used to collect them, the quality controls on incoming materials, and the rigorous tracking and reporting of both successful and failed experiments to allow for root cause analysis. Comprehensive data logging should be an absolute requirement for anyone who calls himself a scientist, reducing the practice of publishing data from the one experiment in 10 that “succeeds.” If regulators are going to force pharmaceutical companies to publish data from failed trials, shouldn’t government-funded academic scientists face the same standard?
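The logging discipline described above can be sketched in a few lines of code. This is a minimal illustration only; the field names and record structure are hypothetical, not any actual laboratory standard. The point is that every run is recorded with its calibration and materials context, so the overall success rate is visible rather than just the one run that “succeeds”:

```python
import datetime

def log_run(log, outcome, results, calibration, materials_lot):
    """Record every experimental run, successful or failed, with its context."""
    log.append({
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "outcome": outcome,              # "success" or "failure"
        "results": results,              # measured values from this run
        "calibration": calibration,      # last calibration date per instrument
        "materials_lot": materials_lot,  # incoming-materials QC reference
    })
    return log

lab_log = []
log_run(lab_log, "failure", {"yield": 0.02}, {"spectrometer": "2013-01-05"}, "LOT-0042")
log_run(lab_log, "success", {"yield": 0.71}, {"spectrometer": "2013-01-10"}, "LOT-0043")

# With failures logged alongside successes, selection bias becomes measurable:
success_rate = sum(r["outcome"] == "success" for r in lab_log) / len(lab_log)
```

A complete log like this is exactly what makes root cause analysis possible: a replicator can ask whether the “successful” run coincided with a fresh calibration or a new materials lot.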
The semiconductor industry long ago solved the experimental replication problem, achieving a level of sophistication that puts the life sciences industry to shame. Solid-state physics and chemistry are no less exacting than biology. The tools and experimental techniques are no less complex. Upgrading modern semiconductor plants from one node to the next requires an exquisite degree of control, monitoring, documentation, and analysis. Chipmakers solved these problems because getting it wrong costs them money. What is the cost to a life scientist of a bogus publication if it builds his CV? What is the cost to a pharmaceutical company if it helps them sell yet another ineffective medication into a market full of ineffective medications?
In the life science world, and particularly in the academic community, many critical tasks are left in the hands of graduate students working for PIs who couldn’t operate, much less calibrate, the equipment used in their labs. Moral hazard takes its toll, as many of these poorly paid lab slaves with indeterminate graduation dates eventually reach the point where they will do anything to please their masters as that is the only ticket to emancipation.
Finally, even when modern life science is done right and the data and meta-data are properly collected—not just the experimental results but the calibration runs, the incoming materials inspection, and all the controls—the results are usually reported, reviewed, and disseminated only after being reduced to splotches of ink on sliced trees. This is absurd. In the age of the internet, scientific studies should be published in online-accessible rich data formats. This would allow both peer reviewers and would-be replicators to dive deeply into all of the data, spot checking results, confirming controls and calibrations, and performing comprehensive meta-analyses across related data sets produced by different laboratories.
New electronic peer-review journals such as GigaScience and eLife point the way. Of course, these startup journals don’t confer prestige on their authors, which is the primary goal of most scientists. But rich data publishing must eventually become the norm if there is any hope of exposing both the fraudulent and the incompetent. The question is, will this happen soon enough to stop the credibility erosion that is undermining Big Science and threatening the funding that faith in science buys?
# # #
The Skeptical Outsider is a contributed column by Bill Frezza. Frezza is a fellow at the Competitive Enterprise Institute and a Boston-based venture capitalist. Bill's collected columns, TV, and radio interviews can be found here. If you would like to have his columns delivered to you by email, click here or follow him on Twitter @BillFrezza. The views in this column are those of the author and do not necessarily reflect the opinion of Bio-IT World.