Failing to correct bad scientific data and politicizing it has made the Covid-19 crisis worse
By John Ioannidis
A person suddenly collapses on the floor — what do you do? Given the choice between acting and not acting, surely every reasonable person will say we need to act without hesitation.
But how? We first quickly collect the available data: we check whether the collapsed person has a pulse, whether he’s breathing, whether he responds to verbal cues. If not, we suspect cardiac arrest and immediately start CPR — but we still try to collect new and better data as we go along. If a blood pressure monitor becomes available and we find the patient’s blood pressure is fine and his pulse is regular — though we didn’t even feel one at first — everything changes; the situation is not as dire as we had thought. Perhaps he begins talking, though his breathing is still labored: our chest compressions have broken his ribs. If we don’t stop CPR, a broken rib may pierce his lungs, causing a tension pneumothorax — a life-threatening condition that must be treated right away. Despite our best intentions, we can kill the patient if we do not change our course of action.
The first question in emergencies, this example teaches, is not whether to act. It is rather how to act to ensure our actions do more good than harm. Populations are not individual patients, of course, but the lesson is important for thinking about the debate over the right response to the Covid-19 crisis. In his recent essay in these pages, the philosopher of medicine Jonathan Fuller sheds light on this debate by describing two opposing traditions in epidemiology: one, public health epidemiology, that relies on modeling and a diversity of data, and another, clinical epidemiology, that prizes high-quality evidence from randomized studies. In an equally thoughtful response, the epidemiologist Marc Lipsitch elaborates on what that opposition gets wrong.
Both Fuller and Lipsitch have eloquently expressed the simultaneously competing and coexisting worlds of models and evidence. I hope we can all agree that we need both. Science is difficult; we cannot afford to look away from useful data, disciplines, approaches, and methods. I love science because most of the time I feel profoundly ignorant, in need of continuous education; I am grateful to all my colleagues — no matter their discipline — who help reduce my ignorance. At the same time, we should study the strengths, weaknesses, and complementarity of various approaches. The main challenge in epidemiology, in particular, is how to translate what we know — and what we know about what we know — into the best course of action.
As Lipsitch wisely suggests, infectious disease epidemiology and clinical epidemiology are not necessarily two opposing stereotypes; almost always they are intermingled. And as Fuller acknowledges in passing, they can coexist in the same research agenda, in the same institution, even in the same person. Most scientists cannot be slotted into one bin or the other; they struggle to make their brains work across different paradigms. Both essays classify me under the evidence-based medicine (EBM) umbrella, but while it is true that I have written papers with “evidence-based medicine” in the title, I have no official degree in EBM. When I trained in the field with the late Tom Chalmers and Joseph Lau, there were no degrees of that sort. The term “evidence-based medicine” itself was not coined until 1992, by clinical epidemiologists at McMaster University in Canada. Even now, almost thirty years later, most scientists and physicians in most places still have no clue what EBM really is. My official fellowship training, in fact, was in infectious diseases.
Regardless of the difficulty of classifying scientists in bins, however, science does work eventually, as researchers share knowledge and correct misconceptions. And even if we take the stereotypes of the two traditions for granted, their features ought to be reversed in one respect. In a certain sense, it is clinical epidemiology that tends to be more pragmatic, and thus more action-oriented, than its foil. Traditional epidemiology — including research programs on mechanisms of disease — can be far removed from questions of action, for good reason: basic science has great value in itself for learning about nature and modeling its mysteries. By contrast, EBM, in particular, argues for less theory and more real-world results, less speculation and more focus on the outcomes that matter most.
To put it crudely but sharply, the EBM sensibility is that theories don’t count for much when they don’t save lives. And saving lives turns on decisions about how to act. Practitioners of EBM know full well that failing to act has consequences; a central lesson the field teaches is that you’d better choose wisely what you do — and what you don’t.
What does all this mean in the case of Covid-19? On March 3 the World Health Organization (WHO) director-general introduced a media briefing with these distressing words: “Globally, about 3.4 percent of reported COVID-19 cases have died. By comparison, seasonal flu generally kills far fewer than 1 percent of those infected.”
Others spoke of a very high reproduction number, of almost no asymptomatic infections, and of the high likelihood that the virus would infect most of the global population. Many, including the team led by Neil Ferguson at Imperial College London, drew comparisons to the 1918 pandemic, which cost at least 50 million lives. These claims had a dramatic and arguably dangerous impact on public perception. Moreover, had these claims been true, any EBM practitioner would have called for swift and thoroughgoing lockdown measures. EBM is unequivocal in such situations: if the risk is 50 million deaths, shutting down the world for a month or two is nothing.
But it was my infectious disease side that had questions. A virus that spreads like wildfire, killing one out of thirty and infecting almost everyone in the absence of a vaccine, should have killed far more people in China and should have spread widely worldwide, perhaps with millions of fatalities, by mid-March. Hence, as I wrote in an op-ed in Stat News, I pleaded that we obtain better data as quickly as possible to inform our actions. I think lockdown was justified as an initial response, given what little we knew about this new virus, but I also think we needed better data to decide on next steps. And given what we know now, it is reasonable to consider alternatives to population-wide lockdown, even as we continue preventive hygiene measures, exercise local infection controls, focus on protecting those most at risk, and support health care systems in caring for patients who are sick.
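Taking the early claims at face value, a back-of-the-envelope calculation shows the scale of catastrophe they implied. As a minimal sketch, assuming the classic SIR final-size relation, an illustrative basic reproduction number of 2.5, and a world population of 7.8 billion (both assumed values, not figures from this essay), an unchecked epidemic infects a final fraction $z$ of the population satisfying

\[ z = 1 - e^{-R_0 z}, \qquad z \approx 0.89 \ \text{for}\ R_0 = 2.5, \]

and applying the claimed fatality rate of one in thirty to those infections gives

\[ 7.8 \times 10^9 \times 0.89 \times 0.034 \approx 2.4 \times 10^8, \]

that is, over 200 million deaths worldwide. An epidemic on that trajectory should have left a far larger mark on mortality data by mid-March.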
Two and a half months after Covid-19 was officially declared a pandemic, we lament a great and acute loss of life, especially in places like Lombardy and New York. The global death toll since the outbreak was detected in Wuhan in December 2019 is estimated at 346,000 as of this writing. But because our interventions can harm as well as help, it is not unreasonable to put this number in context.
We now know that the death toll is not comparable to that of the 1918 pandemic. We also now know that the virus has spread widely, but for the vast majority of people it is far less lethal than we thought: it kills far fewer than 3.4 percent of those who develop symptoms. Late last week the CDC adopted an estimated death rate of 0.4 percent for those who develop symptoms and acknowledged that there are many other infected people who develop no symptoms at all. These estimates will continue to improve as time goes on, but it is clear that the numbers are much lower than first feared. The exact infection fatality rate varies across populations and settings, but it appears that in most situations outside nursing homes and hospitals, it tends to be very low.
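The difference between a fatality rate among those with symptoms and a fatality rate among all infections is worth making concrete. As a minimal worked example, assuming (purely for illustration) that 35 percent of infections produce no symptoms, the CDC’s 0.4 percent symptomatic death rate translates into an infection fatality rate of

\[ 0.004 \times (1 - 0.35) \approx 0.0026, \]

or roughly 0.26 percent, an order of magnitude below the 3.4 percent figure quoted for reported cases.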
We have learned that Covid-19 is yet another disease that unfortunately and disproportionately affects the elderly, the disadvantaged, and those with multiple underlying medical conditions. Besides massacring nursing homes and threatening many vulnerable patients and providers in hospitals, it has painfully emerged as yet another disease of inequality. The poor, the homeless, people in prisons, and low-wage workers in meat-processing plants and other essential jobs are among the hardest hit, while privileged people like me are videoconferencing in safety. That is a tragic disparity.
At the same time, we should not look away from the real harms of the most drastic of our interventions, which also disproportionately affect the disadvantaged. We know that prolonged lockdown of the entire population has delayed cancer treatments and has led people with serious conditions such as heart attacks to avoid the hospital. It is leading hospital systems to furlough and lay off personnel, it is devastating mental health, it is increasing domestic violence and child abuse, and it has added at least 36.5 million people to the ranks of the unemployed in the United States alone. Many of these people will lose health insurance, putting them at further risk of declining health and economic distress.
Prolonged unemployment is estimated to lead to an extra 75,000 deaths of despair in the United States alone over the coming decade. At a global level, disruption has increased the number of people at risk of starvation to more than a billion; the suspension of mass vaccination campaigns threatens a resurgence of infectious diseases that kill children; modeling suggests an excess of 1.4 million deaths from tuberculosis by 2025; and the death toll from malaria in 2020 is expected to double compared with 2018. I hope these modeling predictions turn out to be as wrong as several Covid-19 modeling predictions have been, but they may not. All of these impacts matter, too. Policymakers must consider the harms of restrictive policies, not just their benefits.
Good science can come from public health epidemiology, from the study of infectious diseases, from evidence-based medicine, from clinical epidemiology, or from any discipline. I agree with Lipsitch that we need to respect the totality of the evidence — including, I would stress, evidence about the harms of prolonged lockdown — rather than rely too narrowly on the claims of any one disciplinary specialty. At the beginning, in the absence of high-quality data, we can do what seems most reasonable, following the precautionary principle and using common sense.
But beyond this point, failing to correct our ignorance and adapt our actions as quickly as possible is not good science. Nor is politicizing scientific disagreement or looking away from the undeniable harms of our well-intentioned actions.
___________________
John Ioannidis is an infectious disease researcher at Stanford University.
Credit: Boston Review