This post is a part of a series on COVID-19 and the Coronavirus Pandemic
Expert knowledge is always a challenge for democracy, which values the wisdom of crowds and commoners. Ideally, experts, including scientists, provide the facts and the voice of the people weighs in on values. In real life, particularly now, it’s become far more complicated, demonstrated most clearly in the dilemmas of leadership in confronting COVID-19.
Consider Scott Atlas, a physician with a medical degree from the University of Chicago, selected to serve on the White House’s Coronavirus Task Force. More significantly, much more significantly, Atlas echoed statements from the White House that many Americans were overreacting to COVID-19 and that policy should weigh the risks of economic and social disruption at least as heavily as the protection of public health through restrictions. He was also dubious about the proposition that face masks could slow the spread of the disease. He apparently auditioned for the job by appearing on Fox News, promoting the return of college football, and arguing against measures that limited social contact among the non-elderly, on the grounds that such restrictions would slow the development of herd immunity.
Atlas provided the Task Force with an ostensibly credentialed spokesman to serve as an alternative and a counterpoint to the advice offered by the experts already serving on the Task Force: Deborah Birx, an immunologist, and Anthony Fauci, a specialist in infectious disease. Atlas, however, doesn’t bring comparable expertise; he is a neuroradiologist who developed a reputation reading magnetic resonance imaging (MRI) scans of the brain. If you think that reading such scans offers little insight into the spread of an infectious virus, you would be in line with the expert consensus. But over the past few years, Atlas had moved from the practice of medicine to politics anyway, finding a place at the Hoover Institution, where he wrote books and articles arguing for repealing the Affordable Care Act and gradually privatizing Medicare and Medicaid. Over the past decade or so, he’s done little in the way of treating patients or reading scans, and a lot of policy advocacy.
Certainly, Atlas’s explicit advocacy, in conjunction with his medical degree, made him an attractive advisor to the Trump administration. Minimally, Atlas could provide a voice for less restrictive policies backed by a medical degree. There’s every reason to believe that his policy stance, more than his professional expertise—reading brain scans—is what brought him to the task force. After all, don’t we always like experts—and celebrities for that matter—who agree with us?
The problem of screening expertise
The coronacrisis provides a disturbingly clear illustration of increased distrust of expertise among Americans, and will likely exacerbate a growing polarization not just on political stance, but on the information used to justify—rather than guide—decisions on matters of policy. This polarization and distrust give us reason to think through the criteria for expertise, and the dangers of using stance as a filter that excludes experts who may tell us what we don’t want to hear.
Excuse me for reporting a personal experience that illustrates, I think, both the problem and steps toward a solution. Not long ago, I was distressed when my dentist gave me bad news about a back tooth. I couldn’t see what he saw on the X-ray, much less in the back of my mouth, and his judgment suggested more time in his chair for me, at some cost in money, inconvenience, and (probably) pain. I could have gone home and asked one of my children or colleagues to take a look with a flashlight, sure that they wouldn’t recommend anything as scary as what the dentist suggested, but I trusted that my dentist had some expertise, and a commitment to my well-being. I could have cited as evidence the degrees on his wall or the warnings he’d given me for years prior, but basically, I was willing to accept his news—even though it was unpleasant. I knew he could see things that I couldn’t.
Everyone has had a similar experience with something, trusting a mechanic’s advice about a fuel pump, a pediatrician’s vaccination guidance, or an accountant’s recommendation about tax payments. Everything isn’t intuitive, and specialized expertise matters. There are situations that demand sharp questions and second opinions, but in normal daily life, we trust a lot of the time. In order to function effectively, we sometimes need to be able to hear news that we would rather not, and to be prepared to act on it.
More and more, however, public life and governance work differently. On matters far beyond our own experience and expertise, people express strong opinions, citing ostensible experts to back them up: Do masks help reduce the spread of a virus? Is hydroxychloroquine an effective treatment for infection by a novel coronavirus? What share of the population needs to be exposed to the virus to establish herd immunity? Is herd immunity possible?
The practice of science promises a process for finding answers to such questions, although certainly not the values we bring to decisions about how to use those answers. The debate about values is properly a democratic concern, but one that functions well only when informed by a range of credible factual and scientific judgments. When political advocates and activists filter out information they find inconvenient, the utility and civility of public life suffer.
So, who do we trust for expert judgment? The criteria for expertise are achingly obvious and conventional: specialized training in a relevant area, experience working on similar problems, and engagement in a professional community that shares a set of methods for adjudicating disputes. In truth, the COVID-19 case provides a particularly garish example of the costs of ignoring the relevant experts—professionals in public health, epidemiology, and infectious disease—who generally agree. Over time, the scientific process usually pushes the expert community toward greater consensus, as we’ve seen on cigarette smoking, vaccines, climate change, and the shape and position of the Earth. Dissenters from the heliocentric solar system can find authorities, but not ones who display the conventional criteria of relevant expertise.
Of course, expert consensus on contentious issues doesn’t always come quickly or easily, and competing movements find and promote their own experts. The nuclear physicists who designed the first atomic bombs came to endorse different positions on the development of the nuclear arsenal and the utility of arms control, differing on values, but not so much within their areas of expertise. The result, however, was that advocates of vastly different policy positions could deploy their own credible experts.
But the rest of us need to be suspicious of promoted experts whose audience and status come from stance. We need to be wary about picking among competing scientific judgments on the basis of a finding, rather than a process. For example, credible scientific studies offer estimates of the share of the population that must be vaccinated to achieve herd immunity that range from less than 20 percent to more than 70 percent. The absolute worst thing we can do is to pick the number that makes us feel most comfortable. The best approach, though often beyond common capacity, is to evaluate the methods deployed to reach those conclusions. More realistically, we have to buy into a scientific process in which answers to questions are pursued and the truth comes a little closer over time. And we need to respect experts who are willing to offer unpleasant or inconvenient news.
David S. Meyer is an author and professor of Sociology and Political Science at the University of California, Irvine.
Support from UCS members makes work like this possible. Will you join us? Help UCS advance independent science for a healthy environment and a safer world.