Intelligence Lessons From COVID: Being "Right" Is Not Enough

Published: December 8, 2020

By J. Paul Pope

On college campuses, developing new knowledge and insights is valuable in and of itself. For intelligence officers (and for policy analysts, I would argue), value is achieved only when facts and insights are persuasively communicated to policy makers, who then choose to act upon them. The faltering response to the pandemic at every level of decision making (national, state, local, and individual citizens) represents an intelligence failure of the first order, even if the facts reported to the government were generally correct. The consequences of this failure are so grave that they underline how much students (and faculty) at public policy schools stand to benefit from a better understanding of the role of intelligence, and they offer important lessons that will help students avoid making the same mistakes when they become policy makers themselves.

Epidemiologists will use their models for years after the pandemic is over to examine how the panoply of national and state responses altered outcomes. The dependent variable in their analyses will be the number of Americans who died. Put another way, they will prove that policy matters. By extension, then, analytic support to policy matters. The inescapable truth with regard to the pandemic is that while it was inevitable that the virus would reach our shores and cause some loss of American lives, it was within our power to alter the equation. Had we acted upon the available information and analysis earlier, more effectively, with greater discipline, and with more unity, the death toll would have been a fraction of what it is; eight months into the pandemic, the virus still shows no signs of abating. The biggest variable in determining where we end up along that spectrum will not be the availability of information but how individual decision makers use it.

Was COVID an "Intel Failure"?

Surprise, when it happens to a government, is likely to be a complicated, diffuse, bureaucratic thing. It includes…the alarm that fails to work, but also the alarm that has gone off so often it has been disconnected. It includes the unalert watchman, but also the one who knows he'll be chewed out by his superior if he gets higher authority out of bed...It includes, in addition, the inability of individual human beings to rise to the occasion until they are sure it is the occasion–which is usually too late….Finally, as at Pearl Harbor, surprise may include some measure of genuine novelty introduced by the enemy, and possibly some sheer bad luck.

—Thomas Schelling, in his foreword to Pearl Harbor: Warning and Decision by Roberta Wohlstetter[1]

Experienced instructors at the "Farm" (where the CIA trains its officers in clandestine operations) sometimes say that their most common answers to students' questions are either "You're smart, figure it out" or "It depends." "It depends" sounds like a dodge but is often, in fact, the best answer. Cookie-cutter approaches don't work in complex and nonlinear contexts like clandestine human operations or pandemic responses. But, as we told our instructors, this makes it important to explain clearly what "it depends" depends on and to articulate principles for grappling with the task or situation in question.

In the last seven months, I have been asked whether the U.S. response to the pandemic represents an intelligence failure, a policy failure, or both. The answer is clear: "it depends." Specifically, it depends on how one envisions the role of intelligence. The traditional, almost universally used "intelligence cycle" model visualizes intelligence as a continuous loop of direction/guidance, collection, analysis, and dissemination.

Figure 1: Traditional Intelligence Cycle

Using this near-universal visualization, the intelligence community, the CDC, and others can make a strong case that the failures in the response to the pandemic were not with the intelligence itself. They can point to their repeated and clear warnings regarding the need to prepare for a pandemic originating in Asia, as well as their explicit tactical warnings about COVID-19. Under the traditional model of intelligence, their collection and analysis were "right," and the product was properly "disseminated." The failures, accordingly, were failures of policy and policy execution. This response, while perhaps accurate, offers few lessons about where the breakdowns actually occurred and little practical advice about how to avoid similar failures in the future.

What Intelligence Can and Cannot Do

Every class on intelligence I've taught at UT includes this objective in its course description: "help students understand what intelligence can and cannot do." In each of these classes, I pose this question early on: "What memories haunt retired intelligence officers the most?" In each class (so far), some student will hesitantly ask, "Not being right?" or perhaps "Getting it wrong?"

At the end of the course, we return to the question. In every class so far, one or more students will confidently answer: "Failing to warn decision makers" or "Failing to inform policy." By then, they understand that one of the few things intelligence officers know with certainty is that they cannot know the future. As their response reveals, the students also understand that the best intelligence can do is reduce uncertainty for decision makers. Despite this self-awareness about their lack of clairvoyance, good intelligence officers are confident and optimistic not only that they can reduce uncertainty, but that this reduction contributes significantly to national security in the form of better and more timely decisions, whether those decisions relate to long-term strategy or crisis management.

Returning to models of intelligence, then, it turns out that we get different answers and more useful insights when assessing intelligence successes or failures if we use a model that 1) is based on performance in support of the policymaking process rather than the intelligence process, and 2) includes policymakers. In this model, policymakers are part of the intelligence enterprise, with roles that go beyond levying requirements. Beyond the inclusion of policymakers, the biggest difference is that this model bakes in a principle: intelligence can fail because intelligence officers got it wrong, but it cannot succeed merely because they got it right. Using this model, in my view, we must conclude that in the case of COVID-19, our national warning system failed.

Figure 2: Intelligence Performance Cycle

This model is also depicted as a cycle that includes collection and analysis, but it emphasizes that effective performance by intelligence requires:

  • Actively Asking and Anticipating. Policymakers and intelligence officers must aggressively seek to ask the right questions, and intelligence officers must also anticipate questions not yet asked by policymakers. In a large undergraduate class in late January or early February of this year, I used an exercise on the COVID virus in Asia to illustrate this point. An imaginary policymaker asked the class whether the virus would come to the U.S. and to identify the most important questions to answer. After a half-hour exercise, the answers produced were: "Yes, definitely," "How exactly does it spread?" and "How lethal is it?"
  • Persuading and Warning vs. "Dissemination." The word "dissemination" is much too passive to describe the importance and, sometimes, the urgency of getting intelligence to those who can act upon it and actively assessing whether they found it convincing. This is especially important when the aim is to warn, and even more so when the warning comes as an answer to questions they have not yet asked.
  • Hearing? Believing? Acting? This is the part of the performance cycle that differs most from the traditional intelligence cycle, because these steps are all expressed as questions about the attitudes and behaviors of policymakers. If they are too busy, too inert, too distracted, or too uninterested even to make time to read or hear the intelligence, no intelligence system can work. If they did read or hear it, to what extent did they find it persuasive? Did they engage and ask questions, for example? If they rejected it, was that for valid reasons, such as a lack of evidence, or because it was not consistent with their worldview or was politically inconvenient? Finally, to the extent that they were (even partially) persuaded, did they then take action in response?

We use this model because the interface between intelligence and policy is where most historical intelligence failures dwell. There are many explanatory pathologies to examine in that space. One of the most common, relevant to the pandemic response (and also to 9/11), occurs when a warning is heard and found persuasive but the policymakers' response is to ask to be "kept informed" or to "come back as soon as you have more information." This response may sometimes be defensible, given the price of taking action, but it is nevertheless procrastination and wishful thinking about the ability to eliminate uncertainty. It almost invariably results in losing the advantages inherent in having even a little more time to prepare.

There are many reasons why decision makers, whether they be presidents or citizens, choose to ignore or reject intelligence. The varying responses to the pandemic demonstrate that intelligence can fail because intelligence officers got it wrong, but it cannot succeed merely because they got it right.

It would be a mistake to use one model to parse blame for our poor response to the pandemic, given the complex interplay of national, state, local, and individual actors who make up our complicated political system, and the fact that we are still in the midst of the crisis. Such a rush to historical judgment would almost certainly result in more needless loss of life because it would further politicize the response. There will be time for that later, but we need to reflect on the differences between the models above now, for the same reason: we are still in the pandemic and are making life-and-death decisions every day. Intelligence and expert analysis can add value in this context only if policy makers use them.

 

J. Paul Pope is a Professor of Practice at the LBJ School of Public Affairs. He retired in 2016 after 46 years of service as a soldier and a senior officer in the Central Intelligence Agency.


[1] Thomas C. Schelling, foreword to Pearl Harbor: Warning and Decision, by Roberta Wohlstetter (Stanford, CA: Stanford University Press, 1962).

 
