
Comments on Richard Danzig’s Driving in the Dark


Last month the Center for a New American Security (CNAS) published an excellent report by Richard Danzig called Driving in the Dark: Ten Propositions About Prediction and National Security.  While much of what the report says has been discussed elsewhere, I believe it provides one of the most coherent, complete, and compact discussions of how to cope with inevitable failures of prediction in national security policy.  This posting discusses three interesting points that Danzig makes in the first half of his report; a follow-up posting will examine the second half.

First, a bit of context: Danzig’s primary emphasis is on the acquisition of military equipment, because it can take decades to bring new weapons systems into service and, once deployed, they may remain in use for decades more.  Indeed, he largely sees the Department of Defense (DOD) through the lens of long-term issues such as acquisition, the development of new weapons, personnel pipelines, basing, and budgeting.  He gives less emphasis to crisis management and response, although it is certainly implied that the department might spend less time managing crises if it gave more thought to its long-term planning.  His examples therefore focus largely on the acquisition system, and he highlights how policy-makers must generate and rely on predictions that span decades, forecasting the future strategic and operational requirements that define what new systems are needed and how they will be used.  This, of course, is the problem that has long distinguished Operations Research from Net Assessment, with their different intellectual commitments to optimality and precision on the one hand and robustness and adaptability on the other.  Importantly, Danzig’s concern is not with any particular model and its precision, but with the act of postulating a single future, or even a range of potential futures, and then planning against them.  Simply put, there is no a priori way of knowing whether the future, or set of futures, being planned for is the right one, yet decision-makers and organizations must commit resources, take action, and go about their business based on some vision of what is coming.

As noted above, Danzig’s report is divided into two parts: a descriptive or empirical section that lays out features of the decision-making process and the role and challenge of prediction, and a prescriptive or normative section that argues for new approaches to the design of the Pentagon’s decision-making apparatus.  Together they put forward ten key points.

Danzig’s empirical observations are relatively straightforward and should be familiar to anyone who has participated in the DOD’s bureaucracy or has studied national security.  These five observations are:

  • The propensity to make predictions—and to act on the basis of predictions—is inherently human.
  • Requirements for prediction will consistently exceed the ability to predict.
  • The propensity for prediction is especially deeply embedded at the highest levels of the DOD.
  • The unpredictability of long-term national security challenges is an immovable object.  It will repeatedly confound the irresistible forces that drive prediction.
  • Planning across a range of scenarios is good practice but will not prevent predictive failure. (p. 5)

These observations are important and set the context for Danzig’s normative recommendations.  Those normative points will be discussed in a follow-up posting.  In the meantime, three important aspects of Danzig’s first five points merit attention.

Danzig provides a necessary corrective to the dominant epistemological framework that has undergirded policymaking.  This is an approach that equates policy-making with the natural sciences and engineering, which rest upon predictability as a result of regularity: repeated instances or experiences form the basis for generalized, law-like propositions that simultaneously allow the prediction of a system’s behavior and the reliable manipulation of its components.  Danzig challenges this conceptual framing of defense and national security problems by noting that “both the experience of the… DOD… and the social science literature demonstrate that long-term predictions are consistently mistaken.” (p. 5)  By placing DOD’s problems in the camp of the social sciences, he also places national security problems in the same epistemological framework as other complex social problems, for which prediction has increasingly been seen as beyond the scope of the possible and where the criteria for assessing scientific merit differ from those of the natural sciences.

A second piece of Danzig’s argument is his reference to planning across multiple scenarios.  Such an approach has been argued to be the basis for creating robust strategies, and the reason should be fairly intuitive: betting the house on a single future coming to pass is likely to fail, while finding things to do that perform well in a variety of different cases means that one’s strategies will endure (a wonderful discussion and treatment of these strategic concepts, and a supporting methodology for implementing them, can be found in the work by Lempert, Popper, and Bankes, available from Amazon and RAND).  Danzig, however, alludes to another aspect of this problem and correctly notes that not all scenarios can be identified beforehand, and some may encourage actions that would be harmful in other cases.  As a result, planning against alternative futures must be supplemented by encouraging adaptation and change as new information becomes available.  Danzig thus sets up adaptation as a necessary complement to robustness in strategy and decision-making under uncertainty.
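To make the robustness intuition concrete, here is a minimal sketch in Python.  The strategies, scenarios, and payoff numbers are all invented for illustration and do not come from Danzig’s report or the RAND work; the sketch only shows the basic contrast between optimizing for a single predicted future and preferring a strategy whose worst shortfall across several futures is smallest (a minimax-regret criterion).

```python
# Toy illustration of the robustness intuition discussed above.  Rather than
# optimizing for a single predicted future, compare candidate strategies
# across several scenarios and prefer one that avoids disaster in all of them.
# Strategy names, scenario names, and payoffs are hypothetical.

# payoffs[strategy][scenario] = how well the strategy fares if that future occurs
payoffs = {
    "optimized_for_forecast": {"forecast_holds": 10, "rival_adapts": 1, "budget_shock": 2},
    "hedged_portfolio":       {"forecast_holds": 7,  "rival_adapts": 6, "budget_shock": 5},
    "status_quo":             {"forecast_holds": 4,  "rival_adapts": 3, "budget_shock": 4},
}

def max_regret(strategy: str) -> int:
    """Worst-case regret: how far the strategy falls short of the best
    available choice in each scenario, taken over all scenarios."""
    regrets = []
    for scenario in payoffs[strategy]:
        best = max(p[scenario] for p in payoffs.values())
        regrets.append(best - payoffs[strategy][scenario])
    return max(regrets)

# A pure optimizer bets on the forecast; a robustness criterion prefers the
# strategy with the smallest worst-case regret across all futures.
best_bet = max(payoffs, key=lambda s: payoffs[s]["forecast_holds"])
robust = min(payoffs, key=max_regret)
print(f"Best if the forecast holds: {best_bet}")   # optimized_for_forecast
print(f"Smallest worst-case regret: {robust}")     # hedged_portfolio
```

In this toy case the strategy optimized for the forecast is the clear winner if the forecast holds, but the hedged portfolio never does badly in any of the futures, which is the sense in which it is "robust."  Danzig’s point is that even this kind of analysis depends on having enumerated the right scenarios, which is why adaptation must complement robustness.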

The third point that Danzig raises is in his discussion of intelligence.  He notes that policy-makers have an unlimited appetite for information and competitive instincts that make them fear rivals may know something that they do not.  As a result, they push intelligence analysts and organizations toward failure simply by demanding more.  I found the following paragraphs particularly important:

Tell a national security adviser that another country is likely to develop a nuclear weapon, and – after all his or her questions have been answered about the basis of the prediction – he or she will want to know when, in what numbers, with what reliability, at what cost, with what ability to deploy them, to mount them on missiles, with what intent as to their use, etc.  It is no wonder that U.S. intelligence agencies are consistently regarded as failing.  Whatever their mixtures of strengths and weaknesses, they are always being pushed to go beyond the point of success.

Put another way, the surest prediction about a credible prediction is that it will induce a request for another prediction.  This tendency is intensified when, as is commonly the case, prediction is competitive.  If you can predict the price of a product but I can predict it faster or more precisely, I gain an economic advantage.  If I can better predict the success of troop movements over difficult terrain, then I gain a military advantage.  As a result, in competitive situations, my fears of your predictive power will drive me to demand more prediction regardless of my predictive power.  Moreover, your recognition of my predictive power will lead you to take steps to impair my predictive ability.

Driving in the Dark, pp. 11-12.

The notion that success breeds failure in the prediction business is an important and often unrecognized aspect of intelligence analysis.  There is an extensive and excellent literature on the tensions between the intelligence community and policy-makers (two very recent examinations are Joshua Rovner’s Fixing the Facts and Paul Pillar’s Intelligence and U.S. Foreign Policy), tensions that often result in intelligence being unused, ignored, challenged, or politicized.  But the way intelligence may be uncritically accepted and then pushed beyond what is knowable, while maintaining an appearance of certainty and confidence, may be another path to intelligence failure, one induced by positive feedback from consumers.  This reminds me of a line a co-worker of mine once used about a program that had become so popular that senior managers were going to “love it to death”: the constant attention and desire to promote a successful effort would escalate expectations and demands beyond the point of sustainability.

Obviously, Danzig’s argument is more complex and complete than presented here, but the three points discussed above were those that captured my attention during the first half of his report.  More to follow.
