
Observations on Quantitative Modeling in Defense and Intelligence Analysis

Over the last couple of weeks I had the opportunity to participate in two conferences that focused on the role of formal modeling in intelligence and defense analysis.  The preparation for these events kept me away from the blog, and I’m hoping to have a chance to write more as the majority of my time and attention returns to my dissertation for the next several months.

The conferences were quite informative, both as a platform to articulate many of the arguments developed earlier on this blog (specifically in posts on using models and empirical limits) and as a chance to talk with others working on similar problems, each bringing their own unique experiences.  From a methodological perspective, it was quite interesting to see how organizational matters affect the selection and use of analytic tools and approaches.

It is important to keep in mind that the intelligence community has always struggled with how independent it should be from the policy-makers it supports, making the so-called producer/consumer relationship among the most difficult, challenging, and interesting aspects of intelligence studies.  By comparison, analyses performed in-house within policy-minded organizations, for example by DOD staff, are conducted by people who work directly for decision-makers (even if they reside in different sub-organizations or components of the same institution).  The result is that formal methods, particularly those that rely on what are normally regarded as “scientific” approaches, have been received differently by these communities.

Based on the discussions that I observed, I noticed an interesting paradox.  The normal assumption is that analysts who work directly for policy-makers (or rather policy-making organizations) would feel compelled to produce studies whose findings are consistent with policy-makers’ desires.  Indeed, this has been a common critique of the analysis performed within the DOD regarding Iraq’s WMD capabilities and ties to Al Qaeda during the run-up to the 2003 Iraq War (for examples see Betts, Jervis, Rovner, and Pillar).  However, a slightly different story emerged during the conferences.  Because in-house analysts are assumed to be working towards the same goals as their bosses, they are much freer to pursue an “objective” or “scientific” approach to studying problems.  While such terms are loaded, they largely align analytic methods with tools and processes familiar to the academic community, particularly with respect to the use of formal models and inductive methods that are predominantly data-driven (and often falsely perceived to be free of theoretical baggage or assumptions).  While in-house analysts acknowledge that a model should not trump human expertise, there is a greater likelihood that a formal model, as an independent artifact, will be afforded a voice and representation in the analytic production process, even when its assumptions may be regarded as foreign or counterintuitive by the policy-makers they serve.

By comparison, the independent standing of intelligence analysts means that they are more sensitive to producing assessments that are deemed relevant by their consumers, and this search for relevance weighs heavily on the selection of analytic methods and approaches.  Because intelligence analysts are employed independently of the policy-makers they support, they are essentially outsiders, and their commitment to the achievement of policy-makers’ goals is not always assumed by consumers.  Thus, intelligence analysts often go to great pains to start any analysis from the point of view of their consumers in an effort to simultaneously establish the relevance of their analysis and the bona fides of their personal and organizational intentions.  After these are established, analytic methods and frames may expand to incorporate a wider range of perspectives, but starting from assumptions or methods that are alien and counterintuitive to their consumers may deny them participation in the policy-making process, rendering them ineffective and irrelevant.

The fact that consumers are free to disregard intelligence assessments produces a paradoxical result: because there is little institutional or organizational demand for their work to be considered, independent intelligence analysts show greater concern for relevance.  Formal methods therefore have a difficult time growing roots in the intelligence community when compared with their policy-analysis cousins, because analysis must start from the perspective and interests of consumers, which may be riddled with internal contradictions, bias, wishful assumptions, and strongly held philosophical or ideological beliefs and unspoken goals (including domestic political objectives that are out of bounds for analysts to address).  Instead, intelligence analysts seemed to start their analysis from a subjective orientation, first and foremost focused on revealing the implications of particular assumptions held by (or believed to be held by) consumers, and eventually expanding to incorporate an increasingly broad range of assumptions and perspectives once they have demonstrated that they are operating in good faith, in order to produce a more comprehensive assessment.

From my perspective, it became clear that there are a series of “difficult conversations” that must be held amongst analysts, managers, and policy-makers regarding many of the meta-level aspects of analysis and decision-making.  For example, terms like “scientific” and “objective” are loaded and can be wielded as weapons, as can “validation” and “prediction.”  What seems clear is that there is a need to discuss precisely what it means to build a model, whether deductively or inductively, and what it means to use a formal model in analytic processes.  There is often an intellectually lazy assumption lurking in the background of discussions that implies models, as artifacts of data and theory, are somehow objective, while human analysts and decision-makers are biased.  Yet models are really products of human beliefs and expertise, externalized and frozen in equations, computer programs, graphics, tables, physical replicas, etc.  Thus a model contains assumptions about the accuracy and relevance of the data used in it, the quality and veracity of a selected behavioral rule, its functional form, and so on.  Indeed, while the conferences were quite interested in quantitative analysis, it seemed to me that quantitative tools and methods should be used to provide context, highlighting aspects of problems where important and relevant insights with qualitative impact can be seen.  This means moving away from the subtleties of optimization (min/max/equilibrium outcomes), which depend on achieving the most desirable result from the perspective of a given model, and extending the search across alternative models, with an emphasis on identifying the tradeoff space associated with alternative frameworks or analytic contexts and accepting that no single frame dominates across organizations, the interagency, or even coalitions, alliances, and the international community.
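To make the distinction concrete, here is a minimal sketch in Python of what that shift looks like in practice.  The model names, options, and scores are invented purely for illustration; the point is the change in the question being asked, from “which option is optimal under this model?” to “how do the options trade off across several alternative framings?”

```python
# Toy illustration: rather than picking the option that is optimal under a
# single model, evaluate every option under several alternative models (each
# embodying different assumptions) and examine the spread of outcomes.
# All models and numbers below are invented for illustration only.

def model_deterrence(option):
    # Assumes the adversary responds rationally to visible capability.
    return {"posture_a": 0.7, "posture_b": 0.5, "posture_c": 0.6}[option]

def model_escalation(option):
    # Assumes visible capability provokes counter-mobilization.
    return {"posture_a": 0.3, "posture_b": 0.6, "posture_c": 0.5}[option]

def model_alliance(option):
    # Assumes allied cohesion is the dominant driver of outcomes.
    return {"posture_a": 0.5, "posture_b": 0.4, "posture_c": 0.65}[option]

MODELS = {
    "deterrence": model_deterrence,
    "escalation": model_escalation,
    "alliance": model_alliance,
}
OPTIONS = ["posture_a", "posture_b", "posture_c"]

# Single-model optimization: the "best" option according to one frame only.
best_under_deterrence = max(OPTIONS, key=model_deterrence)
print("Optimal under the deterrence model alone:", best_under_deterrence)

# Multi-model view: for each option, show the range of outcomes across frames,
# making the tradeoff space (and the sensitivity to framing) explicit.
for option in OPTIONS:
    scores = {name: fn(option) for name, fn in MODELS.items()}
    print(option, scores, "worst-case:", min(scores.values()))
```

In this toy example the option that looks best under one frame is not the one that holds up best across frames, which is exactly the kind of qualitative insight the multi-model view is meant to surface.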

Although calls for robustness have been made in the past, it is increasingly clear to practitioners that many of the assumptions that militate against its adoption are deeply embedded in the culture of operations research and systems analysis; that culture’s heavy emphasis on microeconomic, rational decision-making, its conversion of uncertainty into risk, and its peculiar view and treatment of history and science may be working against the current needs of the community.  I found references to Karl Popper and his argument that all science is hypothesis testing amusing, given that Popper himself eventually backed away from this criterion, realizing that much of science did not, and could not, meet such a strict standard.  Indeed, I suspect that if all analysis were performed according to the dominant interpretation of science discussed by part of the community, then no work could be done; it would simply be impossible to proceed in strict accordance with the ideal standard.  As I read more of the philosophy of science, I’m surprised by how much Popper has receded in influence, particularly in the social sciences, in favor of more forgiving criteria or definitions of what it means to be “scientific.”

Increasingly, national security issues can’t be constrained by assumptions that were acceptable in the past, and as those assumptions are challenged, the tools and techniques built on them are proving more difficult to apply absent significant caveats.  None of this should doom quantitative analysis or formal modeling.  Rather, it should surround such analyses with accompanying considerations of how problems are structured or framed, in order to establish the epistemological limits of what can be concluded from a single formal model, an ensemble of many models, or a complete collection of mixed-methods studies that collectively provide insight.  Given the complexity of the problems the national security community faces, the many different ways of reducing or simplifying those problems analytically, and the diversity of decision-makers’ initial assumptions and expectations, which shade their receptivity to different approaches and to the sequence in which new, often challenging, perspectives are presented to them, I think we’re unlikely to see studies or analysis from the policy or intelligence community that can firmly rely on a single approach.

 
