Background: Discussion with John Hanley (JH), Director of Strategy for the Office of the Director of National Intelligence (ODNI). Prior to joining ODNI, JH served as an officer in the US Navy and held senior positions in the DOD. I asked John broad questions that focused primarily on three topical areas – the differences between analysis and analytic communities within the DOD and Intelligence Community (IC) based on his experiences, his perspectives on the current state of the IC given his role and position within the ODNI, and general comparisons between intelligence analysis and academic scholarship. Note: since the time of this interview on February 9, 2012, Dr. Hanley has retired from the ODNI.
Discussion: I initially asked JH about the differences and similarities between analysis as performed within the DOD/military and the IC, given his experiences in both environments. He noted that his background as a submariner was instructive in understanding the development of the DOD’s analytic capabilities and the history of Operations Research (OR), which constituted an effort to apply scientific methods (and practicing scientists) to the problems of Anti-Submarine Warfare (ASW). He noted that the ASW community went from having little analytic capability to becoming one of the premier users of OR methods after a dedicated and sustained organizational commitment to the problems posed by WWII and the Cold War.
When compared with the IC, particularly the CIA, there was no comparable story with respect to OR. JH attributed this to differences in analytic cultures and noted that the DOD (particularly the analytic and managerial organizations that were legacies of McNamara) was largely a numeric culture while the IC was predominantly a literary one. Thus, investments in analytic tools and methods aligned with quantitative and formal modeling, which often found fertile ground in the DOD, were not necessarily long-lived within the IC. JH was quick to note that this was not necessarily because IC analysts lacked the ability to work with such approaches (indeed, he noted that analysts often moved between communities), but because the organizations themselves emphasized different questions and served different customers. Nevertheless, JH believed that the IC could benefit from using more formal analytic methods when problems warranted it. However, JH also noted that the DOD had often become overly reliant on formal methods in several cases, particularly large combat simulations, and could be overconfident in the models and prone to being misled by modeling results if not sufficiently familiar with their inner workings and embedded assumptions.
I asked whether the differences between the DOD and IC affected the roles and responsibilities of managers themselves. JH felt that from an organizational perspective managers had very similar jobs regardless of the organization, but that differences often existed based on their experiences. Thus, differences could be seen, but they were the result of personal experiences rather than organizational demands.
Because it was clear that a great imbalance existed regarding the use of formal models between the DOD and the IC, I wanted to know if this applied to the larger class of simulations, which also includes wargames. JH felt that as exploratory tools, games served a similar role in both communities, but that the DOD had several distinct advantages when dealing with military issues. Specifically, because military questions often involved the development and use of weapons systems, many of the questions raised by a game could subsequently be explored through exercises and concerted data-gathering efforts to validate modeling results against real-world outcomes. Likewise, questions about parameters concerning operational or technical variables could also be answered empirically. Thus, tighter loops and feedback mechanisms could support wargames in the DOD when compared with similar games in the IC. So, even though wargames played similar roles with respect to analysis within the DOD and IC, the character of the games, particularly with respect to their fidelity and their ability to be linked with operations or supported by formal models, was greater on the DOD side as a result of a combination of culture and resources.
JH’s characterization of the linkages between analytic or exploratory wargames and models with operators prompted me to ask about the overall relationship between research and development, analysis, and operations. JH again drew upon his Navy experience and discussed how important close relationships among the three really were, and how many of the major problems affecting the DOD today were a result of this relationship breaking down and becoming overly rigid and tied to requirements processes that inhibit groups from working together. Specifically, he discussed the development of the AEGIS missile defense system, where R&D and operators worked closely together in an evolutionary process that explored how combinations of technology and operational use could push radar performance past previously anticipated limitations, resulting in a system that would never have been envisioned by the requirements process. JH noted that this process was heavily invested in tight feedback loops, the collection of empirical data on operational performance, exploration of the possible (as opposed to the emphasis on the feasible that results from the requirements process), and a minimized role for contractors, with the majority of the design, engineering, and testing performed in-house.
JH’s discussion of the AEGIS system underscored the importance of evolutionary approaches to problem solving in which prototypes are developed rapidly and then experimented with in operational contexts. He noted, however, that such an approach was increasingly scarce given the combination of DOD bureaucracy and Congressional oversight. In such cases, the IC and the classified Special Access Programs (SAPs) that operate with greater secrecy, and therefore less transparency, provide greater opportunities for experimentation and risk taking in accordance with the evolutionary model.
After discussing some of the features of technical development, I focused on questions that were more specific to the IC and the ODNI itself. My first question was about how the IC evaluates the quality of its analysis and analytic products. JH noted that it was a very difficult problem and something that was still in its infancy with respect to developing evaluative criteria beyond common production metrics. He emphasized the recent development of analytic integrity standards, and a greater emphasis on systematically comparing prior analytic forecasts with actual outcomes. JH felt that greater attention needed to be given to methods and analytic approaches, specifically focusing on performing cost-benefit analyses in order to determine whether some techniques are better than others with respect to generating useful insights for consumers. I asked JH about the dangers of such assessments, particularly analytic integrity standards, and he noted that there is always a risk that well-intentioned efforts to guide producers may become checklists that are employed without consideration of context. JH also noted that the analytic integrity standards could be seen as attempting to cope with the challenges of discerning arguments based on evidence vs. those based on intuition, and that the standards favored or promoted analysis grounded in evidence over analysis based on logical argumentation. From the perspective of modeling and simulation in the analytic process, such a distinction can be problematic, given that models are products of intuition/theory/simplifications, yet generate data upon which inferences are based, and are thus grounded in empirical observations of artificial systems.
Another question I asked JH related to new models of intelligence support to policymakers, such as increasingly considering blue capabilities and actions in assessments of foreign actors, the use of wargames or simulations with policymakers as players, etc. JH noted that a shift has occurred in the policy world and that expectations are increasingly shaped by social media technologies that position the user/consumer as an interactive participant in the delivery of information. He noted that providers of information increasingly face very rich and fast feedback from users who expect to have influence over the production process, shaping its format and content. As a result, the prospects for using simulations as an analytic tool that allows intelligence producers and policymakers to collectively explore complex problems may be rising in importance and effectiveness as a means of producing and delivering intelligence analysis (or perhaps intelligence “experiences”?).
The use of simulations as providers of experience that exercise policymakers and their staffs was also mentioned by Leon Fuerth in another discussion. Leon was concerned with the time required to develop models, simulations, and games that were suitable to the needs of policymakers. When faced with the same question, JH acknowledged the concern but was more optimistic, noting that game and software developers were increasingly employing specialized tools for developing products that significantly cut down development and testing time (indeed, many video games now provide players with content-creation tools for developing their own modifications and content, and these tools require no programming skills). Thus, JH believes that the infrastructure to support modeling and simulation is becoming increasingly streamlined and less complex, and that these innovations will reduce the amount of time required to develop models and simulations for use in analysis or with policymakers.
However, JH also observed that the abstract nature of models will always impose some limitations on the extent to which they can be expected to replicate reality. As a result, games and simulations will be less helpful in crisis situations, when policymakers’ concerns are focused on very specific courses of action and options. Instead, when questions are more strategic and focused on long-term concerns, models and simulations can play a more influential role by helping to define the characteristics or properties of sound strategies, identifying vulnerabilities, or identifying conditions for opportunistic action. On this issue, JH used the phrase “striking while the iron’s cold” to highlight the importance of interacting with policymakers when issues are not urgent or pressing and rational, critical analysis can be influential. Thus, on many strategic issues, analysis must be performed in advance and may even need to sit dormant until interest in the question/problem is raised.
JH concluded that many advancements in tradecraft were being made, but that innovations were largely confined to small groups and not encouraged systematically. He repeated his belief that greater attention to assessing tradecraft and evaluating techniques was needed in order to give managers and analysts a better understanding of what works, what doesn’t, and how to employ their resources in the production process.
My final questions were about the similarities and differences between intelligence analysis and scholarship. JH noted that academics tend to be more analytic than synthetic – focusing on the isolation of particular variables or causes in order to examine and explain events rather than aggregating them to obtain a comprehensive understanding. The result is that much of scholarship, even from policy-oriented academics, has been unhelpful to real-world policymakers and analysts because of its emphasis on studying the “lens” or theory and its applicability to many problems or cases, rather than the particulars of specific problems. This again was reminiscent of John Lewis Gaddis’s argument about “particular generalization,” in which he argues that lessons should be drawn from the study of individual cases and examples, rather than from envisioning a theory and then attempting to fit multiple cases to it.