This post provides another summary from an interview I conducted as part of my dissertation research; it is included in the Appendix of the final, now-completed version. I have known Dr. Bruce since 1998, when I was a student of his at Georgetown.
Background: James Bruce (JB) spent 24 years in the intelligence community, serving in senior positions in the National Intelligence Council, the Silberman-Robb WMD Commission, and the CIA’s Sherman Kent School. After retiring from the CIA in 2005, he joined RAND as a Senior Political Scientist. He is also an adjunct professor at Georgetown University’s Security Studies Program, and is coeditor, with Roger George, of Analyzing Intelligence: Origins, Obstacles, and Innovations, published by Georgetown University Press.
Discussion:
On August 13, 2012, I met with JB to discuss intelligence analysis, the use of modeling and simulation, and many of the underlying challenges of analytic tradecraft and production from the perspective of the philosophy of science. Afterwards, our dialogue continued via email, introducing new topics and refining points made previously. Because of the wide-ranging character of our continuing discussion, and because it was a two-way conversation rather than an interview, the summary below touches upon four of the most interesting topics we discussed rather than attempting a detailed reconstruction of the entire discourse.
The first topic we discussed concerned the characteristics of useful intelligence analysis and whether the practice of intelligence analysis could be consistent with science. JB argued that useful analytic products offer consumers judgments, forecasts, and insights, each of which constitutes a different type of information. For example, the intellectual justification for reaching a judgment about the capabilities or intentions of a foreign missile program may be quite different from that for forecasts regarding how scientific discoveries and engineering applications might produce strategic consequences that alter the balance of power in the international system. During our conversation, the development of forecasts and insight received the bulk of our attention due to the difficulties associated with each when compared with providing consumers with judgments. Indeed, judgments were relatively straightforward to identify and produce, although determining whether they are based on sound evidence and logical reasoning, and whether they are robust or fragile in light of uncertainties, remains a continual problem.
JB argued that at the foundation of intelligence production lie two major scientific challenges: the generation of hypotheses and their testing. His treatment of intelligence as a scientific practice was consequential in two ways. First, he challenged the treatment of intelligence as a strictly artistic or intuitive act by holding analysts to the procedural standards of the scientific method as characterized by scientific positivism. Second, JB’s inclusion of hypothesis generation in the scientific method marks an important difference between his thinking and the work of others who have advocated scientific approaches to intelligence analysis, e.g. Isaac Ben-Israel, who argued that where theories come from lies outside the boundaries of science or scientific consideration. By including how hypotheses are generated, JB placed the act of theorization itself within the scope of what can be treated and studied systematically, and exposed the development and application of analysts’ mindsets or mental models to evaluation with respect to rigor and logical coherence. This is of great importance from the perspective of modeling and simulation, because it introduces opportunities to employ machine learning, evolutionary computation, and other approaches that focus on the discovery of new models and relations between variables and actors, rather than limiting the role of models to hypothesis testing alone. Indeed, even without the heavy use of computational tools, the very act of model development and design often serves an important role in theory development and in generating the hypotheses that guide future testing in scientific or analytic practice.
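To make the hypothesis-generation point concrete, here is a minimal sketch, in Python, of how evolutionary computation can search a space of candidate hypotheses rather than merely test a fixed one. The indicator variables, the synthetic data, and the fitness function are all invented for illustration; none of them come from our discussion.

```python
import random

# Toy illustration of hypothesis *generation* by evolutionary search:
# candidate hypotheses are conjunctions of observed indicator variables,
# and fitness rewards predictive accuracy while penalizing complexity.
# All variable names and data below are invented.

random.seed(1)

VARIABLES = ["troop_moves", "fuel_purchases", "radio_silence", "leader_travel"]

# Synthetic ground truth: the outcome depends on two of the four indicators.
def outcome(obs):
    return obs["troop_moves"] and obs["radio_silence"]

OBSERVATIONS = [{v: random.random() < 0.5 for v in VARIABLES} for _ in range(200)]

def fitness(hypothesis):
    """Fraction of cases the conjunction predicts correctly,
    minus a small penalty per variable to favor parsimony."""
    correct = sum(
        all(obs[v] for v in hypothesis) == outcome(obs) for obs in OBSERVATIONS
    )
    return correct / len(OBSERVATIONS) - 0.02 * len(hypothesis)

def mutate(hypothesis):
    """Flip one randomly chosen variable into or out of the conjunction."""
    h = set(hypothesis)
    v = random.choice(VARIABLES)
    h ^= {v}
    return frozenset(h) if h else frozenset({v})

# Evolve a small population of candidate hypotheses.
population = [frozenset({random.choice(VARIABLES)}) for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print("Best hypothesis:", sorted(best), "fitness:", round(fitness(best), 3))
```

The point of the sketch is that the search over hypotheses is itself a systematic, inspectable process, which is precisely the move JB makes when he brings hypothesis generation inside the scientific method.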
Our second topic flowed logically from the first. By treating analysis as a type of scientific hypothesis testing, JB noted that several of the long-standing challenges in the philosophy of science emerge and affect intelligence analysis. Specifically, he noted the important differences between nomothetic and idiographic approaches to science, history, and analysis. In the former, empirical cases are taken to be instances of a general phenomenon that can be studied and tested. The latter, alternatively, sees history as a single path through a dynamic system, where each case is unique and embedded in a context determined by prior experiences and perceptions of earlier events. In the idiographic case, the empirical record is no longer a collection of independent events that provide the basis for general claims, but a particular trajectory that may or may not be indicative of repeatable outcomes; e.g., if the Cold War between the US and Soviet Union were replayed 1,000 times, how many different ways would it have concluded, and with what frequencies? Our conversation noted that each approach has important implications for what it means to test hypotheses and how to test them. The nomothetic approach is consistent with the dominant approaches found in the social sciences, e.g. Large-N statistical and Small-N case study approaches, while the idiographic approach is more consistent with history, ecology, evolutionary biology, and the analysis of other complex systems. We discussed how simulation might provide opportunities to bridge gaps between the two approaches, giving analysts and consumers greater confidence in assessments. I noted that Agent-Based Modeling, specifically because of its ability to capture and represent microlevel differences between cases, may provide a powerful tool for differentiating between those outcomes that are structurally determined or based on variables that recur across cases (nomothetic) and those that are contingent and dependent on the specific features that make one situation unique (idiographic), as sketched below.
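Below is a minimal Python sketch of the "replay history 1,000 times" idea. Two rival agents repeatedly choose to escalate or back down based on noisy perceptions of tension; re-running identical starting conditions under different random draws yields a frequency distribution over endings. The agents, parameters, and outcome labels are all invented for illustration and are far simpler than any real agent-based model.

```python
import random
from collections import Counter

def replay(seed, rounds=45):
    """One replay of a toy two-power rivalry; returns how it ends."""
    rng = random.Random(seed)
    resolve = {"A": 0.6, "B": 0.6}  # each side's willingness to keep escalating
    tension = 0.3
    for _ in range(rounds):
        for side in ("A", "B"):
            perceived = tension + rng.gauss(0.0, 0.1)  # noisy perception
            if perceived > resolve[side]:
                resolve[side] -= 0.05  # back down; resolve erodes
                tension = max(tension - 0.05, 0.0)
            else:
                tension += rng.uniform(0.0, 0.1)  # probe or escalate
        if tension > 1.0:
            return "war"
        if min(resolve.values()) < 0.1:
            return "one side collapses"
    return "enduring standoff"

# Replay the same starting conditions 1,000 times with different draws.
frequencies = Counter(replay(seed) for seed in range(1000))
for ending, count in frequencies.most_common():
    print(f"{ending}: {count / 1000:.1%}")
```

Endings that dominate across replays are candidates for structural, nomothetic explanation; endings that appear only occasionally point to contingency, which is exactly the distinction that the microlevel detail of Agent-Based Modeling can help expose.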
A third topic of discussion was the characteristics of knowledge and foreknowledge about the world. JB noted that intelligence is often defined as foreknowledge about the world. He started with a definition of knowledge that he found particularly helpful in his own thinking, where knowledge is defined as justified, true belief. We discussed the implications of this definition, particularly as it applies to intelligence, given the treatment of intelligence as foreknowledge about the world and therefore constrained by what is knowable a priori. Importantly, the boundary between what is epistemologically knowable and what is not is contingent upon whether one views the world in nomothetic terms or idiographic ones, because each implies a different belief about the repeatability of events and the reliability of patterns across cases.
Additionally, we discussed how in many cases a notable gap exists between what can be known beforehand as a practical matter and the necessary role of decisions and actions beyond those of the intelligence community in determining the truth of assessments. For example, the foreknowledge of Osama Bin Laden’s (OBL’s) location in Pakistan was justified based on intelligence observations of the Abbottabad compound and believed by analysts and policy makers in some probabilistic sense (each person who knew about the compound may have believed the identity of its mysterious resident was OBL with a different level of confidence), but the truth regarding his location could only be determined by physically raiding the compound and identifying its occupants. Thus, the intelligence community may not always be capable of determining the truth of its assessments on its own and may require policy makers to commit resources and authorities to operations in order to ultimately validate them.
Our fourth topic gave additional consideration to the problem of hypothesis testing in intelligence analysis. JB noted that the means for hypothesis testing when dealing with quantitative questions are well established and often quite accurate. Thus, on quantitative matters analysts may have sufficient opportunities to develop knowledge (again, defined as justified true belief) given the maturity of scientific hypothesis testing, particularly through the use of statistical inference. However, JB noted that the situation is far less developed for testing qualitative hypotheses, which constitute the bulk of intelligence questions. Thus, the strength of the justifications provided for many beliefs about intelligence targets may not be knowable, given the inability to adequately test qualitative hypotheses. JB noted that qualitative assessments continue to pose three important challenges.
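As a concrete illustration of the quantitative side of this point, consider a hypothetical question of my own devising, not JB's: a source claims a facility produces on average two missiles per month, yet 38 were observed in a year. A standard tail-probability calculation tests the claim directly; all numbers are invented.

```python
import math

def poisson_tail(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))

# H0: the claimed rate of 2/month implies 24 expected events per year.
# How surprising is observing 38 or more under that hypothesis?
p_value = poisson_tail(38, 24.0)
print(f"P(>= 38 observed | claimed rate) = {p_value:.4f}")
```

A small tail probability gives the analyst a precise, defensible basis for rejecting or retaining the claim, which is exactly the kind of mature testing machinery JB noted is missing on the qualitative side.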
First, precise characterizations of uncertainty regarding qualitative assessments remain problematic. While analytic tradecraft emphasizes providing consumers with the critical uncertainties that would affect analytic judgments, precisely how to do so on qualitative matters remains an unsolved scientific problem. Second, JB noted that in many ways intelligence analysis can be reduced to a series of logical sentences that are either true or false. This approach would seem to owe its origins to logic, philosophy, and linguistics, and I do not know whether intelligence analysis has been evaluated in this fashion before. I suspect that the closest approach to this sort of logical construction of intelligence judgments would be the employment of Bayesian techniques to construct assessments out of chains of conditional probabilities (a minimal sketch follows below). Finally, JB noted that mapping the space between facts and judgments remains a considerable philosophical problem. He argued that the structure of this space is important for determining where knowledge resides and how it is constructed out of the many sources and methods available to analysts.
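To make that speculation concrete, here is a minimal sketch of what such a Bayesian construction might look like: an overall judgment decomposed into a chain of conditional probabilities, then revised when a new report arrives. The propositions and probabilities are entirely invented for illustration.

```python
# Chain rule: P(judgment) = P(A) * P(B|A) * P(C|A,B), e.g.
#   A: the program exists
#   B: it is weapons-related, given it exists
#   C: it will be tested within a year, given A and B
p_a = 0.90
p_b_given_a = 0.70
p_c_given_ab = 0.50
print(f"Prior judgment: {p_a * p_b_given_a * p_c_given_ab:.2f}")

# Bayes' rule update of link B when a new report E arrives:
#   P(B|A,E) = P(E|B,A) * P(B|A) / P(E|A)
p_e_given_b = 0.80      # how likely the report is if B is true
p_e_given_not_b = 0.20  # how likely the report is if B is false
p_e = p_e_given_b * p_b_given_a + p_e_given_not_b * (1 - p_b_given_a)
p_b_updated = p_e_given_b * p_b_given_a / p_e
print(f"P(B|A) after the report: {p_b_updated:.2f}")
print(f"Updated judgment: {p_a * p_b_updated * p_c_given_ab:.2f}")
```

Each link in the chain is a sentence that is either true or false, and the arithmetic makes explicit how uncertainty in any one link propagates into the final judgment, which speaks to JB's first and third challenges as well as the second.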