I got into modeling and simulation after being trained in strategic studies and intelligence analysis. When I was first exposed to Agent-Based Modeling, I recognized it as an opportunity to extrapolate from the particular features of individual case studies, to rerun history, or to compare alternative competing historical accounts. Having been raised to be skeptical of large-N approaches to political and strategic problems and of quantitative measures that often obscure more than they reveal, I’ve never been fully comfortable with my more quantitative peers who tend to dominate the ranks of professional modeling, simulation, and applied methodology. Recently, I’ve begun to invest my time in reading about the philosophy of science, and I am highly encouraged and dismayed at the same time.
My encouragement comes from the fact that there exists a vigorous debate within the scientific community over the limitations of observation, the claims of deduction, the strengths of induction, alternative competing theories, and so on. While none of this is surprising, finding the body of literature that organizes the issues and history undergirding so much of model use and inference is really quite helpful. Having been through many sessions of debates on model validation, I think the philosophy of science offers a more fruitful set of discussions to have between modelers and model users, and anyone else who is looking for insights into science-based decision-making, however they choose to define it.
My dismay comes from the fact that so many essential issues have not been part of my prior professional or academic training. I’ve spent a significant amount of time, nearly two decades, thinking about the prospects and challenges of a theory of intelligence (in the political/military/business sense, not of the brain itself), and yet have never seen the intelligence community return to the philosophy of science despite the fact that it wrestles with the same problems and dilemmas of evidence and inference. The professional literature certainly draws on these scientific concerns, but does not explicitly ground the practice in terms of treating the discipline as a type of scientific practice that supports decision-making. While intelligence methodologists, analysts, and managers have invested heavily in the problems of individual cognition and organizational behavior, more fundamental questions regarding epistemology are rarely run to ground by linking professional and philosophical bodies of literature, terminology, or histories.
In my opinion, these are important issues for analysts and modelers to consider, and they really lie behind the validation trap that often causes modeling projects to stall out at the very moment when they are needed to help inform decision-makers. Essentially, after a model is built to study a particular problem and project out the consequences of alternative choices, the entire philosophy of science comes crashing down as a set of questions about the generalizability of the model’s structure or results, its ability to explain or predict particular cases, whether it is the most parsimonious or descriptive representation of the problem, etc. What fascinates me about the limited reading I’ve done so far is that there are strong debates about the roles of deduction and induction in science, about observation and detection, and more, all of which address the limitations of our experiences and what can be known empirically, as well as the limitations of models in matching our observations, and potential observations, of the empirical world.
Yet, there is a gap in the philosophy of science, because the assumption is that science is seeking to explain and understand ‘what is,’ while policymakers are consistently seeking to choose between ‘what could be.’ This subtle difference means that the worlds of policy, and of modeling and simulation within organizational processes, are fundamentally about counterfactuals and agency – the alternatives that result from different choices or combinations of choices. Naturally, there are empirical limitations to policy studies, because we cannot observe and compare those worlds in which alternative choices are made, and social systems are of sufficient complexity that we cannot even be sure that the choices we make actually determine the outcomes we observe (we might attribute the successful deterrence of a Soviet nuclear missile attack to US deterrence strategies, but we cannot know the extent to which others’ choices were factors). While such problems are readily acknowledged, coping with them in any rigorous way is rarely discussed. It seems interesting to me that so much of the philosophy of science is conducted via thought experiments, counterfactuals, and mental simulations, and yet I haven’t seen much consideration of these methods themselves. I’m hoping that my continued readings will turn up a well-developed literature, but so far I haven’t found one.