I’ve just participated in a two-day workshop in Vienna hosted by the International Institute for Applied Systems Analysis (IIASA) called Security in the age of Systemic Risk: Tactics and Options for Dealing with Femtorisks and Beyond. This intimidating title masked the openness and interdisciplinary character of the workshop, which brought together a variety of scholars and practitioners working in the areas of complexity science, risk analysis, decision sciences, mathematics, and international relations, continuing the conceptual and community development that began in 2008 at the Santa Fe Institute around complexity and international relations.
It was very rewarding to spend two days with people thinking through the opportunities and challenges of applying complexity to international relations. Despite the title, complexity played a stronger role in the workshop than I had anticipated, while less time was spent on the formal methods of risk analysis. This was sensible, since the workshop’s goals were largely focused on new ways of defining and identifying risks in complex social systems. The term ‘femtorisks’ was never formally defined, yet it had intuitive appeal, capturing the concern for very small actors and events that have the potential to generate cascading effects that can disrupt or destroy complex systems.
Three significant topic areas became the central focus of the discussions:
- The development and maintenance of robustness in systems;
- The challenging nature of policy/intelligence failures;
- The need for new analytic tools and conceptual approaches to the study of international relations and global networks.
Complex systems provided the conceptual centerpiece of the workshop, drawing heavily on the features that provide robustness as a result of evolutionary processes. Participants spent a great deal of time thinking through how lessons from biological systems can be applied to international relations. A framework for searching for robustness as a result of evolutionary processes was provided that focused on the trade-offs between three variables: heterogeneity, redundancy, and modularity.
Robust systems maintain diversity, functionally redundant units or processes, and modular designs with sparse connections that limit cascades of failure from one part of the system to others. Yet many forces encourage the loss of diversity in systems, resulting in increased compatibility, the loss of redundancies in the search for efficiency, and increasingly tighter coupling between units, accelerating and extending feedback and the dispersion of information across units.
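The role of modularity in containing cascades can be illustrated with a toy threshold-cascade model. This is my own minimal sketch, not anything presented at the workshop: each node fails once at least half of its neighbors have failed, and we compare a tightly coupled ring of 20 nodes against two 10-node modules joined by a single sparse bridge.

```python
def ring(n, offset=0):
    """Ring lattice: each node linked to its two nearest neighbors."""
    return {offset + i: {offset + (i - 1) % n, offset + (i + 1) % n}
            for i in range(n)}

def cascade_fraction(adj, seed, threshold=0.5):
    """Propagate a threshold cascade from one seed failure: a node fails
    once at least `threshold` of its neighbors have failed.
    Returns the fraction of the system that ends up failed."""
    failed = {seed}
    changed = True
    while changed:
        changed = False
        for node, nbrs in adj.items():
            if node in failed:
                continue
            if sum(n in failed for n in nbrs) / len(nbrs) >= threshold:
                failed.add(node)
                changed = True
    return len(failed) / len(adj)

# Tightly coupled system: one 20-node ring, no internal boundaries.
coupled = ring(20)

# Modular system: two 10-node rings joined by a single bridge edge.
modular = {**ring(10), **ring(10, offset=10)}
modular[0] = modular[0] | {10}
modular[10] = modular[10] | {0}

print(cascade_fraction(coupled, seed=0))   # 1.0 -- failure sweeps the whole system
print(cascade_fraction(modular, seed=0))   # 0.5 -- the bridge node resists; the cascade stops at the module boundary
```

The bridge node survives because only one of its three neighbors fails, so the sparse connection between modules acts as a firebreak, exactly the property that tighter coupling erodes.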
It was quite interesting to play with these concepts, since they allowed the complexity of a system to remain constant while its composition changed, becoming more or less robust. In the case of financial markets, two measures of systemic complexity ran against one another: actors became increasingly homogeneous before the financial crisis, while the internal boundaries designed to decouple the performance of different economic instruments evaporated, exposing each to the moves of the others. A kind of conservation of complexity may occur, where the loss of heterogeneity (one measure of complexity) is offset by increases in connectivity (another measure of complexity). This suggested, at least to me, that composite measures of complexity may be needed that can identify and characterize trade-offs in complex design spaces, rather than employing simpler, one-dimensional measures of system complexity based on one of the three dimensions, e.g. diversity vs. disparity in ecology, or graph-theoretic measures of network structure.
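The trade-off can be made concrete with two simple component measures, normalized Shannon entropy over actor types (heterogeneity) and link density (connectivity). The snapshots below are hypothetical, invented purely for illustration: a "before" state with diverse strategies and loose coupling, and an "after" state where strategies converge while coupling becomes dense, so each one-dimensional measure alone tells a different story.

```python
from collections import Counter
from math import log2

def diversity(types):
    """Normalized Shannon entropy of actor types
    (0 = fully homogeneous, 1 = maximally diverse)."""
    counts = Counter(types)
    n = len(types)
    h = -sum((c / n) * log2(c / n) for c in counts.values())
    max_h = log2(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h

def density(edges, n_nodes):
    """Fraction of possible undirected links that are present."""
    return 2 * len(edges) / (n_nodes * (n_nodes - 1))

# Hypothetical pre-crisis snapshot: diverse strategies, loose coupling.
before_types = ["hedge", "value", "macro", "quant", "retail", "hedge"]
before_edges = {(0, 1), (2, 3), (4, 5)}

# Hypothetical later snapshot: convergent strategies, dense coupling.
after_types = ["quant"] * 5 + ["hedge"]
after_edges = {(i, j) for i in range(6) for j in range(i + 1, 6)}

# Heterogeneity falls (~0.97 -> ~0.65) while connectivity rises (0.2 -> 1.0):
# either measure alone would miss half the change in the system's risk profile.
print(diversity(before_types), density(before_edges, 6))
print(diversity(after_types), density(after_edges, 6))
```

A composite measure would have to track both axes at once, since the drop along one dimension is masked by the rise along the other.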
Given my own interests in intelligence and policy analysis, I was initially surprised how quickly and persistently the issue of intelligence failure came to the forefront of the conversation. There was a lingering concern over the inability to predict and foresee events that in retrospect were obvious. While this discussion was not focused exclusively on intelligence, it tracked with the extensive literature on intelligence failure. The group identified three generalized sources of failure:
- A failure to orient the collection and analysis of information on the right variables or issues, e.g. overly emphasizing elites and nation states and thereby missing micro-level and population-based trends and dynamics until it is too late;
- The failure of decision-makers to use the information available to them, ignoring assessments and analysis that do not conform to their world-views;
- The fundamental nature of complex systems, where epistemological limits exist that make prediction a scientific impossibility.
The final major topic of discussion was the need for new analytic tools and measures. Bolstering the topic of robustness in systems, there was a desire for new measurements of a system’s complexity and its potential for significant change, shifting between alternative qualitative regions or modes of behavior. What emerged was a clear need to challenge classical analysis, in which systems are decomposed into individual units, studied in isolation, and then recomposed. Replacing this approach, however, remains difficult in a formal sense. The discussion of analytic approaches also highlighted the differences between empirical approaches and normative needs. It is one thing to create a basis for measuring the complexity of a system, and implicitly its exposure to risks, and quite another to have the techniques for searching for interventions, estimating their consequences, and determining whether they can be implemented.