In preparing for the International Studies Association Annual Conference, I’ve been thinking about Agent-Based Models (ABM) and how they are employed in thinking about policy and decision-making. My assessment is that this community has only scratched the surface of what is possible with modeling, and currently employs ABM in ways that fall short of its potential for exploring micro-macro linkages to the extent possible and necessary.
The rhetoric around ABM is quite powerful and, I believe, convincing with respect to the need and the capability to explore how individual actions and choices generate macroscopic, system-level outcomes. I think any critical examination of other formal modeling approaches reveals holes in the knowledge they provide, because they wash away the processes and details of disaggregated representations of systems. Yet ABMs are rarely used to search for, and expose, the pivotal choice or choices that maximize (or minimize) downstream effects in a system.
Figuring out how to isolate individual actions in ABMs, and to explore deviations around particular paths, requires new thinking about experimental design and, quite often, about problem representation itself. I’ve started to play with this idea, using a simple Schelling segregation model as a test case for exploring how much variation is possible when every agent but one follows a set path of decision-making.
It turns out that just defining what that means is not trivial. For example, the ABM decomposes easily into agents that make decisions about where to move (a randomly selected empty patch) and when to move (determined by the activation regime). Does being free to deviate from the script from run to run mean, narrowly, that an agent might select a different random patch when it is its turn to move? Or does it mean, more broadly, that agents try to move to the same locations but do so in a different order? Somewhere between agent decision-making, action, and the activation regime, structure and agency start to blend.
To gain experimental control over this, I’ve designed a model in which each agent is given its own Random Number Generator (RNG) that serves as the basis for its movement choices. Within NetLogo’s modeling environment, the “ask” and “one-of” commands for dealing with sets cannot be used, because they rely on a centralized RNG: if one agent deviated while drawing from the main RNG, the disturbance would affect every agent that drew from that RNG afterwards. By assigning each agent its own RNG, agents can make their choices without affecting the choices of other agents (unless they interact by trying to share a resource that cannot be shared, such as attempting to move to the same patch).
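The idea can be sketched outside NetLogo. Here is a minimal Python illustration (the function name, seed value, and three-agent setup are mine, for demonstration only): each agent gets a private RNG seeded from a single master stream, so one agent consuming an extra draw leaves the other agents' streams untouched.

```python
import random

MASTER_SEED = 42  # hypothetical fixed seed, set once at initialization

def make_agent_rngs(n_agents, master_seed=MASTER_SEED):
    """Seed each agent's private RNG from one master stream, so all
    agents start with reproducible, mutually independent streams."""
    master = random.Random(master_seed)
    return [random.Random(master.getrandbits(32)) for _ in range(n_agents)]

# Two runs initialized identically; in the second, agent 0 deviates
# by consuming one extra draw from its own private stream.
rngs_a = make_agent_rngs(3)
rngs_b = make_agent_rngs(3)
rngs_b[0].random()

# Agents 1 and 2 are unaffected by agent 0's deviation.
assert rngs_a[1].random() == rngs_b[1].random()
assert rngs_a[2].random() == rngs_b[2].random()
```

With a single shared RNG, agent 0's extra draw would instead shift every subsequent agent onto different values, which is exactly the contamination described above.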
It turns out that problem representation becomes rather important in this scheme, as the simple task of picking a random patch can have dramatically different effects depending on how it is accomplished. By seeding each agent’s RNG from a common stream (set at the beginning of the simulation using a fixed value from the main RNG), each agent can be assured the same RNG stream upon initialization. This, however, is not enough to minimize agent interactions and make the agents independent. Consider a movement rule where all empty patches are organized and sorted into a list, and suppose there are always 70 empty patches. If agent 7 draws the number 5 at random and takes the item at index 5 (the 6th item), then every item in the list from index 5 onward shifts to a different value. The next agent to move, drawing index 10, will therefore be affected by agent 7’s choice. This turns out to be an artificial interaction, because the patch that second agent would have moved to in a “default” run might still be available despite agent 7’s earlier move. Because the agents are interacting with a shifted list, not with one another, the extent to which their choices affect one another is artificially high.
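The index-shift artifact is easy to demonstrate. In this small sketch (patch ids and index choices are mine, for illustration), one agent removing its patch from the shared list causes a later agent's index draw to land on a different patch, even though that agent's original target patch is still empty.

```python
# 70 empty patches with ids 0..69, held in one shared sorted list.
empty = list(range(70))

# Default run: the second agent draws index 10 and would take patch 10.
default_target = empty[10]

# Perturbed run: the first agent moves first, taking the item at index 5.
empty_perturbed = list(range(70))
taken_first = empty_perturbed.pop(5)  # patch 5 leaves the list

# The second agent still draws index 10, but the list has shifted by one.
perturbed_target = empty_perturbed[10]

assert taken_first == 5
assert default_target == 10
assert perturbed_target == 11                 # a different patch entirely...
assert default_target in empty_perturbed      # ...though patch 10 is still empty
```

The two agents never actually competed for a patch; only the shared list representation coupled their choices.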
To solve this problem, I’ve assigned each patch on the landscape a unique id. A moving agent randomly draws a patch id and, as long as the patch with that id is empty, moves there. Otherwise, the agent keeps drawing random ids until it gets one belonging to an empty patch. This means that agents only alter their movement based on actual collisions on the landscape, i.e., when one agent tries to move to a patch that is already occupied.
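This draw-until-empty rule amounts to rejection sampling over patch ids. A minimal sketch (the function name and the example occupancy set are mine): the agent draws ids from its own private RNG, and only a genuine collision with an occupied patch costs it extra draws.

```python
import random

def draw_empty_patch(agent_rng, occupied, n_patches):
    """Draw random patch ids from the agent's private RNG until one
    names an empty patch; only real collisions consume extra draws."""
    while True:
        pid = agent_rng.randrange(n_patches)
        if pid not in occupied:
            return pid

rng = random.Random(7)            # one agent's private RNG
occupied = {3, 8, 12}             # hypothetical occupied patch ids
pid = draw_empty_patch(rng, occupied, n_patches=20)
assert pid not in occupied
```

Because ids are stable properties of patches rather than positions in a mutable list, one agent's move cannot silently re-map another agent's draw.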
I’ve only started to play out what this really means for experimental design. I suspect that this problem of artificial vs. intended interactions occurs with great frequency, particularly where parts of the system have been represented efficiently in computational terms. Thus, the kinds of experiments I’m thinking about here will likely require changing how models are developed at the software level, and will probably reduce opportunities to “cheat” certain processes for the sake of performance (like using a master list of some shared resource rather than having each agent maintain its own list).
I’ve gotten a simple model built and am now playing around with experimental designs. Immediate tests are focused on establishing a single default case, and then, one by one, allowing a single agent to be “free” (to have a different stream from run to run) and seeing how different the landscape becomes. This offers an opportunity to answer the question: just how much do the choices of a single agent matter? It turns out that it depends. Some agents are “born happy,” located in neighborhoods that satisfy them, so they never move. When that occurs, quite simply, they have no choices to make and never affect the system. Other agents may move to new locations, and then everything is disrupted by their downstream effects. Nevertheless, there are some patterns that I think can be seen visually but need to be studied rigorously. For example, do certain parts of the landscape tend to support clusters of one group or another? The problem I’m anticipating is that to understand the full extent of an agent’s choices, I’d like to do more runs than there are possible outcomes for a given set of conditions, which will likely be infeasible. Likewise, it is one thing to alter an agent’s choices from t = 0, but isolating the decision made at t = 10 as the critical one calls for a totally new experimental design.
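The "one free agent" design can be sketched as a seeding scheme. In this hypothetical Python version (function name and the replicate-offset re-seeding rule are my own assumptions, not the model's actual code), every agent keeps its default seed across runs except the designated free agent, whose seed varies with the replicate number.

```python
import random

def run_seeds(n_agents, master_seed, free_agent=None, replicate=0):
    """Per-agent RNG seeds for one run: all agents keep their default
    seeds, except the free agent, whose seed varies by replicate."""
    master = random.Random(master_seed)
    seeds = [master.getrandbits(32) for _ in range(n_agents)]
    if free_agent is not None:
        # Assumed re-seeding rule: offset the master seed by the
        # replicate number so each run gives the free agent a new stream.
        seeds[free_agent] = random.Random(master_seed + replicate + 1).getrandbits(32)
    return seeds

base = run_seeds(5, master_seed=42)
free = run_seeds(5, master_seed=42, free_agent=2, replicate=1)

# Every agent except agent 2 keeps its default seed; agent 2's seed
# now comes from a different stream.
assert [s for i, s in enumerate(free) if i != 2] == \
       [s for i, s in enumerate(base) if i != 2]
```

Comparing the end-state landscape of each `free` run against the `base` run then isolates how much that one agent's choices matter.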
I believe that the technology of ABM can support these endeavors, but doing so effectively will require new ways of designing models and thinking about experiments, and, most importantly, of capturing and analyzing data, since the volume of output will be massive. My initial parameter sweep of 1,200 agents, run 200 times for each free agent, produced 240,000 runs and more than 10 GB of data, even saving only the end state of each simulation and not the preceding rounds. I’m hoping to start posting more about this problem, and the model code, over the next month.