I’ve been spending a lot of time writing model code for my ongoing dissertation and have discovered some interesting and potentially problematic design issues when writing models in NetLogo, one of the most popular platforms for Agent-Based Modeling (ABM).
One of the most important and often underappreciated aspects of ABM is the role of the Random Number Generator (RNG), which drives the activation regime and the other stochastic aspects of any simulation. The RNG is managed by assigning “seeds” that initialize a random number stream, so that any series of draws, dice rolls, activation orders, and the like can be replicated if necessary. Put simply, two simulations started from the same random seed will produce the same results (assuming a single-threaded model).
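A minimal sketch of what this means in NetLogo: re-seeding with the random-seed primitive reproduces the exact same sequence of draws (the procedure name and seed value here are arbitrary).

to demo-seed
  random-seed 42
  show (list random 100 random 100 random 100)
  random-seed 42
  show (list random 100 random 100 random 100)   ; prints the identical list
end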
As experimental designs get more sophisticated, the need to isolate random number draws within the simulation becomes more important. For example, one may want to ensure that a randomly created geography is the same, but that the initial populations on it vary from run to run. Likewise, one may want to have the attributes of agents identical from run to run, but have their initial positions in geographic space vary. Even more challenging designs may require exploring the consequences of an individual agent making different choices, holding all other agents in the system constant (unless they interact with the particular agent in question). In each of these cases, the scheme for initializing RNGs must be considered.
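With NetLogo’s single RNG, one way to handle the first of these cases is simply to re-seed between setup phases. The sketch below assumes geo-seed and pop-seed are globals (sliders, say), and setup-geography and setup-population are hypothetical stand-ins for a model’s own setup procedures.

to setup
  clear-all
  random-seed geo-seed        ; held fixed across runs: identical geography
  setup-geography
  random-seed pop-seed        ; varied from run to run: different populations
  setup-population
  reset-ticks
end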
NetLogo generally has a single RNG, although there is a downloadable extension that adds additional RNGs with limited functionality. In the course of designing my experiments, I started to run into several problems with the repeatability of simulations given particular random seeds. As part of the analytic scheme, I was turning several mathematical operations on and off, calculating the mean, median, min, max, and standard deviation of agent populations and subpopulations. I found that when I toggled certain analytic tools and outputs I got different results, despite the fact that everything else was held constant. After several days of debugging, I eventually, and falsely, concluded that NetLogo’s built-in mathematical operations were interfering with the RNG. I was able to solve the problem by wrapping all of my analytic tools inside NetLogo’s with-local-randomness [] block, which pulls random numbers from the computer’s clock rather than from the stream controlling the simulation itself. More recently, in attempting to recreate my problems, I realized that the culprit was not the mathematical operations themselves but the “of” reporter, which randomizes the order of the set being examined.
For example:
let _mean.value mean [my.value] of turtles
affects the random number stream because of the randomized order in which it moves through the set of turtles when making the calculation. Alternatively, the code:
with-local-randomness
[let _mean.value mean [my.value] of turtles]
has no effect on the simulation’s RNG.
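One rough way to convince yourself of the difference is to take a draw after each version and compare it to a baseline; my.value here is assumed to be a turtles-own variable, as in the example above.

to check-stream
  random-seed 10
  let _unwrapped mean [my.value] of turtles   ; consumes draws from the main stream
  let a random 100
  random-seed 10
  with-local-randomness [ let _wrapped mean [my.value] of turtles ]
  let b random 100
  random-seed 10
  let c random 100                            ; baseline: no calculation at all
  show (list a b c)                           ; b matches c; a generally does not
end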
What I’ve discovered is that if fine-grained experimental control is needed, any areas of the code that may not be executed in every simulation run (graphs and outputs in particular) and that might affect the RNG are best wrapped in with-local-randomness [] blocks, as in the sketch below.
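Concretely, the pattern looks something like this, where update-plots? and the plot name are hypothetical and my.value is the turtles-own variable from the example above.

to do-outputs
  if update-plots?
  [
    with-local-randomness
    [
      set-current-plot "Mean value"
      plot mean [my.value] of turtles
      print (word "median: " median [my.value] of turtles)
    ]
  ]
end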