I've been spending a lot of time writing model code for my ongoing dissertation and have discovered some interesting and potentially problematic design issues when writing models in NetLogo, one of the most popular platforms for Agent-Based Modeling (ABM).

One of the most important, and often underappreciated, aspects of ABM is the role of the Random Number Generator (RNG), which drives the activation regime and other stochastic aspects of any simulation. The RNG is managed through the assignment of "seeds" that initialize a random number stream, so that any series of draws, dice rolls, activation orders, and more can be replicated if necessary. From a simplistic perspective, two simulations started from the same random seed will produce the same results (assuming a single-threaded model).
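As a minimal sketch of that guarantee in NetLogo (the seed value and procedure name here are arbitrary, not from my model), re-seeding with the same value replays the same stream:

```netlogo
;; Seeding the RNG makes a sequence of draws repeatable.
to demo-seed
  random-seed 137               ;; fix the main generator's seed
  repeat 5 [ show random 100 ]  ;; some sequence of five draws
  random-seed 137               ;; reset to the same seed
  repeat 5 [ show random 100 ]  ;; the identical sequence again
end
```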

As experimental designs get more sophisticated, the need to isolate random number draws within the simulation becomes more important. For example, one may want to ensure that a randomly created geography is the same, but that the initial populations on it vary from run to run. Likewise, one may want to have the attributes of agents identical from run to run, but have their initial positions in geographic space vary. Even more challenging designs may require exploring the consequences of an individual agent making different choices, holding all other agents in the system constant (unless they interact with the particular agent in question). In each of these cases, the scheme for initializing RNGs must be considered.
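In stock NetLogo, with its single generator, one way to get this kind of isolation is to re-seed the main RNG between setup phases. This is only a sketch of the idea; `geo-seed`, `pop-seed`, and the two setup procedures are hypothetical names, not code from my model:

```netlogo
;; Hold the geography constant across runs while letting the initial
;; population vary, by giving each setup phase its own seed.
globals [ geo-seed pop-seed ]  ;; hypothetical experiment parameters

to setup
  clear-all
  random-seed geo-seed   ;; same value in every run: identical geography
  setup-terrain          ;; hypothetical procedure that builds the map
  random-seed pop-seed   ;; varied from run to run: different populations
  setup-population       ;; hypothetical procedure that creates turtles
  reset-ticks
end
```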

NetLogo generally has a single RNG, although there is a downloadable extension that adds additional RNGs with limited functionality. In the course of designing my experiments, I started to run into several problems with the repeatability of simulations given particular random seeds. I was turning on and off several mathematical operations, calculating the mean, median, min, max, and standard deviation of agent populations and subpopulations as part of the analytic scheme, and I found that when I toggled certain analytic tools and outputs I got different results despite the fact that everything else was held constant. After several days of debugging, I eventually, *and falsely*, concluded that NetLogo's built-in mathematical operations were interfering with the RNG. I was able to solve the problem by wrapping all of my analytic tools inside NetLogo's with-local-randomness [] block, which saves the state of the main generator before its commands run and restores it afterward, so that random draws inside the block leave the simulation's stream untouched. More recently, in attempting to recreate my problems, I realized that the culprit was not the mathematical operations themselves, but the "of" reporter, which iterates over the agentset being examined in a randomized order.

For example:

let _mean.value mean [my.value] of turtles

affects the random number stream because of the way "of" moves through the set of turtles in making the calculation. Alternatively, the code:

with-local-randomness
[ let _mean.value mean [my.value] of turtles ]

has no effect on the simulation's RNG.

What I've discovered is that if fine-grained experimental control is needed, and there are areas of the code that may not execute in every simulation (graphs and outputs in particular) but that may affect the RNG, it is best to wrap them in with-local-randomness [] blocks.
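As a sketch of that practice (the `go` structure, the `show-statistics?` switch, and the plot name below are illustrative, not taken from my model):

```netlogo
to go
  ask turtles [ rt random 360 fd 1 ]  ;; core logic draws from the main RNG
  ;; Optional output: whether or not this block runs, the main random
  ;; stream is left untouched, so results stay repeatable.
  if show-statistics? [
    with-local-randomness [
      set-current-plot "Mean value"
      plot mean [my.value] of turtles  ;; "of" randomizes order harmlessly here
    ]
  ]
  tick
end
```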