It has been a while, but I’m going to return to the Rock-Paper-Scissors (RPS) model that I started developing several weeks ago. This posting looks at the mechanics of adding geography to the model, while a future posting will actually run the geographic version; in that sense this posting is a bridge between model assessments. Having already established that the agent-based framework can replicate the behavior of the equation-based model, modifications to the ABM can add new features and relax assumptions embedded in the mathematical formalism, allowing for explorations that are increasingly relevant to the international system.
The previous model matched the dynamics of the mathematical replicator equation for the RPS game, but it made several assumptions that can be relaxed through the use of an ABM. Here, I demonstrate the addition of geography to the model. Doing so requires adapting the agent representation employed in the non-spatial model: a new class of agents, called ‘ghosts’, marks the territory owned by each ‘state’ agent. The ghosts assist in visualizing the landscape and allow the model to use NetLogo’s underlying geometric algorithms for computing agent distances and neighborhoods.
The core features of the model are depicted below. These include basic setup and go buttons, a slider for the total number of states in the system, and parameter controls for the weighting assigned to the distance between states and for the effects of border length. Finally, controls for adjusting the payoffs of the RPS game have been exposed on the interface, although the model still assumes the generic framework in which rock beats scissors, scissors beats paper, and paper beats rock.
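For reference, the code excerpts below lean on a handful of globals and interface controls. The names are taken from the procedures shown later in this post; how they are split between a globals block and interface sliders is my assumption:

```
extensions [table]   ; the table extension is required for the my.distances and my.border.lengths tables

globals
[
  g.state.id   ; a counter used to hand out unique state ids during setup
]

; Interface controls assumed by the procedures shown below:
;   total.states        ; slider: the number of states seeded on the landscape
;   distance.weighting  ; slider: the exponent applied to (1 / distance)
;   border.weighting    ; slider: the exponent applied to (1 + border length)
;   g.r.r.payoff ... g.s.s.payoff  ; the nine RPS payoffs (e.g. g.r.s.payoff is rock's payoff against scissors)
;   g.initial.fitness   ; the baseline fitness added to each strategy's payoff
```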
The new agent classes in the model (breeds in NetLogo’s terminology) are states and ghosts. Their basic composition is shown below:
```
breed [states state]

; In the geographic model, ghosts mark the patches owned by states. This allows states to own
; an arbitrary amount of territory, creating irregular neighborhood structures that are more
; reflective of real-world geopolitics. The ghosts have no behavior; they are strictly tools for
; visualization and for handling geography in the model by capitalizing on NetLogo's internal geometry.
breed [ghosts ghost]

; Each state possesses the attributes below.
; Agent variables are denoted by the prefix "my." so that they are not confused with
; temporary variables used in particular methods or with global variables.
states-own
[
  my.id               ; an identification number
  my.rocks            ; the percentage of rocks in the state's military portfolio
  my.papers           ; the percentage of papers in the state's military portfolio
  my.scissors         ; the percentage of scissors in the state's military portfolio
  my.next.rocks       ; the percentage of rocks the state will have in the next time step
  my.next.papers      ; the percentage of papers the state will have in the next time step
  my.next.scissors    ; the percentage of scissors the state will have in the next time step
  my.color            ; the default color of the state, assigned randomly during setup
  my.ghosts           ; the set of ghosts the state owns, marking its territory
  my.border.ghosts    ; the subset of its ghosts that sit on the border with other states
  my.distances        ; a table of nearest distances to the other states, determined by their closest ghosts
  my.border.lengths   ; a table of the number of border ghosts shared with each of the state's neighbors
  my.fitness          ; a calculation of the state's fitness when measured against the rest of the population
]

ghosts-own
[
  my.state.id   ; identifies the state that the ghost belongs to
  my.border?    ; identifies whether the ghost borders the ghosts of another state
]
```
Employing a slider, the model can be set up to generate any number of states (see ‘total.states’ above), thus providing some control over the system’s geography. For example, a 30 x 30 landscape with one state per patch appears like any other grid landscape, with each agent claiming a single patch:
However, by adjusting the model’s slider, it is possible to alter the geography, reducing the number of states while adding increasing irregularity to the spatial characteristics of the system. A system with 100 states appears below:
A system with 25 states:
And finally a system with 15 states:
With an irregular geography, generated by randomly tessellating states until the entire landscape is full (i.e., there is a ghost agent on every patch), a reasonable approximation of a land-based international system can be represented. This configuration does not account for natural barriers or for oceans (which can connect states across long distances), and therefore any interpretation of the geography should be constrained. It is nonetheless a useful representation of the effects of neighborhoods on the competitive pressures each state exerts on the others within a narrow geographic and technological regime.
Each state then maintains a table containing its distance from every other state. There are many ways of measuring distance, but an effective one is to use the minimum distance between the two states’ patches. For bordering states, the distance will be 1. For states that do not share a border, the distance is the outcome of a search that compares the border patches of state X with the border patches of state Y, selecting the minimum distance.
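That search can be written as a small reporter. This is only an illustrative sketch (the setup code below performs the same comparison in bulk while building each state’s distance table):

```
; Illustrative sketch only: the minimum over all pairings of this state's border ghosts
; and the other state's border ghosts, mirroring the search described above.
to-report state.distance.to [_other]   ; state procedure; _other is another state agent
  report min [ min [ distance myself ] of [my.border.ghosts] of _other ] of my.border.ghosts
end
```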
Finally, states also maintain a separate table of their border lengths, keeping track of the number of patches they control that border patches controlled by other states. (In the code below, these counts are normalized by each state’s total so that they can serve as relative weights; a state that shares 15 border patches with one neighbor and 5 with another would weight them 0.75 and 0.25.) This allows a state not only to consider the distance between itself and others in the system, but also to give greater weight to those with which it shares extensive borders.
When considered in full, the setup of the simulation’s geography and states appears as below:
```
to setup.states
  set-default-shape states "square"
  set-default-shape ghosts "square"
  set g.state.id 0
  ask n-of total.states patches
  [
    sprout-states 1
    [
      setup.state
    ]
  ]
end
```
```
; For each state being created, assign it an id, a random strategy, and a random color
to setup.state
  set hidden? true
  set my.id g.state.id
  set g.state.id g.state.id + 1
  set my.rocks 0
  set my.papers 0
  set my.scissors 0
  set my.next.rocks 0
  set my.next.papers 0
  set my.next.scissors 0
  state.set.random.strategy
  ; build a random [red green blue] color list for the state
  let _red random 255
  let _green random 255
  let _blue random 255
  let _colors []
  set _colors fput _blue _colors
  set _colors fput _green _colors
  set _colors fput _red _colors
  set my.color _colors
  set color my.color
  set my.ghosts nobody
  set my.border.ghosts nobody
  set my.distances table:make
  set my.border.lengths table:make
  ; seed the state's territory with a single ghost on its home patch
  hatch-ghosts 1
  [
    set color [my.color] of myself
    set my.state.id [my.id] of myself
    set my.border? false
    set hidden? false
  ]
end
```
```
; Entering into this routine, each state is represented by a single ghost, potentially leaving a large
; amount of empty space on the landscape. "Setup.geography" fills the landscape by growing the states and
; then performs the calculations needed to determine their distances to one another and their shared
; border lengths.
to setup.geography
  ; first, tessellate the landscape, filling all empty spaces by growing out from the established states
  let _viable.patches patches with [not any? ghosts-here and any? neighbors4 with [any? ghosts-here]]
  while [any? patches with [not any? ghosts-here]]
  [
    ask one-of _viable.patches
    [
      let _copy.ghost one-of ghosts-on neighbors4
      sprout-ghosts 1
      [
        set my.state.id [my.state.id] of _copy.ghost
        set my.border? false
        set color [color] of _copy.ghost
      ]
    ]
    set _viable.patches patches with [not any? ghosts-here and any? neighbors4 with [any? ghosts-here]]
  ]
  ; second, bind the ghosts to their states and determine which are border ghosts
  ask states
  [
    set my.ghosts ghosts with [my.state.id = [my.id] of myself]
  ]
  ask states
  [
    ask my.ghosts with [any? (ghosts-on neighbors4) with [my.state.id != [my.state.id] of myself]]
    [
      set my.border? true
    ]
    set my.border.ghosts my.ghosts with [my.border?]
  ]
  ; third, for each state, set its distance to every other state as the shortest distance between their ghosts
  ask states
  [
    let _distances []
    let _distance.table my.distances
    let _my.borders my.border.ghosts
    ask states
    [
      set _distances []
      ask my.ghosts
      [
        ask _my.borders
        [
          set _distances fput distance myself _distances
        ]
      ]
      table:put _distance.table my.id min _distances
    ]
    set my.distances _distance.table
  ]
  ; fourth, for each state at distance 1, record the border lengths; all other states have a border length of 0.
  ; border lengths are scaled (normalized by the state's total) for use in later calculations
  ask states
  [
    let _total 0
    foreach table:keys my.distances
    [
      ifelse table:get my.distances ? = 1
      [
        table:put my.border.lengths ? count my.border.ghosts with [any? (ghosts-on neighbors4) with [my.state.id = ?]]
        set _total table:get my.border.lengths ? + _total
      ]
      [
        table:put my.border.lengths ? 0
      ]
    ]
    foreach table:keys my.distances
    [
      table:put my.border.lengths ? ((table:get my.border.lengths ?) / _total)
    ]
  ]
  ; finally, update the colors of the ghosts to reflect the colors of the states that own them
  color.update.states
end
```
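A plausible top-level setup procedure simply strings these pieces together. The wrapper is not shown in the excerpts above, so the following is an assumption about how they are called:

```
; Assumed wrapper: clear the world, seed the states, grow the geography, then start the clock.
to setup
  clear-all
  setup.states
  setup.geography
  reset-ticks
end
```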
Once set up, the model’s calculations can be adapted to incorporate geography. This is achieved by adding two modifiers to the replicator equation, though doing so requires altering the fundamental structure of the code.
In the traditional model, each state had an identical probability of interacting with every other, representing a system that was well (or perfectly) mixed. In the geographic version this is no longer the case. When states are well mixed, the probability of encountering a particular strategy, e.g. rock, is simply the average of each state’s probability of playing rock. This allowed for very simple calculation using NetLogo’s basic ‘sum’ or ‘mean’ commands, which can rapidly total a variable across a set of agents. Instead, each agent must now find the mean probability of rock (and paper and scissors) weighted by its distance to the other states. Moreover, these distances must be adjusted by a tunable parameter in order to see how different strengths of local vs. global interactions affect the model’s dynamics. A simple way of doing so is to multiply each rival state’s strategy by (1 / distance) ^ distance weight. For example, with a distance weighting of 2, a state three patches away contributes only (1/3)^2 ≈ 0.11 of the weight of an adjacent state.
This is simple and easy to calculate, and has the nice property of reducing the entire effect of distance to nothing when the distance weight is set to 0. Importantly, it is also possible to set the parameter to a negative number (although not on the model interface as currently coded), creating a mathematically plausible but politically questionable circumstance in which states that are farther apart exercise greater influence on one another than those that are close together. Perhaps some military technologies could encourage risky behavior as distance increases, thus increasing concerns over those states far removed by geography, such as a state with a monopoly on ranged weapons that can strike at a distance while remaining invulnerable to retaliation (and thus likely to estimate a lower barrier to the use of coercive threats and violence against those far away).
After adjusting for the distances between agents, the border lengths are considered. This is a second term multiplied into the estimate of encountering a rival state: (1 + border length) ^ border weight. Again, when two states do not share a border, or the border weight is set to zero, the term equals one and has no effect on the calculations. When the border weight is positive, the result is an amplification of the sensitivity between neighboring states based on their shared borders.
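Putting the two modifiers together (using notation introduced here only for exposition; the post otherwise presents this in code), the weight that state i assigns to rival state j is

$$w_{ij} = \left(\frac{1}{d_{ij}}\right)^{\alpha}\left(1 + b_{ij}\right)^{\beta},$$

and the probability state i perceives of encountering, say, rock becomes

$$P_i(\text{rock}) = \frac{\sum_{j \ne i} w_{ij}\, r_j}{\sum_{j \ne i} w_{ij}\left(r_j + p_j + s_j\right)},$$

where d_ij is the minimum distance between the two states, b_ij the scaled border length, α the distance weighting, β the border weighting, and r_j, p_j, s_j are state j’s rock, paper, and scissors shares.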
In software, the modified replicator equation for each state is:
```
to state.update.next.strategy
  ; Each state plays the replicator equation based on its unique view of the population.
  ; Agents view the population as if they were invading it; they do not count their own strategy
  ; when evaluating the population as a whole. Thus, each sees the population from a different perspective:
  ; the distances between actors (adjusted by distance.weighting) and the shared border lengths (adjusted by
  ; border.weighting) alter the probability of encountering each strategy, and therefore the payoffs.
  let _other.rocks sum
  [
    my.rocks *
    ((1 / table:get my.distances [my.id] of myself) ^ distance.weighting)
    * ((1 + table:get my.border.lengths [my.id] of myself) ^ border.weighting)
  ] of other states
  let _other.papers sum
  [
    my.papers *
    ((1 / table:get my.distances [my.id] of myself) ^ distance.weighting)
    * ((1 + table:get my.border.lengths [my.id] of myself) ^ border.weighting)
  ] of other states
  let _other.scissors sum
  [
    my.scissors *
    ((1 / table:get my.distances [my.id] of myself) ^ distance.weighting)
    * ((1 + table:get my.border.lengths [my.id] of myself) ^ border.weighting)
  ] of other states
  let _total.capabilities _other.rocks + _other.papers + _other.scissors
  let _prob.encounter.rock _other.rocks / _total.capabilities
  let _prob.encounter.paper _other.papers / _total.capabilities
  let _prob.encounter.scissors 1.0 - (_prob.encounter.rock + _prob.encounter.paper)
  ; expected payoff of each strategy against the perceived population
  let _rock.payoff _prob.encounter.rock * g.r.r.payoff + _prob.encounter.paper * g.r.p.payoff +
    _prob.encounter.scissors * g.r.s.payoff
  let _paper.payoff _prob.encounter.rock * g.p.r.payoff + _prob.encounter.paper * g.p.p.payoff +
    _prob.encounter.scissors * g.p.s.payoff
  let _scissor.payoff _prob.encounter.rock * g.s.r.payoff + _prob.encounter.paper * g.s.p.payoff +
    _prob.encounter.scissors * g.s.s.payoff
  ; shift the baseline fitness upward if the payoffs would otherwise drive total fitness to zero or below
  let _initial.fitness g.initial.fitness
  if _initial.fitness + _rock.payoff + _paper.payoff + _scissor.payoff <= 0
  [
    set _initial.fitness (_rock.payoff + _paper.payoff + _scissor.payoff - 0.1) * -1
  ]
  let _rock.fitness _initial.fitness + _rock.payoff
  let _paper.fitness _initial.fitness + _paper.payoff
  let _scissor.fitness _initial.fitness + _scissor.payoff
  let _average.fitness _prob.encounter.rock * _rock.fitness + _prob.encounter.paper * _paper.fitness +
    _prob.encounter.scissors * _scissor.fitness
  ; the replicator step: grow or shrink each strategy in proportion to its relative fitness
  set my.next.rocks my.rocks * _rock.fitness / _average.fitness
  set my.next.papers my.papers * _paper.fitness / _average.fitness
  set my.next.scissors my.scissors * _scissor.fitness / _average.fitness
  ; renormalize so the portfolio sums to one
  let _total my.next.rocks + my.next.papers + my.next.scissors
  set my.next.rocks my.next.rocks / _total
  set my.next.papers my.next.papers / _total
  set my.next.scissors 1 - (my.next.rocks + my.next.papers)
end
```
```
; Once every state has computed its next strategy, switch all states over simultaneously
to states.set.next.strategy
  ask states
  [
    set my.rocks my.next.rocks
    set my.papers my.next.papers
    set my.scissors my.next.scissors
  ]
end
```
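For completeness, the per-tick schedule implied by these procedures might look like the following; the original excerpts do not show the go procedure, so this sequencing is an assumption:

```
; Assumed scheduler: every state computes its next strategy from the current population,
; then all states switch simultaneously.
to go
  ask states [ state.update.next.strategy ]
  states.set.next.strategy
  tick
end
```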
With the basic model modified, its effects will be examined in a forthcoming posting, and the actual NetLogo implementation will be made available as well.