This post extends the mathematical model of the Rock-Paper-Scissors (RPS) game analyzed in earlier postings (here and here). In order to overcome the limitations of the mathematical model, an Agent-Based Model (ABM) was developed that contains the essential elements and assumptions of the original model. However, by changing the formalism, new opportunities to extend the model become possible.
The ABM presented in this posting is simple, and simply a first step: a validation test to see whether the transition from a mathematical, equation-based model to an ABM can produce the same results in quantitative and qualitative form. If this is the case, then additional extensions to the model, such as the introduction of geography, uncertainty, etc., can all be understood as producing consequential effects rather than computational artifacts.
Key features of the mathematical model that undergird the initial ABM are:
 Perfect mixing: each agent has an equal probability of interacting with every other agent. This means that features such as geography are rendered moot, as the agents reside in a sort of featureless "soup" or on a complete graph where all agents are connected to all other agents.
 Infinite populations: the mathematical model operates by breaking the population into an infinitely sized mass of actors characterized by their relative proportions. From an ABM perspective, this is potentially problematic because agents are discrete entities. For example, if there are 100 agents in a system, there cannot be a population where 0.5 of an agent (0.5%) employs the rock strategy: there can only be 1 or 0 rocks. The alternative approach is for agents to employ mixed strategies, where each agent can play a mix of rocks, papers, and scissors, meaning that each possesses a portfolio that can be infinitely divided over the continuous range [0, 1.0].
 Homogeneous and perfect information: the mathematical model assumes that each agent has the same view of the population and the proportions of the different strategies found within it. Because the original model governed the dynamics of the entire population, it was agnostic as to what the agents within it actually knew about themselves and others. The conversion to an ABM requires identifying what information the agents would possess if they were to behave according to the dictates of the equation originally intended to characterize the population. In the case of the replicator model, agents know the existing proportions of rock, paper, and scissors within the population, as well as the relative payoffs and fitness of each type, allowing them to adapt their strategies according to the precepts of the replicator equation. However, an important difference between the ABM and the mathematical model is that in the ABM, each agent can possess a distinct, unique view of the population, because each agent examines the population of agents without including itself. Thus, each may have a slightly different perspective on the population depending on its size and the uniqueness of the agent's individual strategy.
 Identical payoffs: the mathematical model carries two assumptions, one of which is easily changed mathematically, while the other is not. Simple versions of the mathematical model assign the same payoffs to every win, loss, or draw. A more sophisticated version of the payoffs may treat wins, losses, and draws differently depending on the kinds of strategies that are interacting. Thus, a draw between rocks may have a different payoff than a draw between scissors, etc. A more difficult challenge to the mathematical model is its dependence on the fact that all agents are treated as homogeneous in all regards other than their individual strategies. If an agent loses based on its strategy and that of its opponents, it doesn't matter who it loses to. Thus, losing to a large, powerful agent has the same payoff as losing to a small, weak agent.
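To make the payoff assumptions concrete, here is a minimal Python sketch (illustrative only; the model itself is written in NetLogo, and the names here are not from its code) of a strategy-pair-specific payoff table, using the baseline settings win = 2, lose = -2, draw = 0 from the experiments below:

```python
# Hypothetical payoff table: PAYOFF[a][b] is the payoff for playing
# strategy a against strategy b, with win = 2, lose = -2, draw = 0.
PAYOFF = {
    "rock":     {"rock": 0,  "paper": -2, "scissors": 2},
    "paper":    {"rock": 2,  "paper": 0,  "scissors": -2},
    "scissors": {"rock": -2, "paper": 2,  "scissors": 0},
}

def payoff(mine, theirs):
    """Payoff to the focal strategy `mine` when it meets `theirs`."""
    return PAYOFF[mine][theirs]
```

Because every entry can be set independently, a draw between rocks could be given a different value than a draw between scissors, as discussed above.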
The first step of the ABM development is to code the mathematical model as a global model, where the replicator equation is run on the entire population. Because it is possible that an agent-based approach may produce different results than the mathematical model, after the initial population is constructed the mathematical model runs separately and does not refer to the state of the agents themselves. Thus, the computational model contains two models running in parallel: the mathematical model running the replicator equation based on the initial payoffs and population structure, and an ABM where each agent adapts its individual strategy based on the logic of the replicator equation and its unique view of the other agents.
The following implementation details are provided as NetLogo code. Critical syntax issues that the reader should be familiar with are:
 "let" creates a variable and then assigns a value to that variable
 "set" assigns a new value to an existing variable, where the variable is first named and is then assigned a new value
 "[]" together denote the creation of a list, similar to an array (although not identical as a data structure)
 "[ … ]" denotes a block of code for a subroutine, conditional, or loop
More detail on NetLogo and its syntax can be found here. The implemented model can be found here or on the models page of this blog.
The global model appears as follows (note that the prefix "g." is used to denote a global variable in the simulation code, so as not to confuse it with a local or agent variable):
 g.mathematical.percent.rocks      ; the percentage of rocks the mathematical model predicts given the initial conditions
 g.mathematical.percent.papers     ; the percentage of papers the mathematical model predicts given the initial conditions
 g.mathematical.percent.scissors   ; the percentage of scissors the mathematical model predicts given the initial conditions
 g.r.r.payoff                      ; the payoff for playing rock against rock
 g.r.p.payoff                      ; the payoff for playing rock against paper
 g.r.s.payoff                      ; the payoff for playing rock against scissors
 g.p.r.payoff                      ; the payoff for playing paper against rock
 g.p.p.payoff                      ; the payoff for playing paper against paper
 g.p.s.payoff                      ; the payoff for playing paper against scissors
 g.s.r.payoff                      ; the payoff for playing scissors against rock
 g.s.p.payoff                      ; the payoff for playing scissors against paper
 g.s.s.payoff                      ; the payoff for playing scissors against scissors
 g.rock.payoff                     ; the estimated value of playing the rock strategy given a population of opponents
 g.paper.payoff                    ; the estimated value of playing the paper strategy given a population of opponents
 g.scissor.payoff                  ; the estimated value of playing the scissors strategy given a population of opponents
 g.initial.fitness                 ; the initial fitness of a state prior to modification based on its strategy
 g.average.fitness                 ; the average fitness of states based on their strategies
 g.rock.fitness                    ; the computed fitness of states that play the rock strategy
 g.paper.fitness                   ; the computed fitness of states that play the paper strategy
 g.scissor.fitness                 ; the computed fitness of states that play the scissors strategy
In the equation model, each time step of the simulation will make the following computation at the global level:
 set g.rock.payoff g.mathematical.percent.rocks * g.r.r.payoff + g.mathematical.percent.papers * g.r.p.payoff + g.mathematical.percent.scissors * g.r.s.payoff
 set g.paper.payoff g.mathematical.percent.rocks * g.p.r.payoff + g.mathematical.percent.papers * g.p.p.payoff + g.mathematical.percent.scissors * g.p.s.payoff
 set g.scissor.payoff g.mathematical.percent.rocks * g.s.r.payoff + g.mathematical.percent.papers * g.s.p.payoff + g.mathematical.percent.scissors * g.s.s.payoff
 set g.rock.fitness g.initial.fitness + g.rock.payoff
 set g.paper.fitness g.initial.fitness + g.paper.payoff
 set g.scissor.fitness g.initial.fitness + g.scissor.payoff
 if g.rock.fitness <= 0 or g.paper.fitness <= 0 or g.scissor.fitness <= 0
 [
   let _fitnesses []
   set _fitnesses fput g.scissor.fitness _fitnesses
   set _fitnesses fput g.paper.fitness _fitnesses
   set _fitnesses fput g.rock.fitness _fitnesses
   let _min min _fitnesses
   set g.initial.fitness ((_min * -1) + 0.1)
   set g.rock.fitness g.initial.fitness + g.rock.payoff
   set g.paper.fitness g.initial.fitness + g.paper.payoff
   set g.scissor.fitness g.initial.fitness + g.scissor.payoff
 ]
 set g.average.fitness g.mathematical.percent.rocks * g.rock.fitness + g.mathematical.percent.papers * g.paper.fitness + g.mathematical.percent.scissors * g.scissor.fitness
 set g.mathematical.percent.rocks g.mathematical.percent.rocks * g.rock.fitness / g.average.fitness
 set g.mathematical.percent.papers g.mathematical.percent.papers * g.paper.fitness / g.average.fitness
 set g.mathematical.percent.scissors g.mathematical.percent.scissors * g.scissor.fitness / g.average.fitness
In each time step, the mathematical.percent.rocks, papers, and scissors update according to the mathematical model. This establishes a baseline against which the performance of the agent model can be compared.
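For readers more comfortable outside NetLogo, the global update above can be paraphrased as a short Python sketch. This is a restatement for clarity, not the model's source; the payoff values and the exact form of the positivity guard are assumptions mirroring the code above:

```python
# Baseline payoffs (win = 2, lose = -2, draw = 0), as in the experiments below.
PAYOFF = {
    "rock":     {"rock": 0,  "paper": -2, "scissors": 2},
    "paper":    {"rock": 2,  "paper": 0,  "scissors": -2},
    "scissors": {"rock": -2, "paper": 2,  "scissors": 0},
}

def replicator_step(props, payoff, initial_fitness=5.0):
    """One global replicator update.

    `props` maps each strategy to its population share;
    `payoff[a][b]` is the payoff for playing a against b."""
    strategies = list(props)
    # Expected payoff of each strategy against the current population mix.
    pay = {a: sum(props[b] * payoff[a][b] for b in strategies) for a in strategies}
    # If any fitness would be non-positive, raise the baseline so the lowest
    # fitness becomes 0.1 (the rescaling below requires positive fitness).
    if initial_fitness + min(pay.values()) <= 0:
        initial_fitness = -min(pay.values()) + 0.1
    fit = {a: initial_fitness + pay[a] for a in strategies}
    avg = sum(props[a] * fit[a] for a in strategies)
    # Each share grows or shrinks by its fitness relative to the average.
    return {a: props[a] * fit[a] / avg for a in strategies}
```

For example, starting from shares of 0.5 rock, 0.3 paper, and 0.2 scissors, one step raises paper (which beats the abundant rock) and lowers rock and scissors.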
The ABM running in parallel operates somewhat differently from a mechanical perspective, but employs the identical essential logic. First, agents are initialized; then, in each time step, they survey the other agents and adjust their strategies according to the prescriptions of the replicator dynamic. Importantly, each agent determines its next allocation of rocks, papers, and scissors at the same time, and then each agent updates its allocation at the same time. This two-step process ensures that each agent operates from the same information on the population and that the order in which they make their decisions does not matter.
The initialization of agents proceeds as follows (the prefix "my." denotes an agent variable so as to avoid confusion with other local or global variables):
 set my.id g.state.id
 set g.state.id g.state.id + 1
 set my.rocks 0
 set my.papers 0
 set my.scissors 0
 set my.next.rocks 0
 set my.next.papers 0
 set my.next.scissors 0
 state.set.random.strategy
 let _red random 255
 let _blue random 255
 let _green random 255
 let _colors []
 set _colors fput _green _colors
 set _colors fput _blue _colors
 set _colors fput _red _colors
 set my.color _colors
 set color my.color
Where "state.set.random.strategy" provides a normalized mix of rocks, papers, and scissors to the state (the prefix "_" denotes a local variable that is used in the decision-making routine and is then discarded from memory):
 let _rocks random-float 1.0
 let _papers random-float 1.0
 let _scissors random-float 1.0
 let _total _rocks + _papers + _scissors
 set my.rocks _rocks / _total
 set my.papers _papers / _total
 set my.scissors 1.0 - (my.rocks + my.papers)
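The same normalization can be sketched in Python (an illustrative paraphrase, not the model's source): three uniform draws are divided by their total, and the last share is taken as a remainder so the three sum to exactly 1.0:

```python
import random

def random_strategy(rng=random):
    """Return a random mixed strategy over rocks, papers, and scissors."""
    rocks = rng.random()
    papers = rng.random()
    scissors = rng.random()
    total = rocks + papers + scissors
    rocks /= total
    papers /= total
    # Take scissors as the remainder so the shares sum to exactly 1.0,
    # avoiding floating-point drift, as in the NetLogo routine above.
    return {"rocks": rocks, "papers": papers, "scissors": 1.0 - (rocks + papers)}
```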
During each time step, each agent performs the calculations needed for the replicator dynamic:
 let _other.rocks sum [my.rocks] of other states
 let _other.papers sum [my.papers] of other states
 let _other.scissors sum [my.scissors] of other states
 let _total.capabilities _other.rocks + _other.papers + _other.scissors
 let _prob.encounter.rock _other.rocks / _total.capabilities
 let _prob.encounter.paper _other.papers / _total.capabilities
 let _prob.encounter.scissors 1.0 - (_prob.encounter.rock + _prob.encounter.paper)
 let _rock.payoff _prob.encounter.rock * g.r.r.payoff + _prob.encounter.paper * g.r.p.payoff + _prob.encounter.scissors * g.r.s.payoff
 let _paper.payoff _prob.encounter.rock * g.p.r.payoff + _prob.encounter.paper * g.p.p.payoff + _prob.encounter.scissors * g.p.s.payoff
 let _scissor.payoff _prob.encounter.rock * g.s.r.payoff + _prob.encounter.paper * g.s.p.payoff + _prob.encounter.scissors * g.s.s.payoff
 let _initial.fitness g.initial.fitness
 if _initial.fitness + _rock.payoff + _paper.payoff + _scissor.payoff <= 0
 [
   set _initial.fitness (_rock.payoff + _paper.payoff + _scissor.payoff - 0.1) * -1
 ]
 let _rock.fitness _initial.fitness + _rock.payoff
 let _paper.fitness _initial.fitness + _paper.payoff
 let _scissor.fitness _initial.fitness + _scissor.payoff
 let _average.fitness _prob.encounter.rock * _rock.fitness + _prob.encounter.paper * _paper.fitness + _prob.encounter.scissors * _scissor.fitness
 set my.next.rocks my.rocks * _rock.fitness / _average.fitness
 if my.next.rocks < 0
 [
   set my.next.rocks 0
 ]
 set my.next.papers my.papers * _paper.fitness / _average.fitness
 if my.next.papers < 0
 [
   set my.next.papers 0
 ]
 set my.next.scissors my.scissors * _scissor.fitness / _average.fitness
 if my.next.scissors < 0
 [
   set my.next.scissors 0
 ]
The rule replicates the population-level replicator dynamic inside each agent. At the conclusion of the algorithm, there are checks to ensure that the future percentages of rocks, papers, and scissors are zero or greater and do not drift into impossible negative values. Such results are computational artifacts that can occur as a result of the asymptotic approach to zero.
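Pulling the per-agent steps together, the decision routine can be paraphrased in Python as follows (again an illustrative sketch with assumed names, not the NetLogo source; the payoff table uses the baseline win = 2, lose = -2, draw = 0):

```python
# Baseline payoff table (win = 2, lose = -2, draw = 0).
PAYOFF = {
    "rock":     {"rock": 0,  "paper": -2, "scissors": 2},
    "paper":    {"rock": 2,  "paper": 0,  "scissors": -2},
    "scissors": {"rock": -2, "paper": 2,  "scissors": 0},
}

def agent_next_allocation(mine, others_totals, payoff, initial_fitness=5.0):
    """One agent's next mixed strategy, given its view of the other agents.

    `mine` maps each strategy to this agent's current share;
    `others_totals` sums the shares held by all *other* agents."""
    total = sum(others_totals.values())
    # Probability of encountering each strategy, excluding the agent itself.
    prob = {s: others_totals[s] / total for s in others_totals}
    pay = {a: sum(prob[b] * payoff[a][b] for b in prob) for a in prob}
    # Guard against non-positive fitness, mirroring the NetLogo routine above.
    if initial_fitness + sum(pay.values()) <= 0:
        initial_fitness = -(sum(pay.values()) - 0.1)
    fit = {a: initial_fitness + pay[a] for a in pay}
    avg = sum(prob[a] * fit[a] for a in pay)
    # Rescale each share by relative fitness; clamp at zero so no share
    # drifts into impossible negative values.
    return {a: max(0.0, mine[a] * fit[a] / avg) for a in pay}
```

In the simulation, every agent computes its next allocation before any agent updates, giving the synchronous two-step process described above.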
After each agent determines its future strategies, it updates its portfolio to match the future allocation:
 set my.rocks my.next.rocks
 set my.papers my.next.papers
 set my.scissors my.next.scissors
The results of the ABM containing 900 agents can be seen in graphical form below, in each case replicating the experiments performed in the earlier posting:
Win = 2
Lose = -2
Draw = 0
Initial Fitness = 5
 The population dynamics of the ABM
 The population dynamics of the original mathematical model
 The differences of the ABM and mathematical model for each of the three strategies
The next experiment altered the payoff of a draw to be -0.2:
Win = 2
Lose = -2
Draw = -0.2
Initial Fitness = 5
 The population dynamics of the ABM
 The population dynamics of the original mathematical model
 The differences of the ABM and mathematical model for each of the three strategies
The setting of draw to be equal to a loss, -2:
Win = 2
Lose = -2
Draw = -2
Initial Fitness = 5
 The population dynamics of the ABM
 The population dynamics of the original mathematical model
 The differences of the ABM and mathematical model for each of the three strategies
The setting of Draw to 0.2:
Win = 2
Lose = -2
Draw = 0.2
Initial Fitness = 5
 The population dynamics of the ABM
 The population dynamics of the original mathematical model
 The differences of the ABM and mathematical model for each of the three strategies
The setting of Draw to 2:
Win = 2
Lose = -2
Draw = 2
Initial Fitness = 5
 The population dynamics of the ABM
 The population dynamics of the original mathematical model
 The differences of the ABM and mathematical model for each of the three strategies
In each case, the agent model replicates the mathematical model, meaning that the logic of the population's dynamics can be transformed into the decision-making logic of individual agents, albeit with the assumptions discussed earlier. Critically, from a modeling perspective, the transition to an ABM framework allows for the incorporation of new features and concepts into the model, allowing for continued extensions to capture additional features of geography, heterogeneous/imperfect information, differences in payoffs based on additional agent attributes, etc. These extensions will be implemented and examined in future postings.
The NetLogo implementation of the Replicator Equation used in this post can be downloaded here (note that the file extension .txt should be changed to .nlogo for running in NetLogo).