Symbolic Creatures Simulation

This is the home page for the Symbolic Creatures Simulation experiment.
It was implemented by Angelo Loula, supervised by Ricardo Gudwin and João Queiroz.


This is an Artificial Life project in which we simulate an ecosystem that allows cooperative interaction between agents, including intra-specific predator-warning communication in a virtual environment with predatory events. We propose, based on Peircean semiotics and informed by neuroethological constraints, an experiment to simulate the emergence of symbolic communication among artificial creatures. Here we describe the simulation environment and the creatures' control architectures.


According to the semiotics of C.S. Peirce, there are three fundamental kinds of signs underlying meaning processes: icons, indexes and symbols (CP 2.275[*]). Icons are signs that stand for their objects through similarity or resemblance (CP 2.276, 2.247, 8.335, 5.73); indexes are signs that have a spatio-temporal physical correlation with their objects (CP 2.248, see 2.304); symbols are signs connected to their objects by the mediation of an interpretant. For Peirce (CP 2.307), a symbol is "A Sign which is constituted a sign merely or mainly by the fact that it is used and understood as such, whether the habit is natural or conventional, and without regard to the motives which originally governed its selection." It is "connected with its object by virtue of the idea of the symbol-using mind, without which no such connection would exist" (CP 2.299); a symbol is "a conventional sign, or one depending upon habit (acquired or inborn)" (CP 2.297).

Based on this framework, Queiroz and Ribeiro [2] performed a neurosemiotic analysis of vervet monkeys' intra-specific communication. These primates use vocal signs for intra-specific social interactions, as well as for general alarm purposes regarding imminent predation on the group [3]. They vocalize basically three predator-specific alarm calls, which produce specific escape responses: alarm calls for terrestrial predators (such as leopards) are followed by an escape to the top of trees, alarm calls for aerial raptors (such as eagles) cause vervets to hide under bushes, and alarm calls for ground predators (such as snakes) elicit careful scrutiny of the surrounding terrain. Queiroz and Ribeiro [2] identified the different signs and the possible neuroanatomical substrates involved. Icons correspond to neural responses to the physical properties of the visual image of the predator and the alarm call, and exist within two independent primary representational domains (visual and auditory). Indexes occur in the absence of a previously established relationship between call and predator, when the call simply arouses the receiver's attention to any concomitant event of interest, generating a sensory scan response. If the alarm call operates in a sign-specific way in the absence of an external referent, then it is a symbol of a specific predator class. This symbolic relationship implies the association of at least two lower-order representations in a higher-order representational domain.

Simulating Artificial Semiotic Creatures

The framework above guided our experiments in simulating the emergence of symbolic alarm calls. The environment is two-dimensional, with approximately 1000 by 1300 positions. The creatures are autonomous agents, divided into preys and predators. There are objects such as trees (climbable objects) and bushes (used to hide), and three types of predators: terrestrial, aerial and ground predators. Predators differ in their visual limitations: terrestrial predators can't see preys up in trees, aerial predators can't see preys under bushes, but ground predators have neither limitation. The preys can be teachers, which vocalize pre-defined alarms for predators, or learners, which try to learn these associations. There is also the self-organizer prey, which is a teacher and a learner at the same time, able to create, vocalize and learn alarms simultaneously.

The sensory apparatus of the preys includes hearing and vision; predators have only a visual sensor. The sensors have parameters that define sensory areas in the environment, used to determine the stimuli the creatures receive (Figure 1). Vision has a range, a direction and an aperture defining a circular sector, and hearing has just a range defining a circular area. These parameters are fixed, except for the visual direction, which is changed by the creature, and the visual range, which is increased during scanning. Each received stimulus corresponds to a number that identifies the creature or object sensed, together with the direction and distance of the stimulus from the receiver.


Figure 1: Sensory systems and parameters. (a) The visual sensory area is determined by a range (rg), an aperture (ap) and a direction (dr). Item 1 will be sensed by the creature, but not item 2 (outside the aperture and direction) nor item 3 (out of range). (b) The hearing sensory area is determined by a range (rg) alone. Items 1 and 2 will be sensed by the creature, but not item 3 (out of range).
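The sensory-area tests described above reduce to simple geometry. A minimal sketch follows; the function names and the angle convention (degrees, offsets relative to the creature) are our illustrative assumptions, not the simulation's actual code:

```python
import math

def in_vision_area(vrange, direction, aperture, dx, dy):
    """Check whether a stimulus at offset (dx, dy) from the creature
    lies inside the visual sensory area: a circular sector defined by
    a range, a direction and an aperture (Figure 1a)."""
    dist = math.hypot(dx, dy)
    if dist > vrange:
        return False  # out of range, like item 3 in Figure 1a
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    # smallest angular difference between stimulus and sensor direction
    diff = abs((angle - direction + 180.0) % 360.0 - 180.0)
    return diff <= aperture / 2.0

def in_hearing_area(hrange, dx, dy):
    """Hearing is just a circular area defined by its range (Figure 1b)."""
    return math.hypot(dx, dy) <= hrange
```

During scanning, the same test would be applied with `vrange` doubled.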

The creatures have interactive abilities in the form of high-level motor actions: adjusting the visual sensor, moving, attacking, climbing trees, hiding in bushes, and vocalizing (Figure 2). The last three actions are specific to preys, while attacks are performed only by predators. The creatures can perform actions concomitantly, except for the displacement actions (move, attack, climb and hide), which are mutually exclusive. The move action changes the creature's position in the environment and takes two parameters: a velocity (in positions/iteration, limited to a maximum velocity) and a direction (0-360 degrees). The visual sensor adjustment modifies the direction of the visual sensor (and, during scanning, doubles its range), and takes one parameter, the new direction (0-360 degrees). The attack action has one parameter indicating the creature to be attacked, which must be within the action range. If successful, the attack increments an internal variable of the attacked creature, the number of attacks suffered. The climb action takes as a parameter the tree to be climbed, which must be within the action range. When the creature is up in a tree, an internal variable called 'climbed' is set to true; when the creature moves, it is set to false and the creature goes down the tree. Analogously, the hide action takes as a parameter the bush used to hide, and it uses an internal variable called 'hidden'. The vocalize action has one parameter, the alarm to be emitted, a number between 0 and 99; it creates a new element in the environment that lasts just one iteration and can be sensed by creatures with hearing sensors.


Figure 2: Creature actions and parameters/limitations. (a) Movement needs a direction and a speed: the creature chooses a direction of movement and a speed at instant t, and at t+1 it will be in a new location. (b) Sensor adjustment needs a direction: the creature chooses the sensor direction at instant t, and at t+1 the sensor will have the new direction. (c) Attack needs another creature nearby: the creature can attack entity 1, inside the action range, but not entity 2; after an attack, the attacked entity is removed. (d) Climb needs a tree nearby: the creature can climb tree entity 1, but not 2. (e) Hide needs a bush nearby: the creature can hide in bush entity 1, but not 2. (f) Vocalizing needs an alarm: the creature vocalizes an alarm at instant t, and at the next instant, t+1, entity 1 is able to hear it, but not entity 2, which is out of hearing range.

To control their actions after receiving the sensory input, the creatures have a behavior-based architecture [4], dedicated to action selection [5]. Our control mechanism is composed of various behaviors and drives. Behaviors are independent, parallel modules that are activated at different moments depending on the sensory input and the creature's internal state. At each iteration, every behavior provides its motivation value (between 0 and 1), and the one with the highest value is activated and provides the creature's actions at that instant. Drives define basic needs, or 'instincts', such as 'fear' or 'hunger', and they are represented by numeric values between 0 and 1, updated based on the sensory input or the flow of time. This mechanism is not learned by the creature, but rather designed, providing basic responses to general situations.
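This winner-take-all selection over motivation values can be sketched as follows; the class and signatures are illustrative assumptions, not the simulation's actual interfaces:

```python
class Behavior:
    """An independent behavior module: it reports a motivation in
    [0, 1] and, if selected, produces the creature's actions."""
    def __init__(self, name, motivation_fn, action_fn):
        self.name = name
        self.motivation = motivation_fn   # (stimuli, drives) -> float
        self.act = action_fn              # (stimuli, drives) -> actions

def select_behavior(behaviors, stimuli, drives):
    """Winner-take-all action selection: the behavior with the
    highest motivation value is activated at this iteration."""
    return max(behaviors, key=lambda b: b.motivation(stimuli, drives))
```

For instance, a predator with wandering (constant motivation 0.4) and resting (motivation driven by tiredness) would rest whenever tiredness pushes resting's motivation above 0.4.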

Predators' cognitive architecture

The predators have a simple control architecture with basic behaviors and drives. The drives are hunger and tiredness, and the behaviors are wandering, resting and prey chasing (Figure 3). The drives are implemented as follows:

formula 1


Figure 3: Predator cognitive architecture.

The wandering behavior has a constant motivation value of 0.4 and makes the creature move in a random direction at a random velocity, directing its vision toward the movement direction. The resting behavior makes the creature stop moving, and its motivation is given by

formula 2

The chasing behavior makes the predator move toward the prey, if it is out of range, or attack it otherwise. The motivation of this behavior is given by

formula 3
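The chasing decision rule can be sketched as follows; the `Stimulus` record and the parameter names are hypothetical stand-ins for how the architecture actually passes sensory data:

```python
from collections import namedtuple

# A seen prey, as delivered by the visual sensor: an identifying
# number plus distance and direction from the receiver.
Stimulus = namedtuple("Stimulus", "id distance direction")

def chasing_action(prey, action_range, max_velocity):
    """Attack the prey if it is within the action range; otherwise
    move toward it at maximum velocity."""
    if prey.distance <= action_range:
        return ("attack", prey.id)
    return ("move", max_velocity, prey.direction)
```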

Preys' cognitive architecture

Preys have two sets of behaviors: communication-related behaviors and general behaviors. The communication-related behaviors are vocalizing, scanning, associative learning and following; the general ones are wandering, resting and fleeing. Associated with these behaviors are different drives: boredom, tiredness, solitude, fear and curiosity. Learners and teachers do not have the same architecture: only teachers have the vocalize behavior, and only learners have the associative learning behavior, the scanning behavior and the curiosity drive (Figure 4). The self-organizer prey, on the other hand, has all behaviors and drives.


Figure 4: Preys' cognitive architecture: (a) learners have scanning and associative learning capabilities and (b) teachers have vocalizing capability. The self-organizer prey is a teacher and a learner at the same time and has all these behaviors.

The prey's drives are specified by the expressions
formula 4

The tiredness drive is computed by the same expression used by predators.

The vocalize and associative learning behaviors can run in parallel with all other behaviors, so they do not undergo behavior selection. The vocalize behavior makes the prey emit an alarm when a predator is seen. The teacher has a fixed alarm set, using alarm number 1 for the terrestrial predator, 2 for the aerial predator and 3 for the ground predator. The self-organizer uses the alarm with the highest association value in its associative memory (next section) or, if none is known, randomly chooses an alarm from 0 to 99 and places it in the associative memory. (The associative learning behavior is described in the next section.)
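The self-organizer's alarm-selection rule can be sketched as follows, assuming (our assumption, for illustration) that the associative memory maps (alarm, predator) pairs to association strengths:

```python
import random

# The teacher's fixed repertoire, as described above.
TEACHER_ALARMS = {"terrestrial": 1, "aerial": 2, "ground": 3}

def choose_alarm(predator_type, associative_memory, rng=random):
    """Return the alarm a self-organizer vocalizes for a predator:
    the one with the highest association strength, or a random new
    alarm (0-99) placed in the memory if none is known yet."""
    known = {alarm: s for (alarm, pred), s in associative_memory.items()
             if pred == predator_type}
    if known:
        return max(known, key=known.get)  # strongest association wins
    alarm = rng.randint(0, 99)            # invent a new alarm...
    associative_memory[(alarm, predator_type)] = 0.0  # ...and remember it
    return alarm
```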

The scanning behavior makes the prey turn toward the alarm emitter and move in that direction, if an alarm is heard; turn to the same vision direction as the emitter, while still moving toward the emitter, if the emitter is seen; or keep the same vision and movement direction, if the alarm is no longer heard. Its motivation is given by curiosity(t), if an alarm is heard or if curiosity(t) > 0.2. This behavior also doubles the vision range, simulating a wide sensory scanning process.

To keep preys near each other, rather than spread out in the environment, the following behavior makes a prey stay between a maximum and a minimum distance of another prey by moving toward or away from it. This was inspired by experiments in the simulation of flocks, schools and herds. The motivation for following is equal to solitude(t), if another prey is seen.

The fleeing behavior has its motivation given by fear(t). It makes the prey move away from the predator at maximum velocity or, in some situations, perform specific actions depending on the type of predator. If a terrestrial predator is or was just seen and there is a tree not near the predator (the difference between the predator direction and the tree direction is more than 60 degrees), the prey moves toward the tree and climbs it. If it is an aerial predator and there is a bush not near it, the prey moves toward the bush and hides under it. If the predator is no longer seen and the prey is not up in a tree or under a bush, it keeps moving in the same direction as before, slightly changing its direction at random.
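The "not near the predator" test for a tree or bush reduces to an angular comparison; a brief sketch (the angle-wrapping convention is our assumption):

```python
def safe_refuge_direction(predator_dir, refuge_dir):
    """A refuge (tree or bush) is usable only if it is not near the
    predator: the angular difference between the predator direction
    and the refuge direction must exceed 60 degrees."""
    # wrap-aware angular difference in [0, 180]
    diff = abs((refuge_dir - predator_dir + 180.0) % 360.0 - 180.0)
    return diff > 60.0
```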

The wandering behavior makes the prey move in a random direction and velocity, slightly changing them at random. The vision direction is alternately turned left, forward and right. The motivation is given by boredom(t), if the prey is not moving and boredom(t) > 0.2, or zero otherwise. The resting behavior makes the prey stop moving, with motivation as for predators.

Associative Learning

Associative learning allows the prey to generalize spatio-temporal relations between external stimuli from particular instances. The mechanism is inspired by the neuroethological and semiotic constraints described previously, implementing a lower-order sensory domain through work memories and a higher-order multi-modal domain through an associative memory (Figure 5a).


Figure 5: (a) Associative learning architecture. (b) Association adjustment rules.

The work memories are temporary repositories of stimuli: when a sensory stimulus is received from either sensor (auditory or visual), it is placed in the respective work memory with maximum strength; at every subsequent iteration its strength is lowered, and when the strength reaches zero the stimulus is removed. The strength of stimuli in the work memory (WM) varies according to the expression

formula 5
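These work-memory dynamics can be sketched as follows; the per-iteration decay step used here (0.05) is an illustrative stand-in, not the simulation's actual expression:

```python
class WorkMemory:
    """Temporary repository of stimuli: items enter at maximum
    strength, decay each iteration, and are dropped at zero."""
    def __init__(self, decay=0.05):
        self.decay = decay
        self.items = {}   # stimulus -> strength in [0, 1]

    def receive(self, stimulus):
        self.items[stimulus] = 1.0   # enters with maximum strength

    def tick(self):
        """One iteration of decay; returns the stimuli dropped,
        which trigger weakening of their associations."""
        dropped = []
        for stim in list(self.items):
            self.items[stim] -= self.decay
            if self.items[stim] <= 0.0:
                del self.items[stim]
                dropped.append(stim)
        return dropped
```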

The items in the work memories are used by the associative memory to produce and update associations between stimuli, following basic Hebbian learning (Figure 5b). When an item is present in the visual WM and another in the auditory WM, an association between them is created or reinforced in the associative memory, and further changes in its associative strength are inhibited. Inhibition avoids multiple adjustments of the same association caused by persisting items in the work memory. When an item is dropped from the work memory, its non-inhibited associations, i.e. those not already reinforced, are weakened, and its inhibited associations have their inhibition partially removed. When both items of an inhibited association have been removed, the association's inhibition ends, making it subject again to changes in strength. The reinforcement and weakening adjustments for non-inhibited associations, with strengths limited to the interval [0.0, 1.0], are done as follows:

formula 6
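The inhibition bookkeeping of Figure 5b can be sketched as follows; the reinforcement and weakening steps here (+0.1 and -0.05) are illustrative stand-ins for the actual adjustment expressions:

```python
class AssociativeMemory:
    """Hebbian associations between visual and auditory items, with
    inhibition to avoid repeated adjustment by persisting WM items."""
    def __init__(self, up=0.1, down=0.05):
        self.up, self.down = up, down
        self.strength = {}    # (visual_item, auditory_item) -> strength
        self.inhibited = {}   # pair -> items still in the work memories

    def co_occur(self, visual, auditory):
        """Items co-present in the visual and auditory WMs: create or
        reinforce their association, then inhibit further changes."""
        pair = (visual, auditory)
        if pair not in self.inhibited:
            s = self.strength.get(pair, 0.0)
            self.strength[pair] = min(1.0, s + self.up)
            self.inhibited[pair] = {visual, auditory}

    def dropped(self, item):
        """An item left the work memory: weaken its non-inhibited
        associations; partially remove inhibition from the others."""
        for pair in list(self.strength):
            if item not in pair:
                continue
            if pair in self.inhibited:
                self.inhibited[pair].discard(item)
                if not self.inhibited[pair]:   # both items gone:
                    del self.inhibited[pair]   # inhibition ends
            else:
                self.strength[pair] = max(0.0, self.strength[pair] - self.down)
```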

As shown in Figure 4, associative learning can produce feedback that indirectly affects drives and other behaviors. When an alarm is heard and it is associated with a predator, a new internal stimulus is created, composed of the associated predator, the association strength, and the direction and distance of the alarm, which is used as an approximate location of the predator. This new stimulus affects the fear drive and the fleeing behavior. The fear drive is changed to account for this new information, which gradually changes the fear value:

formula 7

This allows associative learning to produce an escape response even when the predator is not seen. This response is gradually learned, and it constitutes a new action rule associating alarm, predator and subsequent fleeing behavior. The initial response to an alarm is a scanning behavior, which is typically indexical. If the alarm produces an escape response due to its mental association with a predator, our creature is using a symbol.

Creatures in Operation

Animated Frames


Here we presented a methodology to simulate the emergence of symbols through communicative interactions among artificial creatures. We propose that symbols can result from the operation of simple associative learning mechanisms over external stimuli. Experiments show that learner preys are able to establish the correct associations between alarms and predators after being exposed to vocalization events. Self-organizers are also able to converge to a common repertoire, even though there were no pre-defined alarm associations to be learned. Symbol learning and use also provide an adaptive advantage to creatures when compared to the indexical use of alarm calls. (Check results in publications.)


[1] Peirce, C.S. (1931-1958) Collected Papers of Charles Sanders Peirce, 8 volumes, vols. 1-6, eds. Charles Hartshorne and Paul Weiss, vols. 7-8, ed. Arthur W. Burks. Cambridge, Mass.: Harvard University Press.

[2] Queiroz, J. and Ribeiro, S. (2002) The biological substrate of icons, indexes and symbols in animal communication: a neurosemiotic analysis of Vervet monkey alarm-calls. In
M. Shapiro (Ed.), The Peirce Seminar Papers – The State of the Art. Vol. 5. Berghahn Books. pp. 69-78.

[3] Seyfarth, R. and Cheney, D. (1992). Meaning and mind in monkeys. Scientific American 267(6):122-128.

[4] Brooks R. (1991). Intelligence without representation. Artificial Intelligence 47 (1-3),  139–159.

[5] Franklin, S. (1997). Autonomous agents as embodied AI. Cybernetics and Systems 28 (6), 499-520.

End Notes

[*] The work of Charles Sanders Peirce[1] is cited as CP followed by volume and paragraph.
