Evolution of Silence

"Evolution Of Silence" depicts a virtual world populated by artificial creatures. Each creature is assigned a tone based on its DNA, and is represented by a ray of light within a holographic projection. The population of these creatures evolves steadily through an evolutionary algorithm.

The virtual world provides an inaudible key as an environmental condition. The more closely a creature's tone matches the key, the louder its tone sounds, the stronger and broader its ray of light is, and the more attractive it is to other creatures, allowing it to produce many offspring. Conversely, creatures with deviant tones sound quiet, have very thin rays of light, and die relatively quickly.
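
The matching of tone and key can be pictured as a simple fitness value. The sketch below measures the distance in semitones; the 12-semitone window and linear scaling are assumptions for illustration, not the installation's actual code.

    // Illustrative fitness: how closely a tone matches the environment key.
    float fitness(float tone, float keyFreq) {
      float semitones = abs(12 * log(tone / keyFreq) / log(2));
      return constrain(1.0 - semitones / 12.0, 0, 1);  // 1 = perfect match, 0 = far off
    }
    // Loudness, ray width and attractiveness to mates would all scale with this
    // value, e.g. amp = 0.5 * fitness(c.tone, keyFreq).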

Through this process, the soundscape produced by the population evolves from disharmonic chaos into a harmonic structure that increasingly fits the key provided by the environment. After two to three minutes, the environmental key changes randomly, and the soundscape immediately becomes chaotic again. The population then adapts to this new environment over several generations.
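
The periodic key change can be pictured as a simple timer in the sketch's draw loop; the interval and the note range below are assumptions based on the description above.

    // Sketch of the random key change every two to three minutes (assumed range).
    float keyFreq = 440;
    int nextChange;

    void setup() {
      nextChange = millis() + int(random(120, 180)) * 1000;
    }

    void draw() {
      if (millis() > nextChange) {
        // Pick a new chromatic key over two octaves above 110 Hz; the
        // population then has to re-adapt over several generations.
        keyFreq = 110 * pow(2, int(random(0, 24)) / 12.0);
        nextChange = millis() + int(random(120, 180)) * 1000;
      }
    }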

Visitors can influence the evolutionary process by touching the creatures' rays of light, "feeding" them. Weak, quiet creatures with deviant, disharmonic tones and thin rays of light, close to dying, become strong and loud when touched, or "fed". This makes them more attractive to other creatures, allowing them to produce more offspring. Feeding thus slows down the harmonization driven by the evolutionary algorithm and reintroduces disharmonies.
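
Building on the hypothetical Creature sketch above, the "feeding" interaction could be as simple as restoring a touched creature's energy, temporarily overriding the selection pressure; this is an assumption about the mechanism, not the actual code.

    // Hypothetical feeding boost: a touched creature regains full energy and
    // sounds loud again, regardless of how well its tone fits the key.
    void feed(Creature c) {
      c.energy = 1;  // reset remaining lifetime; the creature can mate again
    }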

"Evolution of Silence" is the result of a series of experiments in which I applied evolutionary algorithms to the emergent generation of sound. Following John Cage, "Silence" is interpreted as the occurrence of all the sounds that arise inside an environment through the mere existence of its beings and elements, without any intention attached to those sounds: the artificial beings of the virtual world do not sound to fulfill a musical purpose, they sound simply because they exist. Only the environment, with its progressing evolutionary process, judges every tone and places it in a harmonic context. This context attributes an intention to every tone, an intention that is, however, valid only within that context.

Technical information

The software is written in Java, based on the Processing library. Sound is generated with SuperCollider, which is controlled by the Java application via the great SuperCollider client for Processing created by Daniel Jones. Tracking is realized with a Kinect and Daniel Shiffman's Open Kinect library.
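
For illustration, triggering a tone through the SuperCollider client for Processing looks roughly like the sketch below; it assumes a SynthDef named "sine" with "freq" and "amp" arguments is already loaded on a running scsynth server.

    import supercollider.*;
    import oscP5.*;

    Synth synth;

    void setup() {
      // "sine" must be a SynthDef already defined on the SuperCollider server.
      synth = new Synth("sine");
      synth.set("freq", 440);
      synth.set("amp", 0.3);
      synth.create();   // instantiate the synth node on the server
      // synth.free() releases the node again when the tone should stop.
    }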

List of all libraries used: toxiclibs by Karsten “toxi” Schmidt, DNA by André Sier, Open Kinect by Daniel Shiffman, Processing-SC by Daniel Jones, oscP5 and controlP5 by Andreas Schlegel, Ani by Benedikt Groß