A self-replicating machine is an autonomous construct capable of reproducing itself using raw materials found in its environment, thus exhibiting self-replication in a way analogous to that found in nature. In cellular-automaton terms, a self-replicating automaton reproduces its configuration of states in other regions of the grid after a certain number of time-steps.
The concept of a self-replicating automaton was devised by John von Neumann in the 1940s, but his design was too complex to implement in practice. By introducing some simplifications, Christopher Langton was able to significantly reduce the automaton's complexity. Other self-replicating automata were developed afterwards, such as the Byl and Chou-Reggia loops, which we will also examine because of their simplicity.
Von Neumann cellular automata are the original expression of cellular automata, the development of which was prompted by suggestions made to John von Neumann by his close friend and fellow mathematician Stanislaw Ulam. Their original purpose was to provide insight into the logical requirements for machine self-replication, and they were used in von Neumann's universal constructor.
From A. W. Burks, in his "Essays on Cellular Automata" (1968):
Von Neumann was interested in the general question: What kind of logical organization is sufficient for an automaton to be able to reproduce itself? The question is not precise and admits to trivial versions as well as interesting ones. Von Neumann had the familiar natural phenomenon of self-reproduction in mind when he posed it, but he was not trying to simulate the self-reproduction of a natural system at the levels of genetics and biochemistry. He wished to abstract from the natural self-reproduction problem its logical form.
Von Neumann's approach to the problem of self-reproduction was a classically logico-mathematical one: If self-reproduction is being carried out by a (highly complex) biochemical machine, then that machine's behavior is describable as a logical sequence of steps, i.e. as an algorithm. Now, if an algorithm can be performed by any machine at all, then there is a Turing machine which can perform the same algorithm. If such a Turing machine exists, it is entirely plausible that the processes by which living organisms reproduce themselves, and by implication, other processes on which life itself is based, are algorithmically describable and, therefore, that life itself is achievable by machines (a similar tenet is held by AI researchers with regard to the processes underlying intelligence).
Von Neumann was able to exhibit a universal Turing machine embedded in a cellular array using 29 states per cell and a 5-cell neighborhood. Such a machine is called a universal constructor. His machine will construct any machine described on its input tape and, in addition, will also construct a copy of the input tape and attach it to the machine it has constructed.
The neighborhood (a grouping function) is part of the state-transition function, and defines for any cell the set of other cells upon which the state of that cell depends. All cells transition state synchronously, in step with a universal "clock" as in a synchronous digital circuit. Each cell space can accept any of the 29 states of the rule-set.
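To make the update scheme concrete, below is a minimal Python sketch of one synchronous update step over a grid using the 5-cell von Neumann neighborhood (a cell plus its four orthogonal neighbors). The transition rule used here, a simple two-state parity rule, is only an illustrative stand-in: von Neumann's actual 29-state rule table is far too large to reproduce in a short example.
In [ ]:
import numpy as np

def von_neumann_neighborhood(grid, x, y):
    # Return (centre, top, right, bottom, left) states; cells outside the grid count as state 0.
    h, w = grid.shape
    def at(i, j):
        return grid[i, j] if 0 <= i < h and 0 <= j < w else 0
    return (at(x, y), at(x - 1, y), at(x, y + 1), at(x + 1, y), at(x, y - 1))

def step(grid, rule):
    # Synchronous update: every cell's next state is computed from the *current* grid.
    new_grid = np.zeros_like(grid)
    h, w = grid.shape
    for x in range(h):
        for y in range(w):
            new_grid[x, y] = rule(von_neumann_neighborhood(grid, x, y))
    return new_grid

# Illustrative 2-state rule: a cell becomes the parity (XOR) of its four neighbors.
parity_rule = lambda nbhd: (nbhd[1] + nbhd[2] + nbhd[3] + nbhd[4]) % 2

grid = np.zeros((9, 9), dtype=int)
grid[4, 4] = 1                      # seed a single live cell
for _ in range(4):
    grid = step(grid, parity_rule)
print(grid)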
The question then arises: how simple can a machine become while still retaining the capacity to reproduce itself? The configuration must treat its stored information in the two different manners mentioned above: interpreted, as instructions to be executed (translation), and uninterpreted, as data to be copied (transcription). Langton implemented these functions with a basic structure consisting of a string of cells in state 1 ("core" cells) surrounded by cells in state 2 ("sheath" cells). The core cells transport signals, i.e. the data to be copied, the equivalent of the genetic information of a living organism.
Signals are grouped into sequences, which are treated as instructions to effect certain actions. One such action, when it arrives at the cap at the end of a data-path, results in the data-path being extended by one cell. There is a similar signal sequence for retracting the data-path by one cell. There are also signal sequences which effect extension to the left or right instead of straight ahead, along with associated sequences to effect retraction of a left or right corner in a data-path.
The figure above is the starting configuration of Langton's loop. In it, we observe the sheath cells in red, while the core cells have different colors depending on their function. Every "cyan + black" instruction set extends the construction arm by one cell, so after each cycle of the instructions around the loop, six cells are added to the construction arm. The "yellow + black" instruction set builds a left-hand corner at the end of the arm. So, after every six new cells, the arm turns to the left and continues extending. After four such arm extensions, the arm closes on itself and is detached from its parent.
In a nutshell, Langton's loop consists of a loop of cells containing genetic information, which flows continuously around the loop and out along an "arm" (or pseudopod), which will become the daughter loop. The "genes" instruct it to make three left turns, completing the loop, which then disconnects from its parent.
Langton's Loops run in a CA that has 8 states, and uses the von Neumann neighborhood with rotational symmetry. They are unable to reproduce into the space occupied by another loop. Thus, once a loop is surrounded, it is incapable of reproducing, resulting in a coral-like colony with a thin layer of reproducing organisms surrounding a core of inactive "dead" organisms.
Below there is a Langton's loop. Run the simulation to observe its development and how it reproduces.
In the example below, click on the OpenProcessing logo to go to the website. In the code, open the "Langton" tab. The "langtonFigure" variable (line 10) describes the initial configuration of the loop. Every digit corresponds to a different state, which will be displayed in a different color. The color code is described in a comment above the variable. The "langtonRules" variable (line 36) describes the behaviour of the automaton, i.e. what the next state of a cell will be according to its current state and the states of its neighbours. Each number is one rule. The first digit is the current state of a cell, the next digits are the current states of its neighbours, and the last digit is the state of the cell at the next time-step. This way, the rule-set codifies the actions that are performed to calculate the states of the board at the next time-step.
We have seen how the configuration of a cell and its neighbours may codify an action by defining a rule. The rule makes a cell change its state at the next time-step. Now, we will try changing some rules to analyse their function. In the code, change the values of "langtonRules" and explain how they affect the behaviour of the automaton.
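As a reading aid (not the code behind the OpenProcessing sketch), here is a hedged Python sketch of how rule strings in the format just described could be parsed and applied. Each six-digit rule is read as centre, top, right, bottom, left, next-state (the common CTRBL convention; the exact neighbour ordering used by the sketch is an assumption here), and, because the automaton is rotationally symmetric, each rule is expanded into its four rotations. The rules in toy_rules are made up purely to show the format; they are not part of Langton's actual table.
In [ ]:
def expand_rules(rule_strings):
    # Build a lookup table {(C, T, R, B, L): next} from 6-digit rule strings,
    # adding the four rotations of each rule (rotational symmetry).
    table = {}
    for rule in rule_strings:
        c, t, r, b, l, nxt = (int(d) for d in rule)
        nbrs = [t, r, b, l]
        for k in range(4):                          # rotate the neighbour list
            rot = nbrs[k:] + nbrs[:k]
            table[(c, *rot)] = nxt
    return table

def step(grid, table):
    # Synchronous update: look up every cell's (C, T, R, B, L) key in the table.
    h, w = len(grid), len(grid[0])
    def at(i, j):
        return grid[i][j] if 0 <= i < h and 0 <= j < w else 0
    new_grid = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            key = (at(i, j), at(i - 1, j), at(i, j + 1), at(i + 1, j), at(i, j - 1))
            new_grid[i][j] = table.get(key, at(i, j))   # unknown keys: keep the old state
    return new_grid

# Toy, made-up rules just to show the format, e.g. "000012" means
# "a state-0 cell with neighbours (top 0, right 0, bottom 0, left 1) becomes state 2".
toy_rules = ["000000", "000012", "200022"]
table = expand_rules(toy_rules)
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(step(grid, table))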
In [2]:
%%HTML
<iframe src="https://www.openprocessing.org/sketch/433895/embed/" width="640" height="480"></iframe>
Byl's loop was developed just a few years after Langton's loop. It simplified Langton's automaton, consisting of a loop of 12 cells and 43 transition rules. Essentially, the simplification consisted in using fewer cell states (6 compared with Langton's 8) and a smaller replicating loop (12 cells compared with Langton's 86).
Byl's loop is a simplification of Langton's loop. What are the differences?
Now we will modify its behaviour. In the code, look for the "bylRules" variable in the "Byl" tab and reason about the effect of these changes:
In [4]:
%%HTML
<iframe src="https://www.openprocessing.org/sketch/433896/embed/" width="640" height="490"></iframe>
The Chou-Reggia loop is a further reduction of the loop, obtained by removing the sheath of the automaton.
Look at the example of the Chou-Reggia loop below and explain its differences from the previous loops.
In [5]:
%%HTML
<iframe src="https://www.openprocessing.org/sketch/433897/embed/" width="640" height="480"></iframe>