
I do not know exactly what made me start looking more carefully at simple initial conditions, though I believe that I first systematically generated high-resolution pictures of all the k = 2, r = 1 cellular automata as an exercise for an early laser printer—probably at the beginning of 1984. And I do know that, for example, on June 1, 1984 I printed out pictures of rule 30, rule 110 and the k = 2, r = 2 totalistic code 10 rule (see note below), took them with me on a flight from New York to London, and a few days later was in Sweden talking about randomness in rule 30 and its potential significance. … But cellular automata—and especially 1D ones—make the phenomena particularly clear, which is why, even after investigating all sorts of other systems, 1D cellular automata are still the most common examples that I use in this book.
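The k = 2, r = 1 rules mentioned above are easy to reproduce: each cell's new color is read off from the bits of the rule number, indexed by the three-cell neighborhood. A minimal illustration, written here in Python rather than the book's Mathematica:

```python
# Minimal sketch (not the book's own code) of an elementary (k = 2, r = 1)
# cellular automaton: bit `4a + 2b + c` of the rule number gives the new
# color of a cell whose neighborhood is (a, b, c).

def step(cells, rule):
    """One update of an elementary CA, growing the row by one cell per side."""
    padded = [0, 0] + cells + [0, 0]
    return [(rule >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

def evolve(steps, rule=30):
    """Evolve from a single black cell, returning all rows."""
    rows = [[1]]
    for _ in range(steps):
        rows.append(step(rows[-1], rule))
    return rows

# Print a small triangle of rule 30 behavior
for row in evolve(8):
    print("".join("#" if c else " " for c in row).center(19))
```

Changing `rule=30` to 110 or 90 reproduces the other elementary rules discussed in the book.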
In the 1940s it also became popular to use frequency modulation (FM) Sin[(1 + s[t]) ω t], and in the 1970s pulse code modulation (PCM) (pulse trains for IntegerDigits[s[t], 2]). … Most have characteristic almost perfectly repetitive forms (radar pulses, for example, typically have the chirped form Sin[(1 + α t) ω t])—and some sound uncannily like pulsars.
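The modulation forms quoted above translate directly from the book's Mathematica notation; a sketch in Python:

```python
# Sketch of the signal forms mentioned in the text (Python translation of
# the book's Mathematica expressions).
import math

def fm(s, omega, t):
    """Frequency modulation: sin((1 + s(t)) * omega * t)."""
    return math.sin((1 + s(t)) * omega * t)

def chirp(alpha, omega, t):
    """Chirped pulse: sin((1 + alpha*t) * omega * t); its instantaneous
    frequency rises linearly with t."""
    return math.sin((1 + alpha * t) * omega * t)

def pcm_bits(sample):
    """PCM pulse train: the binary digits of a sample, as in
    IntegerDigits[s[t], 2]."""
    return [int(b) for b in bin(sample)[2:]]
```

With a constant signal s(t) = 0 or α = 0, both forms reduce to a plain carrier sin(ω t), which is the "almost perfectly repetitive" baseline the text describes.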
So long as the so-called Courant condition dt/dx < 1/2 is satisfied, the results are correct.
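To illustrate the kind of scheme such a condition governs, here is a sketch (not the book's code, and assuming a standard 1D wave-equation discretization u_tt = u_xx, which may differ from the note's own equation) of an explicit finite-difference step that enforces the dt/dx < 1/2 bound quoted above:

```python
# Explicit leapfrog step for u_tt = u_xx on a uniform grid; the ratio
# dt/dx controls numerical stability, and the bound quoted in the text
# is checked before stepping. Boundary values are held fixed.

def wave_step(u_prev, u_curr, dt, dx):
    """Advance the field one time step from (u_prev, u_curr)."""
    if dt / dx >= 0.5:
        raise ValueError("Courant condition dt/dx < 1/2 violated")
    r2 = (dt / dx) ** 2
    u_next = u_curr[:]
    for i in range(1, len(u_curr) - 1):
        u_next[i] = (2 * u_curr[i] - u_prev[i]
                     + r2 * (u_curr[i + 1] - 2 * u_curr[i] + u_curr[i - 1]))
    return u_next
```

Above the bound, errors in such explicit schemes grow without limit; below it, they stay bounded and the computed evolution tracks the true one.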
(Constants such as 1 or ∅ can be viewed as 0-argument operators.)
• 1930s: The 3n + 1 problem (see page 904) is posed, and unpredictable behavior is found, but the main focus is on proving a simple result about it.
• Late 1940s and 1950s: Pseudorandom number generators are developed (see page 974), but are viewed as tricks whose behavior has no particular scientific significance.
• Late 1940s and early 1950s: Complex behavior is occasionally observed in fairly simple electronic devices built to illustrate ideas of cybernetics, but is usually viewed as something to avoid.
• 1952: Alan Turing applies computers to studying biological systems, but uses traditional mathematical models rather than, say, Turing machines.
• 1952-1953: John von Neumann makes theoretical studies of complicated cellular automata, but does not try looking at simpler cases, or simulating the systems on a computer.
• Mid-1950s: Enrico Fermi and collaborators simulate simple systems of nonlinear springs on a computer, but do not notice that simple initial conditions can lead to complicated behavior.
• Mid-1950s to mid-1960s: Specific 2D cellular automata are used for image processing; a few rules showing slightly complex behavior are noticed, but are considered of purely recreational interest.
• Late 1950s: Computer simulations of iterated maps are done, but concentrate mostly on repetitive behavior. (See page 918.)
• Late 1950s: Ideas from dynamical systems theory begin to be applied to systems equivalent to 1D cellular automata, but details of specific behavior are not studied except in trivial cases.
• Late 1950s: Idealized neural networks are simulated on digital computers, but the somewhat complicated behavior seen is considered mainly a distraction from the phenomena of interest, and is not investigated.
… They discover various examples (such as "munching foos") that produce nested behavior (see page 871), but do not go further.
• 1962: Marvin Minsky and others study many simple Turing machines, but do not go far enough to discover the complex behavior shown on page 81.
• 1963: Edward Lorenz simulates a differential equation that shows complex behavior (see page 971), but concentrates on its lack of periodicity and sensitive dependence on initial conditions.
• Mid-1960s: Simulations of random Boolean networks are done (see page 936), but concentrate on simple average properties.
• 1970: John Conway introduces the Game of Life 2D cellular automaton (see above).
• 1971: Michael Paterson considers a class of simple 2D Turing machines that he calls worms and that exhibit complicated behavior (see page 930).
• 1973: I look at some 2D cellular automata, but force the rules to have properties that prevent complex behavior (see page 864).
• Mid-1970s: Benoit Mandelbrot develops the idea of fractals (see page 934), and emphasizes the importance of computer graphics in studying complex forms.
• Mid-1970s: Tommaso Toffoli simulates all 4096 2D cellular automata of the simplest type, but studies mainly just their stabilization from random initial conditions.
• Late 1970s: Douglas Hofstadter studies a recursive sequence with complicated behavior (see page 907), but does not take it far enough to conclude much.
• 1979: Benoit Mandelbrot discovers the Mandelbrot set (see page 934) but concentrates on its nested structure, not its overall complexity.
• 1981: I begin to study 1D cellular automata, and generate a small picture analogous to the one of rule 30 on page 27, but fail to study it.
• 1984: I make a detailed study of rule 30, and begin to understand the significance of it and systems like it.
(This is related to the Whitney embedding theorem that any d-dimensional manifold can be embedded in (2d + 1)-dimensional space.)
In the late 1970s it was noted that by evaluating PowerMod[a, n - 1, n] == 1 for several random integers a one can with high probability quickly deduce PrimeQ[n].
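This Fermat-style probabilistic test translates directly to Python, where the three-argument `pow` plays the role of PowerMod. A sketch (not a production primality test — Fermat pseudoprimes such as Carmichael numbers can fool it, which is why practical implementations use refinements like Miller-Rabin):

```python
# Fermat probabilistic primality test: pow(a, n - 1, n) corresponds to
# PowerMod[a, n - 1, n]. A result other than 1 proves n composite; a
# result of 1 for several random a makes n prime with high probability.
import random

def probably_prime(n, trials=20):
    if n < 2:
        return False
    if n in (2, 3):
        return True
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # a is a Fermat witness: n is certainly composite
    return True
```

Each modular exponentiation costs only about Log[n] multiplications, which is what makes the test quick even for very large n.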
Elementary (k = 2, r = 1) cellular automata can be found only up to separations s = 2.
For if its charge were distributed over a sphere of radius r, this was expected to lead to an electrostatic repulsion energy proportional to 1/r. … In the 1980s superstring theory introduced the idea that particles might actually be tiny 1D strings—with different types of particles corresponding essentially just to strings in different modes of vibration.
Note that the equation can also be written R_μν = 8 π G/c^4 (T_μν - 1/2 T^μ_μ g_μν).) … At a physical level, the full Einstein equations can be interpreted as saying that the volume v of a small ball of comoving test particles satisfies ∂tt v[t]/v[t] == -1/2 (ρ + 3 p), where ρ is the total energy density and p is the pressure averaged over all space directions.
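The equivalence of this trace-reversed form to the usual Einstein equations follows from a short, standard trace computation, sketched here:

```latex
% Start from the Einstein equations
G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu}
  = \frac{8 \pi G}{c^4} \, T_{\mu\nu}
% Contract with g^{\mu\nu}, using g^{\mu\nu} g_{\mu\nu} = 4:
R - 2 R = -R = \frac{8 \pi G}{c^4} \, T ,
  \qquad T \equiv T^{\mu}{}_{\mu}
% Substitute R = -\tfrac{8\pi G}{c^4} T back into the first equation:
R_{\mu\nu} = \frac{8 \pi G}{c^4}
  \left( T_{\mu\nu} - \tfrac{1}{2} \, T \, g_{\mu\nu} \right)
```

The sign flip in the trace term is what distinguishes the two forms: the source of R_μν is the energy-momentum tensor with half its trace subtracted.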