
Beginning as early as the 1700s, the foundations of statistical analysis have been vigorously debated, with a succession of fairly specific approaches being claimed as the only ones capable of drawing unbiased conclusions from data.
Since geometrical figures cannot be drawn with infinite accuracy, this seemed the only way to establish anything with certainty.
But for example the basic notion of programmability seems at some level quite easy even for young children to grasp—even though historically it appeared only recently.
But already the sequence of values for x^2 y - x y^3 or even x(y^2 + 1) seems quite complicated. And for example from the fact that x^2 = y^2 + x y ± 1 has solutions Fibonacci[n] it follows that the positive values of (2 - (x^2 - y^2 - x y)^2) x are just Fibonacci[n] (achieved when {x, y} is Fibonacci[{n, n - 1}]). … But I suspect that in the end it will take only a surprisingly simple polynomial, perhaps with just three variables and fairly low degree.
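The quoted Fibonacci identity is easy to check by brute force. The sketch below (a Python illustration, not from the source, which uses Mathematica notation) enumerates the positive values of (2 - (x^2 - y^2 - x y)^2) x over a grid of positive integers and compares them with the Fibonacci numbers:

```python
def positive_values(limit):
    # Collect the positive values of (2 - (x^2 - y^2 - x*y)^2) * x
    # for positive integers x, y up to the given limit.
    vals = set()
    for x in range(1, limit + 1):
        for y in range(1, limit + 1):
            v = (2 - (x * x - y * y - x * y) ** 2) * x
            if v > 0:
                vals.add(v)
    return vals

# Fibonacci numbers up to the same limit, for comparison.
fibs, a, b = set(), 1, 1
while a <= 100:
    fibs.add(a)
    a, b = b, a + b

print(sorted(positive_values(100)))  # 1, 2, 3, 5, 8, ...
```

The value is positive only when (x^2 - y^2 - x y)^2 is 0 or 1, and the only positive integer solutions of x^2 - y^2 - x y = ±1 are consecutive Fibonacci pairs, so the enumeration recovers exactly the Fibonacci numbers up to the search limit.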
And in the 1960s, particularly after Frank Rosenblatt 's introduction of perceptrons, neural networks were increasingly used only as systems for specific visual and other tasks (see page 1076 ). … But by the mid-1990s it was becoming clear that—probably in large part as a consequence of reliance on methods from traditional mathematics—typical neural network models were mostly being successful only in situations where what was needed was a fairly straightforward extension of standard continuous probabilistic models of data.
In its original form the metric becomes singular at radius 2 G m/c^2 (or 3 m km with m in solar masses). … In 1960 it was realized, however, that appropriate coordinates allowed smooth continuation across the event horizon—and that the only genuine singularity was infinite curvature at a single point at the center. … If one assumes that space is both homogeneous and isotropic then it turns out that only ordinary flat Minkowski space is allowed.
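The quoted radius is straightforward to verify numerically. A brief Python sketch (the constants are standard reference values, not taken from the excerpt):

```python
# Schwarzschild radius r_s = 2 G m / c^2; for m in solar masses this
# comes out to roughly 3 m kilometers, as the note states.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def schwarzschild_radius_km(m_solar):
    return 2 * G * m_solar * M_sun / c**2 / 1000

print(schwarzschild_radius_km(1))  # about 2.95 km
```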
But following the work of Isaac Newton in the late 1600s it increasingly came to be believed that systems could only meaningfully be described by the mathematical equations they satisfy, and not by any explicit mechanism or rules.
P completeness: If one allows arbitrary initial conditions in a cellular automaton with nearest-neighbor rules, then to compute the color of a particular cell after t steps in general requires specifying as input the colors of all 2t + 1 initial cells up to distance t away (see page 960 ). And if one always does computations using systems that have only nearest-neighbor rules then just combining 2t + 1 bits of information can take up to t steps—even if the bits are combined in a way that is not computationally irreducible.
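The light-cone bound behind this note can be illustrated directly: in a nearest-neighbor rule a perturbation travels at most one cell per step, so after t steps a cell's color depends only on the 2t + 1 initial cells within distance t. A minimal Python sketch, using elementary rule 30 purely as an example (the excerpt does not name a rule):

```python
def step(cells, rule=30):
    # One step of an elementary (nearest-neighbor) cellular automaton
    # on a cyclic array of 0/1 cells.
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def evolve(cells, t):
    for _ in range(t):
        cells = step(cells)
    return cells

t = 5
width = 4 * t + 1        # wide enough that wrap-around never reaches the center
center = width // 2
base = [0] * width
base[center] = 1

# A flip farther than t from the center cannot affect the center cell
# after t steps; a flip at distance exactly t can (and for rule 30,
# whose update XORs in the left neighbor, a flip t cells to the left does).
far = base[:];  far[center - t - 1] ^= 1
near = base[:]; near[center - t] ^= 1
print(evolve(base, t)[center], evolve(far, t)[center], evolve(near, t)[center])
```

Since rule 30's update is left XOR (center OR right), a perturbation propagates rightward at exactly one cell per step, which is why the flip at distance t is guaranteed to reach the center while the flip at distance t + 1 cannot.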
And at a formal level, the two cases are so similar that in studying partial differential equations one often starts with an equation, and only later tries to work out whether initial or boundary values are needed in order to get either any solution or a unique solution.
One point to notice is that the sharp change which characterizes any phase transition can only be a true discontinuity in the limit of an infinitely large system. … When the total number of cells increases, however, the fraction of such configurations rapidly decreases, and in the infinite size limit, there are no such configurations, and a truly discontinuous transition occurs exactly at density 1/2.
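The sharpening of the transition with system size can be seen in a simple probabilistic estimate (a hedged Python sketch, not the specific cellular automaton the note describes): if each of n cells is black independently with probability p < 1/2, the fraction of configurations that nevertheless show a black majority shrinks rapidly as n grows, so only in the infinite-size limit does the behavior change discontinuously at density 1/2.

```python
from math import comb

def wrong_side_fraction(n, p):
    # Fraction of n-cell random configurations (each cell black with
    # probability p < 1/2) that nevertheless have a black majority.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in [11, 101, 1001]:
    print(n, wrong_side_fraction(n, 0.45))
```

By the law of large numbers this fraction decays exponentially in n, which is the sense in which the sharp change becomes a true discontinuity only for an infinitely large system.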