Notes

Chapter 12: The Principle of Computational Equivalence

Section 8: Undecidability and Intractability


[Classes of] fast algorithms

Most of the fast algorithms now known seem to fall into a few general classes. The most common are ones based on repetition or iteration, classic examples being Euclid's algorithm for GCD (page 915), Newton's method for FindRoot and the Gaussian elimination method for LinearSolve. Starting in the 1960s it began to be realized that fast algorithms could be based on nested or recursive processes, and such algorithms became increasingly popular in the 1980s. In most cases, the idea is recursively to divide data into parts, then to do operations on these parts, and finally to reassemble the results. An example is the algorithm of Anatolii Karatsuba from 1961 for finding products of n-digit numbers (with n = 2^s) by operating on their digits in the nested pattern of page 608 (see also page 1093) according to

First[f[IntegerDigits[x, 2, n], IntegerDigits[y, 2, n], n/2]]

f[x_, y_, n_] := If[n < 1, x y, g[Partition[x, n], Partition[y, n], n]]

g[{x1_, x0_}, {y1_, y0_}, n_] := With[{z1 = f[x1, y1, n/2], z0 = f[x0, y0, n/2]}, z1 2^(2n) + (f[x0 + x1, y0 + y1, n/2] - z1 - z0) 2^n + z0]
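
Assuming the definitions of f and g above have been entered, the recursion can be checked against ordinary multiplication; with x = 171 and y = 205 as arbitrary 8-bit examples (n = 8 = 2^3), the following should yield True:

With[{n = 8, x = 171, y = 205}, (* arbitrary 8-bit test values *) First[f[IntegerDigits[x, 2, n], IntegerDigits[y, 2, n], n/2]] == x y]

The repetitive class mentioned above can similarly be illustrated by Euclid's algorithm written as an explicit iteration (a minimal sketch only, not the method actually used by the built-in GCD):

gcd[a_, b_] := First[NestWhile[{Last[#], Mod @@ #} &, {a, b}, Last[#] != 0 &]] (* illustrative only; gcd is a hypothetical name *)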

Other examples include the fast Fourier transform (page 1074) and related algorithms for ListConvolve, the quicksort algorithm for Sort, and many algorithms in fields such as computational geometry. Starting in the 1980s fast algorithms based on randomized methods (see page 1192) have also become popular. But particularly from the discoveries in this book, it seems likely that the very fastest algorithms for many kinds of problems will not in the end have the type of regular structure that characterizes almost all algorithms currently used.
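
The nested, divide-and-conquer structure of such methods can be seen in a minimal quicksort sketch (an illustration only, not the algorithm actually used by the built-in Sort; qsort is a hypothetical name):

qsort[{}] = {}; (* illustrative recursive quicksort *)
qsort[{p_, r___}] := Join[qsort[Select[{r}, # < p &]], {p}, qsort[Select[{r}, # >= p &]]]

so that, for example, qsort[{3, 1, 4, 1, 5, 9, 2, 6}] gives {1, 1, 2, 3, 4, 5, 6, 9}.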



From Stephen Wolfram, A New Kind of Science (2002).