Notes

Chapter 10: Processes of Perception and Analysis

Section 7: Visual Perception


Halftoning

In printed books like this one, gray levels are usually obtained by printing small dots of black with varying sizes. On displays consisting of fixed arrays of pixels, gray levels must be obtained by having only a certain density of pixels be black. One way to achieve this is to break the array into 2^n × 2^n blocks, then successively to fill in pixels in each block until the appropriate gray level is reached, as in the pictures below, in an order given for example by

Nest[Flatten2D[{{4 # + 0, 4 # + 2}, {4 # + 3, 4 # + 1}}] &, {{0}}, n]
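
Flatten2D is not defined in this note; it is assumed here to assemble a grid of equal-sized matrix blocks into a single matrix. A minimal sketch consistent with that usage is

(* assumed helper: joins a grid of equal-sized matrix blocks into one matrix *)
Flatten2D[a_] := Apply[Join, Map[MapThread[Join, #] &, a]]

With this definition, n = 2 makes the expression above evaluate to the 4×4 ordered-dither matrix {{0, 8, 2, 10}, {12, 4, 14, 6}, {3, 11, 1, 9}, {15, 7, 13, 5}}.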

An alternative to this so-called ordered dither approach is the Floyd–Steinberg or error-diffusion method invented in 1976. This scans the data sequentially, accumulating and spreading total gray level, and generating a black pixel whenever a threshold is exceeded. The method can be implemented using

Module[{a = Flatten[data], r, s}, {r, s} = Dimensions[data]; Partition[Do[a[[i + {1, s - 1, s, s + 1}]] += m (a[[i]] - If[a[[i]] < 1/2, 0, 1]), {i, r s - s - 1}]; Map[If[# < 1/2, 0, 1] &, a], s]]
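
As a usage sketch (the wrapper name graylevelDither and the test data are illustrative, not from the book; data is assumed to be a matrix of gray levels between 0 and 1, with 1 meaning black):

(* hypothetical wrapper around the code above, parameterized by the weight vector m *)
graylevelDither[data_, m_] := Module[{a = Flatten[data], r, s}, {r, s} = Dimensions[data]; Partition[Do[a[[i + {1, s - 1, s, s + 1}]] += m (a[[i]] - If[a[[i]] < 1/2, 0, 1]), {i, r s - s - 1}]; Map[If[# < 1/2, 0, 1] &, a], s]]

(* a 64×64 field of 30% gray; roughly 30% of the output pixels come out black *)
ArrayPlot[graylevelDither[Table[0.3, {64}, {64}], {7, 3, 5, 1}/16]]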

In its original version m = {7, 3, 5, 1}/16, as in the first row of pictures below. But even with m = {1, 0, 1, 0}/2 the method generates fairly random patterns, as in the second row below. (Note that significantly different results can be obtained if different boundary conditions are used for each row.)
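
Assuming the graylevelDither wrapper sketched above, the two weight choices can be compared side by side on a uniform 50% gray field:

(* original Floyd-Steinberg weights versus the simplified {1, 0, 1, 0}/2 weights *)
ArrayPlot[graylevelDither[Table[0.5, {64}, {64}], #]] & /@ {{7, 3, 5, 1}/16, {1, 0, 1, 0}/2}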

To give the best impression of uniform gray, one must in general minimize features detected by the human visual system. One simple way to do this appears to be to use nested patterns like the ones below.
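
The particular nested patterns shown in the book are not reproduced here, but as an illustrative sketch (not the book's construction), a nested pattern representing a uniform 50% gray can be built with the same assumed Flatten2D helper:

(* a 32×32 nested pattern in which exactly half the pixels are black at every nesting step *)
ArrayPlot[Nest[Flatten2D[{{1 - #, #}, {#, 1 - #}}] &, {{1}}, 5]]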
