Thanks to Kat for this, from oztam.com

People often do not have a good sense of the limitations of sample-based research. Warren Cordell, chief statistical officer at Nielsen for many years, devised a wonderful visual explanation for [the United States] Congress, which went as follows. The picture (below) comprises several hundred thousand tiny dots (the population).

The three smaller pictures contain 250, 1,000 and 2,000 dots (the samples). They are ‘area probability’ samples of the original picture, because the dots are sampled in proportion to their distribution in the original. If we think of homes [or persons, consumers] instead of dots, this is the sampling method used for most media research studies.

Now move back 30 inches or so. When the eye stops trying to read the dots, even the smallest sample provides a recognisable picture (you can use top-line data). But you would have trouble picking her out of a group of women based on the 250-dot sample (do not try reading demographic breaks). At 1,000 dots, if you squint to read the pattern of light and dark, you would recognise her in a group (now you can read major demographics). At 2,000 dots, you see her more clearly – but the real improvement is between 250 and 1,000 – an important point. In sampling, the ability to see greater detail is a ‘squared function’ – it takes four times as large a sample to see twice the detail. This is the strength and weakness of sample-based research. You get the general picture cheap, but precision costs a bundle.
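The ‘squared function’ can be made concrete with the standard error of a sample proportion, which shrinks with the square root of the sample size. A minimal sketch (the function name and the 0.5 worst-case proportion are illustrative choices, not from the original):

```python
import math

def standard_error(p: float, n: int) -> float:
    """Standard error of a sample proportion p estimated from n observations."""
    return math.sqrt(p * (1 - p) / n)

# 95% margin of error (z ≈ 1.96) for sample sizes like those in the text,
# assuming a proportion of 0.5, the worst case for precision.
for n in (250, 1000, 4000):
    moe = 1.96 * standard_error(0.5, n)
    print(f"n={n:5d}  margin of error ≈ ±{moe:.1%}")

# Quadrupling the sample halves the error — the 'squared function' in action.
assert abs(standard_error(0.5, 4000) - standard_error(0.5, 1000) / 2) < 1e-12
```

Going from 250 to 1,000 dots cuts the margin of error roughly in half (about ±6.2% to about ±3.1%), which is why that step produces the biggest visible improvement in the picture.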