Summary of the Video
A good pizza, says the opening teaser, begins with a good recipe. The same is true of good data—an important aspect of statistics is recipes (called designs) for producing data.
The distinction between observation and experiment is vital in science. We look first at an observational study of the complex behavior of lobsters. Having learned from observation that lobsters appear to rely heavily on smell, we can try an experiment to test this: add food coloring to make the water opaque, removing sight, and observe how well the lobsters function when smell alone guides them.
An experiment involves intervening, not just observing. The experimenter imposes some treatment in order to see the response. Now we look at a major medical experiment, the Physicians' Health Study (PHS). The subjects were 20,000 male physicians over age 40. The goal was to see if taking aspirin regularly reduces the risk of a heart attack. Observational studies suggested that aspirin can help; a well-designed experiment gives much more solid data. Half the subjects took aspirin and the other half took a placebo, a dummy pill. The experiment was double-blind—neither the subjects nor the medical personnel who treated them knew who was really taking aspirin. Just being in a study can lead people to change their habits, so if all take aspirin, the effects of aspirin are confounded (mixed up) with the effect of being in the study. The comparison of two groups that are treated exactly alike except for the content of the pill they take avoids confounding.
How should we form the two groups? Accepting volunteers or allowing the experimenters to choose who gets aspirin opens the door to bias. So we let impersonal chance assign subjects to groups. Now we have a randomized comparative experiment. The PHS found that the aspirin group suffered fewer heart attacks. After five years, 104 of the aspirin group and 189 of the placebo group had suffered heart attacks. The study was stopped because it no longer seemed ethical to give a placebo once there was good evidence that aspirin was effective. Dr. Charles Hennekens, the study director, explains that medical experiments with human subjects are only ethical if there is a balance of doubt and promise: enough promise that the new treatment is beneficial to justify giving it, but enough doubt to forbid simply giving it to all patients.
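The counts reported above can be turned into a simple rate comparison. As a minimal sketch, assuming (for illustration only) that the roughly 20,000 subjects were split evenly into two groups of 10,000 each:

```python
# Heart attack counts after five years, as reported in the summary above.
aspirin_attacks = 104
placebo_attacks = 189

# Assumption for illustration: half of the 20,000 subjects in each group.
group_size = 10_000

aspirin_rate = aspirin_attacks / group_size
placebo_rate = placebo_attacks / group_size

print(f"Aspirin group rate: {aspirin_rate:.2%}")  # 1.04%
print(f"Placebo group rate: {placebo_rate:.2%}")  # 1.89%
# Ratio of the two rates (here just 104/189, since group sizes are equal):
print(f"Relative risk: {aspirin_rate / placebo_rate:.2f}")
```

With equal group sizes, the aspirin group's heart-attack rate is a bit more than half the placebo group's—the kind of clear difference that led the study to be stopped early.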
After comments on the outcome of the PHS, animated graphics show how to do the random assignment of subjects to treatments. In principle, we could draw names from a hat. In practice, we use a table of random numbers. The conclusion emphasizes the basic statistical ideas for designing experiments: comparison, randomization, and repetition on enough subjects so that the systematic effects of the treatments can be seen clearly.
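The random assignment described above—drawing names from a hat, or using a table of random numbers—can be sketched in code; in practice, software now plays the role of the random-number table. The function name and subject labels here are illustrative, not from the video:

```python
import random

def randomize(subjects, seed=None):
    """Randomly split subjects into two equal treatment groups.

    This plays the role of the hat or the random-number table:
    impersonal chance, not the experimenter, decides who gets
    which treatment.
    """
    rng = random.Random(seed)
    shuffled = list(subjects)  # copy so the input list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

# Example: assign 10 hypothetical subjects to aspirin or placebo.
subjects = [f"subject-{i}" for i in range(10)]
aspirin_group, placebo_group = randomize(subjects, seed=42)
```

Every subject lands in exactly one group, and each equal-sized split is equally likely—this is what lets systematic treatment effects stand out from chance variation when the experiment is repeated on enough subjects.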