Doherty explores how misinformation spreads through the media, focusing particularly on commonly quoted statistical errors.

Despite all the rhetoric from Thomas Jefferson down to the latest self-​important musings of journalists about journalism’s being the first, best hope for a healthy polity, your newspaper is lying to you. While assuring you that it provides precise information about public policy issues, in many cases it is only pushing speculation and rumor in the guise of fact. Most of the time you have no independent way to confirm its claims, so how can you tell when a newspaper is lying?

Here’s a hint: watch out for the numbers. Newspapers are filled with contextless reports of the latest things government officials have said or decided. But newspapers do like to throw in a number now and then to add verisimilitude to the tales they tell.

Knowledge of the media’s inability to get it straight, especially when dealing with numbers and statistics, has become widespread enough to inspire a widely reviewed book—Tainted Truth: The Manipulation of Fact in America by Cynthia Crossen. It has also given rise to a new magazine, the quarterly Forbes MediaCritic, the latest addition to the Forbes family of publications.

While ideologues of all persuasions like to blame media inaccuracies on political biases, the causes of journalism’s troubles are, unfortunately, inherent in the way daily newspapers, those first drafts of history, are written: hurriedly and by generalists who, even if they are unfailingly scrupulous (which can’t always be assumed), are often ignorant of the topics on which they write and depend blindly on what others tell them—and what others tell them is very often biased. Worse, those first drafts of history are all most laypersons read.

The Problems with Numbers

Our intellectual culture is drunk on numbers, addicted to them: we need them in every situation, we feel utterly dependent on them. As sociologist Richard Gelles aptly put it in a July 25, 1994, Newsweek story on the media’s problems with numbers, “Reporters don’t ask, ‘How do you know it?’ They’re on deadline. They just want the figures so they can go back to their word processors.” The culture of the poll dominates: the foolish notion that not only every fact but every thought, whim, and emotion of the populace can be stated in scientifically valid and valuable numbers.

The lust for numbers can, at its best, lead people to do hard research and dig up interesting and useful information. More often, however, it leads to dignifying guesses with misleadingly precise numbers. For example, it wasn’t enough to know that people were dying in Somalia; as Michael Maren reports in the Fall 1994 Forbes MediaCritic, reporters felt it necessary to latch onto some relief workers’ guesses and repeat them over and over, only occasionally letting slip honest acknowledgments that no one really knew how many people were actually dying and that no one was taking the trouble to attempt accurate counts.

The obsession with numbers leads to particularly egregious errors in reports on economic figures and aggregates and the federal budget. Those errors include calling spending that falls short of what had been planned a “cut” in spending, even if more is being spent than the year before; relying on static economic analysis, especially when calculating the effects of tax increases and their concomitant revenues (because they assume that people do not change their behavior when their taxes are raised, members of Congress and reporters make grievously wrong predictions about expected revenues); and relying uncritically on numerical tools such as the Consumer Price Index.
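
The first two errors are easy to see in a bit of arithmetic. The sketch below uses invented figures (a $110 billion planned baseline, a 10-point rate increase, a 15 percent behavioral shrinkage of the tax base); it is meant only to show how baseline accounting turns an increase into a “cut” and how a static revenue estimate overshoots once behavior changes.

    # All figures below are invented for illustration.
    last_year = 100.0   # billions actually spent last year
    planned   = 110.0   # billions the earlier budget plan called for
    actual    = 105.0   # billions actually spent this year

    print(f"Change vs. last year:        {actual - last_year:+.1f}B")  # +5.0: spending grew
    print(f"Change vs. planned baseline: {actual - planned:+.1f}B")    # -5.0: reported as a "cut"

    # Static vs. behavior-aware revenue estimates for a hypothetical rate hike.
    old_rate, new_rate = 0.30, 0.40
    tax_base = 1000.0                                  # billions, before the hike
    static_gain = (new_rate - old_rate) * tax_base     # assumes the base never moves
    shrunken_base = tax_base * 0.85                    # suppose the base shrinks 15%
    dynamic_gain = new_rate * shrunken_base - old_rate * tax_base

    print(f"Static revenue estimate:  +{static_gain:.0f}B")
    print(f"With behavioral response: +{dynamic_gain:.0f}B")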

Especially during the 1992 election, “quintile” analysis of the effects of the Reagan-​Bush years on income and tax-​burden equality abounded, with hardly any explanation of the complications of such analyses. Those complications include the fact that when people in lower income quintiles become richer, they often move into a higher quintile rather than buoy the average of the lower one. Yet income added to the highest quintile can do nothing but increase that quintile’s average income. That creates a misleading picture of the rich getting much richer while the poor stagnate.

Quintile analysis is also static, but income mobility is common in America, so it’s not always the same people who languish in lower quintiles or whoop it up at the top. And quintile analysis often relies on households, not individuals—the top quintile can have more than 20 percent of Americans, the bottom less than 20 percent. But all of those complications are overlooked in the media’s craving for numbers to toss around.
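
A toy calculation makes the mobility point concrete. The ten incomes below are invented; the sketch compares the bottom quintile’s average across two snapshots with the incomes of the particular people who started in that quintile, whose large gains carry them out of the group being averaged.

    import numpy as np

    # Ten invented incomes (thousands of dollars); person i earns year1[i], then year2[i].
    year1 = np.array([10, 12, 15, 18, 22, 30, 40, 55, 80, 200], dtype=float)
    year2 = np.array([35, 28, 16, 19, 23, 32, 42, 58, 95, 250], dtype=float)
    # The two poorest people roughly triple their incomes and climb out of the
    # bottom fifth; everyone else gains modestly.

    def bottom_fifth_mean(incomes):
        ranked = np.sort(incomes)
        return ranked[: len(incomes) // 5].mean()

    snapshot1 = bottom_fifth_mean(year1)        # 11.0
    snapshot2 = bottom_fifth_mean(year2)        # 17.5: whoever is poorest *now*
    started_at_bottom = np.argsort(year1)[:2]   # the people who began in the bottom fifth
    followed = year2[started_at_bottom].mean()  # 31.5: how those same people actually fared

    print(f"Bottom-quintile average, snapshot to snapshot:  {snapshot1:.1f} -> {snapshot2:.1f}")
    print(f"Average income of the people who started there: {snapshot1:.1f} -> {followed:.1f}")

The snapshot statistic tracks a category, not people, which is exactly the distinction the quintile stories skipped.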

The media even ignore the fact that “counts” of macroeconomic variables can change retroactively—1993 data on 1992 quantities can be different from 1994 data. As an example, in 1993 the Bureau of Labor Statistics listed Arkansas as the state with the highest percentage rise (3 percent) in nonfarm employment from July 1991 to July 1992. Candidate Clinton touted that percentage in campaign ads. But by March 1994 the facts had changed. Although Arkansas was then thought to have had a 3.7 percent rise in employment during the 1991-92 period, it ranked behind Montana’s 4.22 percent and Idaho’s 4.21 percent.

Macroeconomic aggregates, such as gross national product, on which the media often rely for numerical ballast, are often riddled with conceptual problems, such as that of counting as additions to our national product any cash transactions, including the classic example of neighbors’ paying each other to mow each other’s lawns, and ignoring any noncash transaction that adds to economic well-being. Other economic numbers bandied about by the media, such as unemployment rates, job growth, and the “cost” of various tax increases or cuts, are often derived from random samplings, self-reported information, and guesswork. Economics is a study of human action, not of numbers; the press’s overdependence on frequently dubious aggregates helps disguise the problem and muddles readers’ understanding of what economics—and prosperity—is really about.

Where Do the Numbers Come From?

There are many ways to mislead while allegedly presenting accurate counts or measures to the public. The most sinister is to simply make up numbers or offer completely bald-faced guesses. That happens more often than you might think. The demand for information has far outstripped the supply. Coming up with reliable numbers to support all the things that journalists want to say and the public wants to know is often prohibitively expensive, in money or effort, or both. So guesswork fills the gap, and the misuse and misunderstanding of numbers lead to erroneous reporting.

The total number of breast cancer victims has become a matter of much concern since the National Cancer Institute and the American Cancer Society frightened the world with the declaration that American women face a one-​in-​eight chance of contracting breast cancer. That scary figure, however, applies only to women who have already managed to live to age 95; one out of eight of them will most likely contract breast cancer. According to the NCI’s own figures, a 25-​year-​old woman runs only a 1-in-19,608 risk.
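
The arithmetic behind the gap between those two figures is simple, even if the real incidence data are not. The sketch below uses invented age-specific rates, chosen only so the lifetime total comes out near one in eight; the by-age-25 number it prints is an artifact of those invented rates, not the NCI’s figure. The point is that the same schedule of annual risks yields both a headline-sized lifetime figure and a tiny figure for a young woman.

    import numpy as np

    # Invented annual incidence rates by age band; NOT real data, chosen only
    # so that the cumulative lifetime figure lands near one in eight.
    annual_rate = np.concatenate([
        np.full(40, 0.000002),  # ages 0-39
        np.full(20, 0.0015),    # ages 40-59
        np.full(35, 0.0030),    # ages 60-94
    ])

    def cumulative_risk(rates):
        # Chance of at least one diagnosis = 1 - chance of escaping every year.
        return 1.0 - np.prod(1.0 - rates)

    print(f"Cumulative risk by age 95: {cumulative_risk(annual_rate):.3f}")       # ~0.13, about 1 in 8
    print(f"Cumulative risk by age 25: {cumulative_risk(annual_rate[:25]):.6f}")  # ~0.00005, about 1 in 20,000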

Those very precise cancer-risk figures are themselves based on a phony notion: that we know how many people have breast or any other cancer. As two journalists concerned about cancer admitted in the Nation (September 26, 1994), “Not only is there no central national agency to report cancer cases to … but there is no uniform way that cases are reported, no one specialist responsible for reporting the case.” So any discussion of cancer rates in the United States is based on guesswork, and one can only hope that the guesswork is based on some attempt to be true to the facts as they are known.

In the case of other health threats, such as AIDS, we know that isn’t the case. In The Myth of Heterosexual AIDS, journalist Michael Fumento documented the discrepancy between the rhetoric about the plague-​like threat of AIDS to the non-​gay and non-​drug-​using populace and official statistics on the actual prevalence of the syndrome, which indicated that no more than 0.02 percent of people who tested HIV positive were not in those risk groups. (And even such heterosexual AIDS cases as are recorded run into a self-​reporting problem: many people may not want to admit to anyone that they have had gay sex or used drugs.) As Fumento explained, projections of the future growth of the AIDS epidemic (even ones that were not hysterical pure guesses tossed out by interest groups) were often based on straight extrapolations of earlier doubling times for the epidemic (which inevitably—for any disease—lead to the absurd result of everyone on the planet and then some dying of the disease) or cobbled together from guess piled on guess. Even when the Centers for Disease Control would lower earlier estimates on the basis of new information, or make clearly unofficial speculations about higher numbers, journalists would continue to report the higher and more alarming numbers.
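
The absurdity of straight doubling-time extrapolation takes only a few lines to demonstrate. The starting count and doubling time below are invented, not drawn from any CDC projection; the point is that any fixed doubling time, extended indefinitely, swallows the planet within a couple of decades.

    # Invented starting point: 100,000 cases doubling once a year, forever.
    cases = 100_000
    doubling_time_years = 1.0
    world_population = 5.6e9   # rough mid-1990s world population

    years = 0.0
    while cases < world_population:
        cases *= 2
        years += doubling_time_years

    print(f"Naive extrapolation overtakes the entire world in about {years:.0f} years")
    # Sixteen doublings suffice with these numbers; real epidemics slow down,
    # which is why early doubling times cannot simply be extended forward.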

In the case of figures about AIDS in Africa, even the most basic numbers are not to be trusted. Journalist Celia Farber documented in Spin magazine how African health officials inflate the number of deaths from the complications of AIDS, both because AIDS cases attract foreign aid money, whereas traditional African disease and death do not, and because there is no accurate method of counting.

One relief worker told Farber that counts of children orphaned by AIDS in an African village “were virtually meaningless, I made them up myself … then, to my amazement, they were published as official figures in the WHO [World Health Organization] … book on African AIDS… . The figure has more than doubled, based on I don’t know what evidence, since these people have never been here… . If people die of malaria, it is called AIDS, if they die of herpes it is called AIDS. I’ve even seen people die in accidents and it’s been attributed to AIDS. The AIDS figures out of Africa are pure lies.”

In his autobiography, novelist Anthony Burgess gives further insight into the generation of “official” figures. He tells of creating completely fraudulent records of the classes he supposedly taught fellow soldiers while stationed in Gibraltar during World War II. His bogus “statistics were sent to the War Office. These, presumably, got into official records which nobody read.” For the sake of accuracy, we can only hope so. But if a journalist got hold of those numbers, he’d be apt to repeat them.

Similarly farcical figures are taken completely seriously by journalists. For example, activist Mitch Snyder’s assertion that the United States suffered the presence of 3 million homeless people became common wisdom for the bulk of the 1980s. Snyder’s figure was made up; he simply assumed that 1 percent of Americans were homeless to get an initial number of 2.2 million in 1980, then arbitrarily decided that since he knew the problem was getting worse, the number would hit 3 million by 1983. He claimed to be working from extrapolations based on reports from fellow homeless activists around the country, but there was no counting, no surveying, no extrapolation behind his assertion. And yet most major American newspapers reported the number; it became part of our received cultural wisdom.

In her recent book, Who Stole Feminism? How Women Have Betrayed Women, Christina Hoff Sommers actually tried to track to their sources numbers spread by feminist activists. One of the much-reported stories she debunked was that 150,000 women a year die of anorexia, which an outraged Gloria Steinem reported in her popular book Revolution from Within. Steinem cited another popular feminist tome by Naomi Wolf as her source; Wolf cited a book about anorexia written by a women’s studies academic, which cited the American Anorexia and Bulimia Center. Sommers actually checked with that group and discovered that all they’d said was that many women are anorexic. Oops.

Another feminist canard is that domestic violence is responsible for more birth defects than all other causes combined. Time and many newspapers had ascribed that finding to a March of Dimes report. Sommers tracked the assertion back through three sources, beginning with the Time reporter, and discovered that it was the result of a misunderstanding of something that had been said in the introduction of a speaker at a 1989 conference—no such March of Dimes report existed. Still, the errors of Time and the Boston Globe and the Dallas Morning News are in more clip files and data banks than is Sommers’s debunking. The march of that particular error will doubtless continue.

A third famous feminist factoid is that Super Bowl Sunday sees a 40 percent rise in cases of wife beating. That claim, said to be supported by a university study, was made in an activist press conference. (The story was also spread by a group ironically named Fairness and Accuracy in Reporting.) Similar claims began coming from other sources. Ken Ringle of the Washington Post took the time to double-​check them and found that the university study’s authors denied that their study said any such thing and that the other sources that claimed to have independent confirmation of the “fact” refused to disclose their data. When a concerned activist makes up a number, few bother to be skeptical, and credulous reporting tends to drown out the few debunkers.

Unfortunately, erroneous numbers in journalism are not always the result of sincere attempts to quantify the relevant data. If you can’t imagine someone’s making the effort to really count something, and if you can imagine any reason for the source’s having an ulterior motive, best take the number with a large grain of salt. This is not a call for ad hominem attacks; it is merely a warning about when to look especially askance at numbers. Even when one is following what seems on its face to be defensible standards of sample and extrapolation, ludicrous results can ensue. For example, Robert Rector of the Heritage Foundation wrote that 22,000 Americans below the poverty line had hot tubs, and many conservative publications uncritically trumpeted the figure. But Rector’s figure was “extrapolated” from one case in a survey sample. It’s disingenuous to claim that because one poor family in a sample of 10,000 has a hot tub, 22,000 poor families have hot tubs.
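
How little one observation in a sample of 10,000 can support is easy to quantify. The sketch below computes a standard Clopper-Pearson 95 percent interval for a proportion estimated from a single hit; it is illustrative arithmetic, not a reconstruction of the survey behind the hot-tub figure.

    from scipy.stats import beta

    # One family with a hot tub out of 10,000 surveyed (illustrative setup only).
    k, n, alpha = 1, 10_000, 0.05
    point = k / n
    lower = beta.ppf(alpha / 2, k, n - k + 1)      # Clopper-Pearson lower bound
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k)  # Clopper-Pearson upper bound

    print(f"Point estimate: {point:.6f}")
    print(f"95% interval:   {lower:.7f} to {upper:.6f}")
    print(f"Relative to the point estimate: {lower/point:.2f}x to {upper/point:.2f}x")
    # The interval runs from roughly a fortieth of the point estimate to more
    # than five times it, so a headline count of 22,000 extrapolated this way
    # could just as plausibly have been a few hundred or well over 100,000.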

Another example of numbers being attached to the uncounted, and probably uncountable, is the debate over species extinctions. Economist Julian Simon has explained that the conventionally accepted figures on the number of species disappearing yearly are based on no counts and no extrapolations from past knowledge; they are based on guesses about the current rate of extinction, and that rate is arbitrarily increased to produce the frightening number of 40,000 per year. Norman Myers, one of the leading promulgators of that figure, admits that “we have no way of knowing the actual current rate of extinction in tropical forest, nor can we even make an accurate guess.” Yet he is willing to make guesses about future rates.

Another much-touted scare figure, on workplace violence, was recently debunked in the pages of the Wall Street Journal. Reporter Erik Larson found that reports and statistics on the prevalence of workplace violence were shoddy or misleading in various respects. One report, which concluded that workers have a one-in-four chance of being attacked or threatened at work, was based on the replies of only 600 workers, who represented only 29 percent of the people whom the survey had tried to reach, which made the respondents largely self-selected within the original sample. Statisticians frown, with reason, on self-selected samples, which are very likely to be biased.
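
A rough bound shows how much a 29 percent response rate can hide. The calculation below is illustrative only, not a reconstruction of the actual survey: it simply asks what the overall rate would be if none of the nonresponders had been affected, or if all of them had.

    # Illustrative bounding arithmetic, not a reconstruction of the actual survey.
    responders    = 600
    response_rate = 0.29
    observed_rate = 0.25                        # "one in four" among those who answered

    targeted      = responders / response_rate  # roughly 2,069 people contacted
    nonresponders = targeted - responders

    low  = observed_rate * responders / targeted                    # if no nonresponder was affected
    high = (observed_rate * responders + nonresponders) / targeted  # if every nonresponder was

    print(f"Overall rate consistent with the raw numbers: {low:.0%} to {high:.0%}")
    # Anything from about 7 percent to nearly 80 percent fits, which is why
    # self-selected samples earn statisticians' distrust.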

Larson also found that a Bureau of Labor Statistics report, which said that homicide is the second most frequent cause of death in the workplace, far from referring to coworkers or disgruntled ex-coworkers blasting away at their comrades, showed that three-quarters of the deaths occurred during robberies, and that many others involved police or security guards, whose jobs obviously are dangerous. But the media, and an industry of self-serving workplace violence consultants, inspired by half-understood studies and vivid memories of crazed postal workers, created an aura of offices as the Wild, Wild West that caught the imagination of many. In this case, data were not so much bogus or warped as wildly misinterpreted.

Checking the Checkers

It might seem paradoxical to condemn journalists for incessantly parroting errors when it is journalists themselves who occasionally expose errors. After all, who else would? The problem is, they don’t do it nearly enough, and no one else ever does. Even though Larson’s story appeared in the October 13, 1994, Wall Street Journal, it’s a given that many other writers and TV reporters will have missed it and sometime in the future will again parrot false suppositions about the danger of mortal violence in the workplace.

The culture of journalism is based on the principle of the citation or quote: if someone else said it, or wrote it, it’s okay to repeat it. Almost any editor or writer would scoff at that brash formulation. After all, journalists pride themselves on their withering skepticism, their credo of “if your mother says she loves you, check it out.” But the reader would be terribly naive to believe that journalists, under the crush of daily deadlines, under the pressure of maintaining long-​term relationships with sources, and occasionally under the spell of ideology, always meet that standard. In the future, you can count on it, someone will go back to some story about workplace violence, or the homeless, or wife beating, written before the debunking was done, and come to an incorrect conclusion. Dogged checking of sources is rare indeed.

I recently was intrigued by a figure in our self-​styled paper of record, the New York Times. In an October 25 article about the miserable state of Iraq after years of international embargo, the author, Youssef M. Ibrahim, stated that, according to UNICEF, “in the last year there has been a 9 percent rise in malnutrition among Iraqi infants.”

That figure struck me as somewhat absurd, a foolhardy attempt to assert precise knowledge in a situation where obtaining it would be extremely difficult, if not impossible. I tried to track the figure back to its source through the UNICEF bureaucracy. (There is a practical reason why many journalists end up accepting things at face value: the tracking of figures, especially through international bureaucracies, can be harrying and time-consuming indeed.) I was rewarded; although my initial supposition—that any alleged count was of dubious value—is probably true, I discovered that the “paper of record” couldn’t even read the UNICEF report right.

What UNICEF had actually said, with even more absurd precision, was that the total rate of—not the increase in—malnutrition among infants under one year old was 9.2 percent—a figure that seems shockingly low for an essentially Third World country suffering under an international embargo. It turned out that the survey was not done by UNICEF, as the Times had reported, but by UNICEF in collaboration with the government of Iraq—as almost anything done in Iraq probably must be. Precise figures from lands with tyrannical governments should never be trusted. And it should be remembered that in any hierarchy, even if the person at the top doesn’t have the literal power of life and death over those on the bottom, there’s a general tendency to tell those higher up only what they want to hear.

Given the preceding examples, you’d think that constant checking and rechecking of the sources of claims would be the rule in journalism. Unfortunately, it is not. Nor, apparently, is it in science. In Betrayers of the Truth, William Broad and Nicholas Wade reported on fraud and deceit—and acceptance of the same—in the scientific establishment. They found that, like journalism’s conceit about checking on whether your mother loves you, science’s conceit of being built on an elaborate system of cross checking and confirming the results of others is mostly a myth. Hardly anyone ever checks what other people claim to have found or done.

All too often readers assume that everyone is doing his work scrupulously and well, but unfortunately, that’s not always the case, as Broad and Wade, Sommers, Fumento, Larson, Farber, and others have shown. Readers should be much more skeptical than they are.

Almost every time I read a newspaper story about a topic of which I have personal knowledge, or about an event that I’ve witnessed, I find errors—sometimes in minor details, sometimes in key ones. Almost everyone I’ve asked about this says the same. But our knowledge of journalistic error in a few specific cases doesn’t translate into a strong general skepticism.

Total skepticism is probably impossible. But greater awareness of the sorts of errors journalists tend to make can only help. Watch out for macroeconomic aggregates; try to figure out where huge counts are coming from and how they are being made; try to check the methodology and phrasing of polls; check on the self-​interest of the groups that promulgate scary numbers; and remember that scary stories make great copy and should be mistrusted all the more for that reason.

If journalism were merely entertainment, this wouldn’t be so important. But despite how bad they are at it, journalists’ conceit about their key role in public policy is, unfortunately, true. Bad information can only lead to bad policy. The first step in an intelligent approach to public policy is to get the facts as straight as we can, even when we don’t have precise numbers.

This article originally appeared in the January/February 1995 edition of Cato Policy Report.