The Substantial Costs and Minimal Benefits of False Balance

Image credit: Colin Harris, via Flickr. Image shared under a Creative Commons license.

Good reporters strive to write balanced stories, presenting all sides of an issue in as unbiased a way as possible. But this can be controversial in science reporting when the overwhelming body of evidence suggests that one viewpoint is, well, wrong.

For example, some people believe that global climate change is a hoax and that vaccines do more harm than good. But the vast majority of scientific evidence tells us that climate change is real and that vaccines offer enormous health benefits with very little risk.

For the purposes of this post, I’ll define “false balance” as when a reporter presents two (or more) opposing viewpoints in a story, even if one of those viewpoints is not supported by evidence. What are the ramifications of false balance? Does it make a reporter seem more credible as a source of information, because all viewpoints are represented? Does it confuse readers – even if reporters make clear that one viewpoint is unsupported by scientific evidence?

A recent study from researchers Katherine Kortenkamp and Bianca Basten at the University of Wisconsin-La Crosse explored the issue, and the findings are interesting. A paper on the work, “Environmental Science in the Media: Effects of Opposing Viewpoints on Risk and Uncertainty Perceptions,” was published online March 10 in the journal Science Communication. [Note: While I’m presenting an overview of the work here, I encourage you to check out the paper itself – it’s a well-written and engaging read.]

Background

Do we still have to acknowledge flat earth advocates? Image via Wikimedia Commons.

The abstract sums the issue up neatly: “Media reports of environmental science often give equal weight to opposing viewpoints, which can make the science seem more controversial than it actually is.”

Reporters often try to address this issue by offering an opposing viewpoint, but making clear that the opposing viewpoint has been largely discredited. Reporters can do this in a variety of ways. For example, a reporter may note that the viewpoint is held by a distinct minority (i.e., very few people believe it), that the person presenting the viewpoint has a clear conflict of interest, or that there are no facts to support the viewpoint.

But a reporter’s efforts to both offer and discredit an opposing viewpoint are complicated by something called the “continued influence effect.” As Kortenkamp and Basten explain in their paper: “Research has shown that individuals find it difficult to ignore information that is given to them but then subsequently disconfirmed.” That’s the continued influence effect.

The Study

The researchers conducted three experiments. In each experiment, study participants were asked to read one of three mocked-up news stories about a fictional environmental science subject that may have human health risks. The fictional science subject was different in each experiment. But in all three experiments, some participants received a version of the story presenting only one scientific viewpoint; some received a version presenting two balanced viewpoints; and some received a version with two viewpoints, one of which was “discredited.”

In each experiment, after reading the news story, study participants were asked to answer questions regarding risk perceptions, risk probability, scientific uncertainty, the credibility of the scientists in the story, and the credibility of the journalist who wrote the story.

Study participants were 247 college students.

Results

The researchers came into the study with one research question and four hypotheses.

Hypothesis One: “The presence of an opposing scientific viewpoint, whether discredited or not, would cause participants to view the science as more uncertain than if only one scientific viewpoint was presented in a news story.”

Verdict: The hypothesis holds up. Presenting an opposing viewpoint made participants think that there was less certainty in the scientific community about health risks – even if the story made clear that one of the viewpoints was not reliable.

Hypothesis Two: “Participants would perceive discredited scientists as less credible and more biased than nondiscredited scientists in news stories containing unbalanced opposing scientific viewpoints.”

Verdict: This one’s interesting. In two of the three experiments, participants thought the nondiscredited scientist was more credible than the discredited scientist. But in the third experiment, discrediting one of the scientists in the story had no effect on the credibility ratings of either scientist.

Even more remarkable, in my opinion, was that scientists whose views were presented without opposition (i.e., the scientists in the single-viewpoint stories) were perceived as more credible than the nondiscredited scientists in stories where two viewpoints were presented and one was discredited.

In other words, introducing a second viewpoint makes the first scientist seem less credible – even if the second viewpoint is clearly discredited.

Hypothesis Three: “Risk perceptions for stories with unbalanced opposing viewpoints (in which one is discredited) would be more similar to those with balanced opposing viewpoints than to those with only one viewpoint.”

Verdict: In two of the three experiments, participants’ “risk perceptions in response to discredited viewpoint stories were more similar to their risk perceptions in response to balanced stories than [to single viewpoint] stories.” In the third experiment, risk perceptions in response to discredited stories split the difference between single viewpoint stories and balanced stories.

Hypothesis Four: “The effects of discrediting on participants’ risk perceptions and credibility ratings would be moderated by their environmental beliefs.”

Verdict: Nope. There was no significant effect based on a participant’s environmental beliefs.

The Research Question: “Would [study] participants view the journalist as more credible and objective when opposing scientific viewpoints were presented compared to when only one viewpoint was presented in a news story?”

Verdict: While reporters were seen as being more biased if they only presented one viewpoint, this had no effect on their perceived credibility. As the paper states: “journalists’ credibility ratings [as a source of information about the environmental risk in the story] were not influenced by the presence or absence of opposing viewpoints.”

In short, as Kortenkamp and Basten note, “It seems that participants viewed bias and credibility as somewhat independent constructs.”

My Thoughts

This study is relatively small, and did not use a nationally representative group of participants. But the findings are important (and I hope someone does a larger, nationally representative follow-up study).

In short, the study indicates that science/health reporters who incorporate false balance into a story in order to be seen as impartial are likely to confuse their readers about the actual state of the science – and will not be seen as any more credible by their readers.

Impartiality is an essential component of good journalism. So are objectivity, accuracy, and critical thinking.

At some point, reporters stopped noting there is a minority viewpoint that the world is flat or that the sun revolves around the Earth. Does that mean those reporters are biased? A reasonable person would say no.

Where is the line? At what point does scientific consensus reach the threshold where we no longer have to pay homage to opposing viewpoints? I don’t know. Reporters and editors will have to draw their own conclusions.

But as heated debate over issues such as vaccination and climate change continues, I hope they’re thinking about it.

6 thoughts on “The Substantial Costs and Minimal Benefits of False Balance”


  4. Bob Roehr

    I’m much less impressed with the “study,” which I cannot read because it is behind a pay wall. But it seems to assume that the reader comes to the article as a blank slate, without existing opinions on the topic (or at least there is no baseline evaluation of those views in the study) and the article is the sole source of information on the topic.

    To me, it seems at least as likely that the survey outcomes represent the participants’ existing views prior to reading the article as they represent effect from reading the article.

    It is a sorry state when a science writer, writing for others interested in science, looks so uncritically at a piece of research and then promotes it.


  5. I agree that paywalls are a pain in the neck. However, as I noted in the post above, the researchers did address whether “The effects of discrediting on participants’ risk perceptions and credibility ratings would be moderated by their environmental beliefs.”

    In other words, the study did test whether (as you put it) “the survey outcomes represent the participants’ existing views prior to reading the article.”

    And what the researchers found was that the study participants’ previous beliefs (or prior existing views) did not have a significant effect on their responses.

    Of course, you may have meant that the study didn’t account for whether participants already found scientists to be credible. That doesn’t really apply in the context of the study, since it was evaluating whether participants found a scientist to be more or less credible depending on how a (fake) news story presented information. In other words, it wasn’t measuring credibility as an absolute value, but as a relative value.

    All in all, I don’t think this study is the final word on anything (is any study ever the final word on a subject?). As I wrote above, the study has significant limitations: “This study is relatively small, and did not use a nationally representative group of participants.” But that doesn’t mean it’s not interesting. And I do hope that someone does a larger, nationally representative follow-up study.


  6. Bob Burk

    I have no problem with the concept of climate change. Decades ago in college we were told the climate was changing and this would affect future agricultural production. My problem whenever I see an article such as this is that I recall the story of Alfred Wegener and continental drift. In 1912 Wegener proposed the concept of continental drift. The scientists of the day were appalled that anyone would dare question their established viewpoint. You were allowed to add to their accepted facts, but not conceive of something totally new. The mainstream scientists did everything they could to discredit Wegener. They threatened boycotts against magazines that published Wegener’s theories and mocked him at any meetings he was allowed to present at. Textbooks did not mention his concept as “further discussion of it merely incumbers the literature and befogs the mind of fellow students.”
    Sound familiar?
    So yes, the climate is (always) changing. The questions are, how much is due to humans; and how much should we pay to try and alter the change? My first question can have a measurable answer. The second question is highly subjective.

