Good reporters strive to write balanced stories, presenting all sides of an issue in as unbiased a way as possible. But this can be controversial in science reporting if the overwhelming body of evidence suggests that one viewpoint is, well, wrong.
For example, some people believe that global climate change is a hoax and that vaccines do more harm than good. But the vast majority of scientific evidence tells us that climate change is real and that vaccines offer enormous health benefits with very little risk.
For the purposes of this post, I’ll define “false balance” as when a reporter presents two (or more) opposing viewpoints in a story, even if one of those viewpoints is not supported by evidence. What are the ramifications of false balance? Does it make a reporter seem more credible as a source of information, because all viewpoints are represented? Does it confuse readers – even if reporters make clear that one viewpoint is unsupported by scientific evidence?
A recent study from researchers Katherine Kortenkamp and Bianca Basten at the University of Wisconsin-La Crosse explored the issue, and the findings are interesting. A paper on the work, “Environmental Science in the Media: Effects of Opposing Viewpoints on Risk and Uncertainty Perceptions,” was published online March 10 in the journal Science Communication. [Note: While I’m presenting an overview of the work here, I encourage you to check out the paper itself – it’s a well-written and engaging read.]
The abstract sums the issue up neatly: “Media reports of environmental science often give equal weight to opposing viewpoints, which can make the science seem more controversial than it actually is.”
Reporters often try to address this issue by offering an opposing viewpoint, while making clear that the opposing viewpoint has been largely discredited. Reporters can do this in a variety of ways. For example, a reporter may note that the viewpoint is held by a distinct minority (i.e., very few people believe it), may note that the person presenting the viewpoint has a clear conflict of interest, or may present the viewpoint and subsequently note that there are no facts to support it.
But a reporter’s efforts to both offer and discredit an opposing viewpoint are complicated by something called the “continued influence effect.” As Kortenkamp and Basten explain in their paper: “Research has shown that individuals find it difficult to ignore information that is given to them but then subsequently disconfirmed.” That’s the continued influence effect.
The researchers conducted three experiments. In each experiment, study participants were asked to read one of three mocked-up news stories about a fictional environmental science subject that may have human health risks. The fictional science subject was different in each experiment. But in all three experiments, some participants received a version of the story presenting only one scientific viewpoint; some received a version presenting two balanced viewpoints; and some received a version with two viewpoints, one of which was “discredited.”
In each experiment, after reading the news story, study participants were asked to answer questions regarding risk perceptions, risk probability, scientific uncertainty, the credibility of the scientists in the story, and the credibility of the journalist who wrote the story.
Study participants were 247 college students.
The researchers came into the study with one research question and four hypotheses.
Hypothesis One: “The presence of an opposing scientific viewpoint, whether discredited or not, would cause participants to view the science as more uncertain than if only one scientific viewpoint was presented in a news story.”
Verdict: The hypothesis holds up. Presenting an opposing viewpoint made participants think that there was less certainty in the scientific community about health risks – even if the story made clear that one of the viewpoints was not reliable.
Hypothesis Two: “Participants would perceive discredited scientists as less credible and more biased than nondiscredited scientists in news stories containing unbalanced opposing scientific viewpoints.”
Verdict: This one’s interesting. In two of the three experiments, participants thought the nondiscredited scientist was more credible than the discredited scientist. But in the third experiment, discrediting one of the scientists in the story had no effect on the credibility ratings of either scientist.
Even more remarkable, in my opinion, was that scientists whose views were presented without opposition (i.e., the scientists in the single-viewpoint stories) were perceived as more credible than the nondiscredited scientists in stories where two viewpoints were presented and one was discredited.
In other words, introducing a second viewpoint makes the first scientist seem less credible – even if the second viewpoint is clearly discredited.
Hypothesis Three: “Risk perceptions for stories with unbalanced opposing viewpoints (in which one is discredited) would be more similar to those with balanced opposing viewpoints than to those with only one viewpoint.”
Verdict: In two of the three experiments, participants’ “risk perceptions in response to discredited viewpoint stories were more similar to their risk perceptions in response to balanced stories than [to single viewpoint] stories.” In the third experiment, risk perceptions in response to discredited stories split the difference between single viewpoint stories and balanced stories.
Hypothesis Four: “The effects of discrediting on participants’ risk perceptions and credibility ratings would be moderated by their environmental beliefs.”
Verdict: Nope. There was no significant effect based on a participant’s environmental beliefs.
The Research Question: “Would [study] participants view the journalist as more credible and objective when opposing scientific viewpoints were presented compared to when only one viewpoint was presented in a news story?”
Verdict: While reporters were seen as more biased if they presented only one viewpoint, this had no effect on their perceived credibility. As the paper states: “journalists’ credibility ratings [as a source of information about the environmental risk in the story] were not influenced by the presence or absence of opposing viewpoints.”
In short, as Kortenkamp and Basten note, “It seems that participants viewed bias and credibility as somewhat independent constructs.”
This study is relatively small and did not use a nationally representative group of participants. But the findings are important (and I hope someone does a larger, nationally representative follow-up study).
The upshot: the study indicates that science/health reporters who incorporate false balance into a story in order to be seen as impartial are likely to confuse their readers about the actual state of the science – and will not be seen as any more credible by their readers.
Impartiality is an essential component of good journalism. So are objectivity, accuracy, and critical thinking.
At some point, reporters stopped noting there is a minority viewpoint that the world is flat or that the sun revolves around the Earth. Does that mean those reporters are biased? A reasonable person would say no.
Where is the line? At what point does scientific consensus reach the threshold where we no longer have to pay homage to opposing viewpoints? I don’t know. Reporters and editors will have to draw their own conclusions.
But as heated debate over issues such as vaccination and climate change continues, I hope they’re thinking about it.