Imagine that you can only know what you discover by yourself through trial and error. There are no websites, no books, no teachers, not even other people to talk to. How much could you know about the world? Do you think you could figure out which chemical molecules compose your food? What was happening in another country? What the climate was like 30 years ago? How to hunt or farm? As philosopher John Hardwig says, “we are irredeemably epistemically dependent on each other.” That is to say, just about everything we know about the world we learn from other people. We depend on other people for knowledge.
This is particularly true as the world gets more complex. There’s simply too much for one person to know on their own, and so people specialize in areas of knowledge. Hence, we have chemists, physicists, doctors, lawyers, geologists, mechanics, and accountants, to name but a few—and, of course, philosophers. Just about everything you know about the world you learned from someone else. This is a good strategy, too: no one has the time or energy to earn a PhD in every single domain of human knowledge.
Sometimes, though, a problem emerges: it’s not always clear who the experts are on a particular issue, or two people who appear to be experts disagree. How does the non-expert identify the genuine experts? Whom should they defer to?
In this post I do two things. First, drawing on a model from cultural anthropology, I explain the strategies that people use to identify experts. Second, I explain how anti-science propagandists manipulate these strategies to confuse the public.
[For more information and educational resources on science denialism and anti-science propaganda visit my separate growing website dedicated to the topic]
II. Heuristics for Identifying Experts
Boyd and Richerson’s (1985) model of cultural learning explains, among other things, how and when we defer to others. Their costly information hypothesis holds that
when accurate information is unavailable or too costly [for individuals to learn something on their own], individuals may exploit information stored in the behavior and experience of other members of their social group. (Henrich & McElreath 2003; italics mine)
To place this in our contemporary information environment: rather than earn a PhD in every domain, it’s much easier to investigate what experts in that domain believe.
Notice that the scope of ‘social group’ is rather nebulous. However, under conditions of high social sorting and polarization, we can expect ‘social group’ to be defined rather narrowly. The greater the distance between social groups, the less likely members from one group are to defer to members or institutions from another. For example, in the US where the population is more socially polarized than at any other point in the nation’s history (Mason 2018), it’s unlikely that members of one political identity will defer (on a politicized issue) to a member or institution perceived as belonging to the other group.
For example, right-wing partisans are unlikely to defer to climate scientists because they don’t see them as members of their own group. This distrust shows up in attitudes toward universities and university professors: about 60% of Republicans think universities have a negative effect on the country, while 67% of Democrats think they have a positive effect. And 19% of Republicans have no confidence at all in university professors to act in the public interest, with another 33% reporting not too much confidence (by contrast, 26% of Democrats say they have a great deal of confidence and 57% a fair amount) (Pew 2018).
In our information environment, scientific knowledge is costly in that it requires a lot of effort to obtain through individual trial and error. As I’ve said, a citizen cannot pursue a PhD in every field. It follows from the costly information hypothesis that on scientific matters citizens will engage in social learning and defer. Once they’ve decided to learn from others, various contextual cues bias them toward learning from one subgroup or individual rather than another. Adaptive information is embodied both in who holds ideas and in how common those ideas are. These in turn underpin the prestige bias and the frequency bias, respectively (Boyd & Richerson 1985).
The prestige bias is actually a proxy for the success bias (i.e., defer to the person most successful at a task). When ranking individuals by outcome in a particular skill or activity is too difficult, individuals “use aggregate indirect measures of success, such as wealth, health, or local standards” (Henrich & McElreath 2003; italics mine). Because prestige is only an indirect measure of skill, it will often be unclear which of a revered individual’s many traits led to their (perceived) success.
To situate this in our current world, the fact that an individual has a media presence (prestige) may mislead many into believing that the individual is an expert in a domain when in fact they aren’t. The prestige bias explains why so many people (erroneously) defer to celebrities on health issues. Similarly, because prestige is defined by local standards, people in a socially sorted and polarized society will likely not defer to experts outside their own group. It follows that, under such conditions, many will defer to the wrong experts on complex politicized empirical issues.
The prevalence of the success and prestige biases creates pressure for success-biased learners to pay deference to those they perceive as highly skilled (Henrich & McElreath 2003). The spread of deference-type behaviors means that naive entrants “may take advantage of the existing patterns of deference by using the amounts and kinds of deference different models receive as cues for underlying skill” (ibid.). So, local (i.e., ingroup) standards of prestige combine with patterns of deference to give (fallible) signals to non-experts about who the experts are.
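To make the prestige bias concrete, here is a minimal Python sketch. The demonstrators, their skill levels, and the cue weights are illustrative assumptions on my part, not parameters from Boyd and Richerson’s formal model; the point is only that a learner ranking models by indirect cues can end up copying a prestigious non-expert.

```python
# Minimal sketch of prestige-biased learning. All names and numbers
# are illustrative assumptions, not values from Boyd & Richerson.
# True domain skill is unobservable; the learner sees only indirect
# prestige cues: wealth, media presence, and deference received.

demonstrators = [
    # (name, true_skill, wealth, media_presence, deference_received)
    ("domain_expert", 0.9, 0.4, 0.2, 0.5),
    ("celebrity",     0.2, 0.9, 0.9, 0.8),
    ("neighbor",      0.5, 0.3, 0.1, 0.2),
]

def prestige_score(wealth, media, deference):
    """Aggregate the indirect cues a learner can actually observe."""
    return wealth + media + deference

# A success-biased learner would copy by true skill; a prestige-biased
# learner copies by the proxy score instead.
by_skill    = max(demonstrators, key=lambda d: d[1])
by_prestige = max(demonstrators, key=lambda d: prestige_score(d[2], d[3], d[4]))

print("success bias copies: ", by_skill[0])      # -> domain_expert
print("prestige bias copies:", by_prestige[0])   # -> celebrity
```

When the cues track skill, the shortcut works well; when media presence decouples from domain skill, the learner defers to the celebrity.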
Once again, we can see how these patterns instantiate themselves, and fail, in our current environment. On politicized issues, non-experts in a socially sorted and polarized society will defer to different experts and institutions based on ingroup prestige standards and patterns of deference. Few partisans, if any, will defer to individuals or institutions that their outgroup perceives as experts. On partisan issues where there is a consensus of genuine experts, one group will likely defer to the wrong individuals and institutions despite their perceptions to the contrary.
The success and prestige biases do not solve every costly information problem. In our current environment this problem emerges when two purported experts on either side of an issue both work at prestigious universities or institutions and/or both have a media presence. Whom should the non-expert believe?
The successful heuristic in these situations is to copy the behaviors, beliefs, and strategies of the majority (ibid.). In information-poor environments (with respect to who has relative prestige or success), the conformist/frequency bias is a successful strategy. The conformist bias is so pervasive that it is an even more common form of learning than vertical transmission (i.e., parent to child) (ibid.).
Again, like all heuristics, these can be maladaptive, depending on the environment. In a highly socially sorted and polarized society, the conformity bias will likely apply only to the behaviors and beliefs of one’s ingroup rather than those of the outgroup. If a majority in one group holds false or improbable beliefs, the frequency bias predicts that other members will defer to that majority.
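Here is a minimal Python sketch of that failure mode. The population sizes, sample size, and starting belief shares are illustrative assumptions; the point is that conformist transmission amplifies whatever the ingroup majority already believes, regardless of whether it is true.

```python
import random

# Minimal sketch of the conformist/frequency bias under high social
# sorting: agents sample peers from their OWN group only and adopt
# the majority belief in the sample. All parameters are illustrative.
random.seed(42)
N, SAMPLE, STEPS = 100, 5, 50

# True = holds the (actually correct) belief.
group_a = [random.random() < 0.7 for _ in range(N)]  # starts 70% correct
group_b = [random.random() < 0.3 for _ in range(N)]  # starts 30% correct

def conformist_step(group):
    """Each agent copies the majority belief among SAMPLE ingroup peers."""
    return [sum(random.sample(group, SAMPLE)) > SAMPLE / 2 for _ in group]

for _ in range(STEPS):
    group_a = conformist_step(group_a)
    group_b = conformist_step(group_b)

print("group A share holding the correct belief:", sum(group_a) / N)  # ~1.0
print("group B share holding the correct belief:", sum(group_b) / N)  # ~0.0
```

Both groups use the same heuristic; only their starting majorities differ. That is the sense in which the bias is adaptive in one informational environment and maladaptive in another.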
III. Belief Polarization and Trust
Using a mathematical model developed by Venkatesh Bala and Sanjeev Goyal, Cailin O’Connor and James Weatherall (2019) investigated a similar issue. They modeled how scientific communities converge on or polarize over beliefs in order to study how belief polarization occurs and how misinformation spreads.
One motivation behind the project is that the scientific community adheres to rigorous epistemic norms (relative to lay people), so if some set of variables can cause belief polarization and misinformation in these communities, it is bound to occur in the general population. The Bala-Goyal model is built on the Bayesian idea that we update our credence levels when others share new information with us.
An important finding aligns with what I suggested might occur in a sorted and polarized society. Their models found that when subgroups within a community distrust each other, they appraise evidence differently depending on its source. That is, evidence from a trusted (i.e., ingroup) source can move credence levels one way, while the same evidence from a distrusted source can move them in the other direction!
This makes sense. If you believe that a lab or scientist is corrupt, then it is reasonable to assume they’ve fabricated or manipulated their results and to revise your credence levels in the other direction. The end result is stable belief polarization within the community: One subgroup converges on the correct view while the other converges on the false one. The greater the mistrust, the larger the faction that settles on the false belief. This occurs because “those who are skeptical of the better theory are precisely those who do not trust those who test it” (O’Connor and Weatherall 2019). The group converging on false beliefs becomes insensitive to countervailing evidence.
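A toy version of this dynamic is easy to simulate. The sketch below is my own simplification, not O’Connor and Weatherall’s exact model: agents hold a credence that action B beats action A, agents who favor B test it and share their results, and each agent weights the resulting Bayesian update by trust, which falls off with the distance between its own credence and the source’s. Once that weight goes negative, the same evidence pushes credence the other way. The parameters (mistrust rate, trial counts, and so on) are illustrative assumptions.

```python
import random

random.seed(1)

# Toy Bala-Goyal-style model with trust, loosely following O'Connor &
# Weatherall; the update rule and all parameters are my simplifying
# assumptions. Agents hold a credence p that action B beats action A.
# B truly succeeds with probability 0.5 + EPS, so high credence is correct.
N_AGENTS, TRIALS, ROUNDS = 20, 10, 200
EPS = 0.1        # B's true edge over A
MISTRUST = 2.5   # how fast trust decays with belief distance

credences = [random.random() for _ in range(N_AGENTS)]

def bayes_posterior(p, successes, trials):
    """Posterior credence that B is better, given B's binomial results."""
    good = p * (0.5 + EPS) ** successes * (0.5 - EPS) ** (trials - successes)
    bad = (1 - p) * (0.5 - EPS) ** successes * (0.5 + EPS) ** (trials - successes)
    return good / (good + bad)

for _ in range(ROUNDS):
    # Agents who already favor B (p > 0.5) test it and share the results.
    evidence = [(p, sum(random.random() < 0.5 + EPS for _ in range(TRIALS)))
                for p in credences if p > 0.5]
    updated = []
    for q in credences:
        for source_p, successes in evidence:
            posterior = bayes_posterior(q, successes, TRIALS)
            # Trust shrinks with belief distance; past a threshold it turns
            # negative and the same evidence moves credence the OTHER way.
            weight = 1 - MISTRUST * abs(q - source_p)
            q = min(1.0, max(0.0, q + weight * (posterior - q)))
        updated.append(q)
    credences = updated

print("agents near 1 (correct belief):", sum(p > 0.9 for p in credences))
print("agents near 0 (false belief):  ", sum(p < 0.1 for p in credences))
```

With settings like these, the community typically splits into a faction near certainty in the true belief and a faction locked onto the false one; the mistrust parameter controls how large the false faction ends up.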
Several important conclusions follow from their Bayesian models that incorporate social trust. First, “models of polarization […] strongly suggest that psychological biases [such as confirmation bias] are not necessary for polarization to result” (ibid., p. 76; italics mine). Second, while distrust can cause us to dismiss relevant evidence, too much trust can also lead us astray, “especially when agents in a community have strong incentives to convince you of a particular view” (ibid., p. 77).
IV. How and Why Anti-Science Propaganda Works
If you’ve followed the argument so far, you should be starting to see how anti-science propaganda works. One important strategy is to manipulate trust through the prestige bias: anti-science propagandists present their own experts as prestigious. Most importantly, though, for anti-science propaganda to work, propagandists must diminish the prestige of, and therefore trust in, genuine experts.
This is why every single anti-science propaganda campaign targets academic institutions, high-profile public scientists, and public regulatory institutions like the FDA, EPA, and CDC. Go through the list of prominent denialist movements (anthropogenic global warming deniers, anti-GMO advocates, anti-vaxxers, evolution deniers) and you will quickly identify this pattern. People will only trust the denialist experts if they can be convinced to distrust legitimate experts.
[Note: Some people will invariably, and correctly, point to instances where the above institutions or a consensus of experts got it wrong. Yes, it’s true: institutions and experts make mistakes. However, what matters from the point of view of the non-expert is the relative error rate of the candidates for deference. Compared to other individuals or institutions, which is more likely to get it right? This is a long and complicated topic that I can’t do justice to here, but for now, thinking about relative error rates is one way to conceive of erroneous deferrals.]
Propagandists also exploit the frequency bias. When the information environment is unclear with respect to what to believe (usually because it has been deliberately muddied by denialist propaganda), people will defer to the most common belief in their information environment. As a historical illustration, the tobacco industry for years took out full-page ads in major newspapers proclaiming that “the science isn’t settled” (sound familiar?) on the link between smoking and cancer. The intent was to increase the frequency of the idea in the information environment, causing people to defer to it.
Propagandists also manipulate the frequency bias by compiling lists of “experts” who contest the consensus view. For example, you’ll often see lists of scientists who question anthropogenic global warming, vaccine safety, GMOs, or evolution. The interest groups that compile these lists know that most people will see the word “scientist” and stop there.
However, if you look at the actual people on these lists, they come from a hodgepodge of different disciplines and have varying degrees of credibility. Although the lists occasionally contain a very small handful of relevant experts, most of the names are not experts in the domain at issue. We wouldn’t ask a mechanic for advice on our taxes, so why should we care what an engineer says about vaccines? The point of these lists is to create the illusion of expert disagreement by manipulating the frequency bias.
In our current information environment the situation is even more pernicious than it was at the time of the tobacco industry’s propaganda. Propagandists can mobilize bots to tweet and post links or comments, thereby increasing the frequency of their ideas. Our guard is down because the bots resemble real people (i.e., they manipulate trust). At least with the tobacco newspaper ads, people could be suspicious of the obvious vested interests. Even more pernicious is that people are now renting out their Facebook accounts. So propagandists could be manipulating the frequency and prestige biases through someone you trust or, at least, have no reason to distrust.
My assessment of science denialists has changed a lot since I started writing my dissertation on anti-science propaganda 5 years ago. I used to think they were stupid and culpable. My position has changed 180 degrees. I now believe these people are victims of sophisticated and well-funded manipulation campaigns that prey on social trust and our necessary reliance on others for knowledge.
There may also be degrees of culpability; it seems like some people really should know better. Maybe I’ll post more on the ethics of belief another time. That said, I’ve found that reframing (lay) science denialists as victims of manipulation allows me to be much more patient and sympathetic with them in conversation. My research projects going forward all revolve around figuring out how to talk to, and help, people who have been targeted by propaganda.
- Boyd, R., & Richerson, P. J. (1985). Culture and the Evolutionary Process. Chicago, IL: University of Chicago Press.
- Henrich, J., & McElreath, R. (2003). The evolution of cultural evolution. Evolutionary Anthropology, 12, 123–135.
- Mason, L. (2018). Uncivil Agreement: How Politics Became Our Identity. Chicago, IL: University of Chicago Press.
- O’Connor, C., & Weatherall, J. O. (2019). The Misinformation Age: How False Beliefs Spread. New Haven, CT: Yale University Press.