Selective Sharing

Don’t miss the forest for the trees…

What is It?

Selective sharing involves sharing and promoting independent research to frame an issue in a way that is congenial to the propagandist. Drawing on independent research avoids people’s natural suspicions of interest-group funded or produced research (see: biased production). It also makes it harder to detect as propaganda.

For a full set of educational resources and case studies to learn about anti-science propaganda go to my growing website:

How Does Selective Sharing Work?

Selective sharing does two things:

  1. It creates the appearance of disagreement/uncertainty in the scientific literature.
  2. It frames information in a way that distorts its relevance or meaning.

Selective sharing is particularly effective for complex scientific and policy issues. Where there is an expert consensus on a complex issue, there will usually be a deep and broad literature exploring that issue in a variety of ways. And in just about every body of scientific literature there are inconclusive studies, ranges of effect sizes, and ranges of predictions from different models. This means there will always be some studies whose findings run contrary to the general trend in the literature.

If I tell you that 50 studies found no relationship, or only an inconclusive one, between X and Y, you'd reasonably take that as strong evidence that there's no relationship. But now suppose I reveal that there are also 2,500 studies demonstrating a strong relationship between X and Y, and that many of the positive studies are of better quality than the original 50. Now what should you believe?
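As a rough sketch of the reasoning, we can treat each study as a weak, independent piece of evidence and track how the odds shift. The Bayes factors below (1.1 per positive study, 0.9 per null study) are arbitrary illustrative values, not figures from any real literature:

```python
import math

# Hedged sketch: each study nudges the odds of "X is related to Y".
# Assumed (hypothetical) Bayes factors: a positive study multiplies the
# odds by 1.1; a null/inconclusive study multiplies them by 0.9.
def posterior_odds(prior_odds, positive, null, bf_pos=1.1, bf_null=0.9):
    log_odds = math.log(prior_odds)
    log_odds += positive * math.log(bf_pos) + null * math.log(bf_null)
    return math.exp(log_odds)

# Seeing only the 50 null studies pushes the odds well below 1...
only_nulls = posterior_odds(1.0, positive=0, null=50)

# ...but the full literature (2,500 positive plus 50 null) swamps them.
full_picture = posterior_odds(1.0, positive=2500, null=50)
```

The selective sharer shows you only the first calculation; the total evidence gives you the second.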

Propagandists seize on these outlier independent studies and push them to the media through their PR institutions, natural allies, and online channels. Essentially, they are boosting the prevalence of the outlier studies in the public information environment (see: Frequency bias). This in turn gives the public the impression that an outlier study is representative of the scientific literature on the issue. The public (or subgroups of the public) frequently hears about the outlier studies, but those studies are rarely presented within the context of the full body of literature.
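A toy simulation can make the distortion concrete. All the numbers here (2,500 positive studies, 50 null outliers, a 100x promotional boost for each outlier) are illustrative assumptions, not real figures:

```python
import random

random.seed(0)

# Hypothetical literature: 2,500 studies find a strong X-Y relationship;
# 50 are null/inconclusive outliers (about 2% of the total).
literature = ["positive"] * 2500 + ["null"] * 50

# Unbiased exposure: a reader samples studies in proportion to the literature.
unbiased = [random.choice(literature) for _ in range(1000)]

# Selective sharing: the propagandist promotes every null study, so each
# outlier appears (hypothetically) 100 times as often in the public feed.
# Nothing shared is false; only the sampling is skewed.
boosted = ["positive"] * 2500 + ["null"] * (50 * 100)
feed = [random.choice(boosted) for _ in range(1000)]

frac_null_unbiased = unbiased.count("null") / 1000  # tracks the ~2% base rate
frac_null_feed = feed.count("null") / 1000          # now a majority of the feed
```

The literature hasn't changed at all; only the reader's sample of it has.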

Selective sharing takes other forms too, though the basic strategy is the same: misleading framing of information. Sometimes a subsection of a study will be cherry-picked while the broader conclusion is ignored. One of the most common methods, however, is to present other possible causes of a phenomenon alongside the one established in the literature.

Let’s take a historical example. By the early 1950s it was well established that tobacco caused lung cancer. By the mid-1970s even (unpublished) internal tobacco industry research acknowledged this relationship. Nevertheless, the tobacco industry worked hard to create the illusion of scientific uncertainty and disagreement. A major part of this strategy involved selective sharing. This is how they did it:

Anytime an independent study found other causes of lung cancer (for example asbestos, various industrial chemicals, or pollution), the Tobacco Institute (the PR arm of the tobacco industry) would spend a lot of money publicizing the study and promoting it to the media. The purpose was to create doubt and confusion about the scientific consensus: if many other things cause lung cancer, then how can we be sure that any given case of lung cancer was caused by tobacco and not these other variables? (Quite easily, that is, unless you ignored the vast literature and the consensus of every major scientific body at the time.)

We can see this strategy all over the place once we recognize it. Here are some common instances:

  1. Anytime there are wildfires, you will see a massive increase in online articles pointing to all the other causes and contributing factors of wildfires besides climate change. What they don’t share are the articles arguing that climate change creates the conditions for larger and more frequent fires, regardless of how they’re started.
  2. Climate change deniers will also share and publish articles on the other variables that can affect climate: “It’s the sun cycles!” “The climate has always changed!” Yet they never share the tens of thousands of papers finding that human-caused CO2 emissions are the primary driver of current climate change.
  3. Opponents of vaccines fill our newsfeeds anytime there is even the weakest correlation between a child getting sick and having been vaccinated. Yet we don’t see them posting articles showing that the vast majority of people experience no adverse effects, or articles documenting the virtual elimination of many previously deadly diseases.
  4. The fossil fuel industry (and its allies) will often share memes or studies showing how many birds are killed by wind turbines. The implied message is that supporters of wind energy are hypocrites: they purport to care about nature, yet look at all the bird deaths caused by wind energy. Somehow, these same people never share the studies showing how many birds die from fossil fuel pollution.

Again, what makes selective sharing such a successful strategy is that everything the propagandist shares usually comes from a credible independent source and, most importantly, nothing selectively shared is false. Selective sharing misleads because it frames the issue in a way that distorts what’s really going on. (See the last section of this page for a more detailed explanation of how the framing misleads.)

Modeling Selective Sharing in an Information Ecosystem

In The Misinformation Age, O’Connor and Weatherall (2019) used a Bayesian network model (the Bala–Goyal model, in which agents update their beliefs/credence levels as they encounter new evidence) to study how beliefs spread from a scientific community to policymakers and the public. They also modeled what occurs if propagandists infiltrate the information ecosystem. They found that in a wide variety of cases, a propagandist using selective sharing alone (without biased production) can cause naive policymakers to converge on the false belief despite there being a scientific consensus to the contrary (pp. 112–113). The effect is even stronger when biased production and selective sharing are combined.

So far, this model of the epistemic community includes policymakers who have some background knowledge of the science. When the models are extended to members of the public, who have little working background knowledge and little to no direct contact with the scientific community, outcomes are even worse: convergence on the false belief is more likely. Selective sharing is particularly effective at manipulating the public because the shared research comes from independent sources, giving it the veneer of legitimacy. The public’s guard is down.
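The core dynamic can be sketched in a few lines of simulation. The parameters below (a true success rate of 0.6 versus a 0.5 baseline, ten trials per experiment, and the at-or-below-chance cutoff the propagandist uses as a filter) are illustrative assumptions, not the actual parameters O’Connor and Weatherall use:

```python
import random

random.seed(1)

P_TRUE = 0.6   # the new theory really is better than the 0.5 baseline
N_TRIALS = 10  # trials per experiment

# Scientists repeatedly run honest experiments. The propagandist forwards
# to the policymaker ONLY those whose results look unfavorable (selective
# sharing: no fraudulent data, just a biased filter on real studies).
policymaker_successes = policymaker_trials = 0
all_successes = all_trials = 0

for _ in range(2000):
    successes = sum(random.random() < P_TRUE for _ in range(N_TRIALS))
    all_successes += successes
    all_trials += N_TRIALS
    if successes / N_TRIALS <= 0.5:  # outlier: looks no better than chance
        policymaker_successes += successes
        policymaker_trials += N_TRIALS

science_view = all_successes / all_trials                  # close to 0.6
policy_view = policymaker_successes / policymaker_trials   # below 0.5
```

Every study the policymaker sees is genuine, yet the filtered stream converges on the false belief that the theory is no better than chance, while the scientific community's pooled data converges on the truth.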

How to Avoid Falling Victim to Selective Sharing

To avoid falling victim to the distorting effects of selective sharing, we need strategies that properly contextualize the information being shared. This is the main problem for the non-expert. If you aren’t an expert then almost by definition you don’t have a deep working knowledge of the relevant body of scientific literature that would allow you to properly frame the shared studies.

Luckily, there are a few shortcuts you can take:

  1. Find out whether there is a strong, medium, or weak consensus among relevant experts. If there’s a strong or moderately strong consensus, the odds favor the expert view.
  2. Find out whether there’s a trend in the literature. Look at meta-analyses and systematic reviews rather than individual studies.

Critical Thinking Bonus

Selective sharing is a subspecies of slanting by omission: important evidence is left out which would allow someone to properly evaluate the matter. To avoid falling prey to slanting by omission (and hence selective sharing) we must employ the total evidence requirement. The total evidence requirement means answering, “What are all the variables we would need to know about to make a reasonable judgment on this issue?” This is tricky and takes training. It’s especially hard if you aren’t already an expert on the issue, since you won’t always know what questions to ask. But this doesn’t mean we can’t make some progress.

To illustrate how to use the total evidence requirement, let’s use the common example of selective sharing used by the fossil fuel industry against wind energy. The selective sharing strategy makes the following inference:

Windmills kill birds, therefore we should not use windmills.

The first thing we need to do is extract the implied premise:

If a power source kills birds, then we should not use it.

Is this true? Is this the only variable that matters in selecting a power source? Let’s turn our brains off for a second and suppose it is: the only variable that matters for selecting a power source is whether it kills birds.

The total evidence requirement demands that we also learn how many birds per GWh (gigawatt-hour) our other energy alternatives kill. Knowing about a single source isn’t enough; we need to make comparisons. It turns out that wind power kills 5 times fewer birds than fossil fuel power stations do. So, by the fossil fuel industry’s own lights, wind power is better than fossil fuel power.
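The method, not the numbers, is the point here. A minimal sketch, using hypothetical per-GWh figures chosen only to match the 5-to-1 ratio cited above:

```python
# Hedged sketch with HYPOTHETICAL illustrative numbers; real estimates vary
# by study. The method is what matters: compare deaths per unit of energy
# across ALL alternatives, never a raw count for a single source.
bird_deaths_per_gwh = {
    "wind": 0.3,
    "fossil fuel": 1.5,  # 5x wind, matching the ratio cited in the text
}

def preferable_by_bird_deaths(sources):
    # Lower deaths per GWh wins on this single (and insufficient!) metric.
    return min(sources, key=sources.get)

best = preferable_by_bird_deaths(bird_deaths_per_gwh)  # "wind"
```

Even granting the propagandist's own premise, the comparison, once normalized per GWh, points the other way.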

Of course, bird deaths per GWh aren’t the only thing that matters. To meet the total evidence requirement we’d need to make a list of all the relevant metrics we’d need to compare in order to decide which source of power is best: cost per GWh, CO2 emissions per GWh, particulate emissions per GWh, environmental costs per GWh, health costs per GWh, and so on.

There’s a lot we need to know before we can make any reasonable judgment about which power source is preferable. In other words, merely knowing that windmills kill birds doesn’t remotely come close to meeting the total evidence requirement. But without applying the total evidence requirement, it’s very easy to be misled.

For more on slanting by omission, the total evidence requirement, and some practice exercises, go to this module in my free online critical thinking course.



6 thoughts on “Selective Sharing”

  1. Philosophami, another excellent essay on science and propaganda to go along with your other pieces about anti-science propaganda and the work of Thomas Kuhn. I’ve been studying science and science related philosophy and sociology since high school and have been a science junkie since grade school. I’ve also been a non-fiction science reviewer for the American Library Association’s Booklist magazine for over twenty years. Currently I’m working on a book tentatively titled: “The Dog That Ate My Science Project: How Dogma, Groupthink and Special Interests Are Corrupting the Search for Scientific Truth.” One of the main lines of argument I pursue in the book is how many so-called “consensus science” paradigms, particularly in medicine, might actually be considered examples of false consensus propped up by various industries, the media, and the psychological influence of groupthink and confirmation bias. The six examples of false or poorly supported consensus I’m going to use in the book are community water fluoridation, the HIV = AIDS hypothesis, the safety and effectiveness of vaccinations, the viability of GMOs, conventional cancer treatments such as chemotherapy and radiation therapies, and the Big Bang Theory.

    My perspective on all of these paradigms is informed by extensive reading in an academic discipline called “science studies.” I’m guessing you’re familiar with this area of study since your essays have a lot to say about the sociology and psychological contexts of various scientific theories and practices, particularly climate change and vaccination.

    By way of constructive criticism, one thing I want to point out regarding your essays is that you appear to be missing some greater historical context about how consensus science has been repeatedly overturned over the decades and centuries as science has evolved. This is particularly true in medicine, where things like the role of the H. pylori bacterium as a causative factor in ulcers were dismissed as nonsense for decades before finally being accepted by the mainstream. Other examples include the hygienic roots of puerperal fever and the dangers of leaded gasoline–all dismissed by the “consensus” (and vociferously denied by industry in the case of leaded gasoline for decades) until a critical mass of supporters turned the tide. Here’s an article that shows just a few examples of previously ridiculed ideas in medicine:

    In all of the above examples, groupthink, confirmation bias, and cognitive dissonance–as well as dogmatic thinking–played roles in dismissing and ridiculing the heretics who disagreed with the consensus viewpoint. Of course, this is not to say that most dissenters or heretics from mainstream scientific paradigms will eventually be proved right, since the vast majority fall by the wayside and are proven incorrect. However, there are at least two fringes of science: the lunatic fringe and the frontier fringe. With any given new scientific proposal it’s not always clear in the beginning if it has any merit or not. This is why it’s somewhat wrongheaded and insulting to use ad hominem attacks to label dissenters from the consensus as “denialists” or “anti-science.” You are indeed correct, in my humble opinion, that many so-called climate change “denialists” aren’t really well versed on the science of climate change and do resort to selective sharing of evidence in various social media outlets. But in the case of vaccine science that’s not always the case. There are hundreds, if not thousands, of well respected scientists, physicians and medical researchers that question the safety and effectiveness of vaccines based upon well considered scientific evidence (such as the fact that giving children and adolescents up to 80 vaccines containing a wide assortment of toxic adjuvants by the time they are 18 has never been proven safe, to give just one of many scientifically grounded examples). Dismissing these people as “anti-vaxxers,” simply because they have concerns (and most of them don’t even consider themselves “anti-vaccine” at all) is insulting, disrespectful and doesn’t represent good scientific practice. It’s a form of shaming to enforce adherence to the current consensus and–I’m sure this will sound like “conspiracy theorist language” (if so, so be it)–to protect the profits and reputation of the pharmaceutical companies that produce vaccines.
As I’m sure you’re aware, the pharmaceutical industry pays millions of dollars in advertising to the corporate owned mainstream media every year to sell their products. Hence, this isn’t “conspiracy theory” logic, but simply business as usual.

    All this aside, I think your articles are mostly excellent and well reasoned, but you’re missing some critical information regarding the broader view of science, particularly the history of enforced consensus. A few good books to help you correct this deficiency include the works of Henry H. Bauer, a professor emeritus of science studies, particularly “Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth.” My own book will include a lot of information from this one, but will have a less academic, more readable approach to appeal to lay audiences as well as working scientists.

    Feel free to respond to or rebut anything I’ve said. Good luck in your further research and essay writing.


    1. Hi Carl,
      Thank you for taking the time to read my blog. I appreciate your feedback and your reading recommendations. I am familiar with a lot of the literature on the problems with science and especially how industry can corrupt science. For this reason I’m especially sympathetic to anti-vaxxers (I realize you don’t like that term–sorry!) and anti-GMO proponents. There’s a long and horrific history of industry manipulating science for profit and to the detriment of the public (perhaps the best book I know on this topic is Merchants of Doubt). I don’t blame them for being suspicious but at this point the independent evidence for the safety of vaccines and GMOs is overwhelming by any reasonable standard. I know you disagree and that’s fine. I’m not here to argue with anyone–I don’t have the time or energy. Rather, my goal is to teach people about the various tactics and rhetorical patterns that are used by anti-science campaigns. Anyhow, I wish you the best of luck with your book project. You’re brave to take on such a massive topic.


      1. Philosophami, thanks for your very considerate and respectful reply. Believe me, I understand your time constraints regarding your work and unwillingness to engage in fruitless debate. I’m glad you are sympathetic to the anti-GMO and “anti-vaxxer” positions; however, I prefer the terms “vaccine skeptic” or “vaccine dissident” since the vast majority of literature I’ve read by people in this camp is both pro-science and pro vaccine safety and the term “anti-vaxxer” is simply a propaganda tool used by the mainstream media and Big Pharma to shame vaccine skeptics into silence. Since you’ve obviously done a lot of research in science, I trust you are clear that having concerns about the safety of any form of drug or medicines like vaccines is not “anti-science,” especially if it involves a close critique of the evidence. For the record, I respectfully disagree with your assertion that “independent evidence for the safety of vaccines and GMOs is overwhelming by any reasonable standard,” and can’t help concluding you simply haven’t looked as closely at the evidence as I have. During my decades of study in various scientific disciplines I’ve learned that one of the hardest things to come to terms with is uncertainty and the willingness to remain doubtful and skeptical. Remember there is no such thing as “proof” in science; only more or less compelling forms of evidence. All truth in science is provisional, as I’m sure you’ve learned from your studies of Thomas Kuhn, among others. Anyway, good luck with your research going forward. Happy trails.


  2. That was well written, Ami! Very interesting. It makes me think about what I share via Social Media and on My Blog. Also. Lung Cancer is not just caused by Smoking Cigarettes. There are other variables that have to exist, in order for a Person to be diagnosed with Lung Cancer due to Smoking: 1. General Health 2. Diet 3. Activity 4. Existing Chronic Illnesses 5. Hereditary or not. Anyway. That was what I was told. It goes for all other Cancers, too! Not just Smoking. Though. It would be better to not start Smoking or to cut back as much as possible. There is a level in which One could become Addicted. ++

