The perception of social media as a digital landscape dominated by hostility and misinformation is widespread among the American public. However, a set of three studies recently published in the journal PNAS Nexus suggests that this perception is significantly detached from reality. While the average American believes that nearly half of social media users spread false news or post toxic comments, empirical data reveals that such behavior is confined to a remarkably small but prolific minority. This gap between perception and reality, researchers argue, has profound implications for social trust and the perceived moral health of the nation.
The research, led by Angela Y. Lee and her colleagues at Stanford University, investigated the discrepancy between the perceived and actual prevalence of harmful online behaviors. The studies found that Americans estimate roughly 43% of Reddit users post severely toxic comments and 47% of Facebook users share false news. In stark contrast, platform-level data indicates that these behaviors are generated by only 3% to 8.5% of users. This "vocal minority" effect creates an illusion of widespread dysfunction that colors how citizens view one another in the physical world.
The Anatomy of Online Harm: Toxicity and Misinformation
To understand the scope of the study, it is necessary to define the two primary categories of harmful behavior examined: toxic comments and false news. Toxic comments are defined as those that are insulting, hateful, aggressive, or intended to discourage reasonable discussion. Such content often targets individuals or groups based on identity, creating a hostile environment that can lead to "chilling effects," where moderate voices withdraw from public discourse to avoid abuse.
False news, on the other hand, involves the dissemination of misleading or entirely fabricated information. The speed at which false news spreads is often attributed to its sensationalist nature, which triggers emotional responses like fear or anger. This emotional engagement encourages users to share content before verifying its accuracy. Together, these two behaviors—toxicity and misinformation—are blamed for damaging reputations, deepening political polarization, and fostering a sense of societal instability.
The Prolific Minority: Why Perception Fails
The researchers highlight a recurring pattern across various social media platforms: a tiny fraction of users is responsible for the vast majority of problematic content. This phenomenon is not unique to the platforms studied in the PNAS Nexus paper. For instance, previous research cited in the study noted that a mere 1% of conflict-seeking Reddit communities are responsible for 74% of all conflict-related content across the entire site. Similarly, on Twitter (now X), studies have shown that approximately 60% of hateful speech originates from a very small, highly active community of users.
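This concentration pattern is easy to reproduce with synthetic data. The sketch below is purely illustrative and is not the study's analysis: it draws per-user activity counts from a heavy-tailed Pareto distribution, a common model of online participation, and measures how much of the total output comes from the most active 1% of users.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Synthetic per-user counts of problematic posts, drawn from a heavy-tailed
# (Pareto) distribution. The shape parameter is an arbitrary illustration.
posts = rng.pareto(a=1.1, size=100_000)

posts_sorted = np.sort(posts)[::-1]               # most prolific users first
top_1_percent = posts_sorted[: len(posts_sorted) // 100]
share = top_1_percent.sum() / posts_sorted.sum()

print(f"Top 1% of users account for {share:.0%} of all problematic posts")
```

The exact figure varies with the seed and shape parameter; the point is that heavy-tailed participation alone produces the lopsided ratios reported above.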
This concentration of activity exploits what psychologists call the "availability heuristic." Because toxic and false content is often the most dramatic and highly visible, it is also the most easily remembered. When users encounter a high volume of such content, they naturally assume it represents a large portion of the user base rather than the tireless output of a few thousand individuals. The result is a skewed mental model of the "average" internet user.
Methodology: Quantifying the Misperception
The research team conducted three studies involving a total of 1,090 U.S. adults. To ensure the findings were representative of the broader population, participants were recruited via CloudResearch Connect and matched to national quotas for age, gender, race, and ethnicity.
In the first study, participants were asked to estimate the percentage of Reddit and Facebook users who engage in harmful behaviors after being briefed on the definitions of toxicity and false news. The results were consistent: participants drastically overestimated the number of "bad actors." The average estimate for toxic Reddit users was 43%, and for Facebook users sharing false news, it was 47%. In reality, the actual figures—derived from platform-wide data and previous academic audits—sit between 3% and 8.5%.
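To make the size of this gap concrete, the following back-of-the-envelope arithmetic (illustrative only, using the figures reported above) shows that participants overestimated the prevalence of these behaviors by roughly a factor of five to fifteen:

```python
# Perceived vs. actual prevalence figures reported in the article (percent).
perceived = {"toxic posting on Reddit": 43.0,
             "false-news sharing on Facebook": 47.0}
actual_low, actual_high = 3.0, 8.5  # platform-level estimates span 3% to 8.5%

for behavior, estimate in perceived.items():
    # Overestimation factor against both ends of the actual range.
    print(f"{behavior}: {estimate:.0f}% perceived vs. "
          f"{actual_low:.0f}-{actual_high:.1f}% actual -> "
          f"{estimate / actual_high:.1f}x to {estimate / actual_low:.1f}x overestimate")
```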
The second study moved from abstract estimates to practical identification. Participants were introduced to a Google-developed system (likely the Perspective API) used to detect toxic language. They were then shown 20 actual Reddit comments, half classified as severely toxic and half as neutral, and asked to predict how the automated system would label each one. This stage confirmed that users not only overestimate the number of toxic users but also struggle to calibrate what constitutes "average" behavior on these platforms.
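The article identifies the classifier only as "likely the Perspective API," so the following sketch is an assumption about the tooling rather than a description of the researchers' pipeline. It shows how a comment could be scored for severe toxicity through that API's public REST endpoint; the API key placeholder and the 0.5 decision cutoff are illustrative.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; real keys are issued via Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def severe_toxicity_score(comment: str) -> float:
    """Return the Perspective API's SEVERE_TOXICITY probability for a comment."""
    payload = {
        "comment": {"text": comment},
        "requestedAttributes": {"SEVERE_TOXICITY": {}},
        "languages": ["en"],
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    body = response.json()
    return body["attributeScores"]["SEVERE_TOXICITY"]["summaryScore"]["value"]

def classify(comment: str) -> str:
    """Binary label mirroring the study's task; the 0.5 cutoff is assumed."""
    return "severely toxic" if severe_toxicity_score(comment) >= 0.5 else "neutral"
```

Participants, in effect, had to predict the output of a function like `classify` for each of the 20 comments, and that is where their calibration broke down.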
The third study was experimental, designed to see if correcting these misperceptions could alter a person’s outlook on society. Participants were divided into two groups. The first group (the correction condition) read a text explaining that the vast majority of people never share toxic content or false news. The second group (the control condition) read a neutral text about the history of Reddit’s founding.
The Psychological Toll: Moral Decline and Cynicism
One of the most significant findings of the research was the link between the overestimation of harmful content and the perception of "moral decline." When people believe that half of their fellow citizens are behaving poorly online, they tend to view society as a whole as being in a state of ethical decay.
The experiment in Study 3 revealed that participants who received the "misperception correction" (the truth about the small number of toxic users) reported a significantly lower sense of societal moral decline compared to the control group. They felt more positive about their fellow Americans and were more likely to believe that most people, like themselves, do not desire or approve of harmful online content.
However, the study also found that certain attitudes were more resistant to change. Correcting the statistics did not significantly reduce general cynicism or increase generalized trust in human nature. This suggests that while specific perceptions of societal behavior can be updated with facts, deeper psychological traits like cynicism may be more entrenched and influenced by factors beyond social media statistics.
Chronology of the Research and Context
The publication of this paper comes at a time of heightened scrutiny for social media companies. Over the past decade, platforms like Facebook and Reddit have faced intense criticism from lawmakers and the public for their perceived roles in eroding democratic norms.
- 2016–2020: High-profile incidents involving foreign interference and the viral spread of misinformation during elections solidified the public perception that social media is a "wild west" of falsehoods.
- 2021–2023: Increasing academic focus on "super-sharers" and "trolls" began to reveal that a small number of accounts drive the majority of engagement.
- 2024: The PNAS Nexus study provides a crucial missing link by showing that the public has not yet caught up to these academic findings, remaining stuck in a perception that "everyone" is part of the problem.
Analysis of Implications for Platforms and Policy
The implications of this research are twofold. First, for social media platforms, the study suggests that their "visibility problem" is also a "perception problem." Even if platforms successfully ban or shadow-ban a large portion of toxic content, the psychological damage of previous exposure lingers. Platforms may need to do more than just moderate content; they may need to actively communicate the rarity of such behaviors to their users to repair the social fabric.
Second, for policymakers, the study highlights the danger of "affective polarization," the phenomenon where citizens do not just disagree with the opposing side but actively dislike and distrust them. If Americans believe that half of the "other side" is posting hate speech or lies, the possibility of civil discourse vanishes. Emphasizing that the vast majority of people are moderate and well-behaved could help lower the temperature of national politics.
Limitations and Future Directions
The authors of the study are careful to note its limitations. The research focused exclusively on U.S. participants and two specific platforms (Reddit and Facebook). Cultural differences in how toxicity is perceived or how information is shared mean that these findings might not be identical in other countries. Furthermore, the study focused on "severely toxic" content and "false news," leaving out other forms of problematic behavior like "rage-baiting" or subtle bias, which might be more prevalent.
Despite these limitations, the paper provides a vital correction to the prevailing narrative of digital dystopia. It suggests that while the internet has its share of "bad actors," they are a tiny, albeit loud, minority. The "silent majority" of social media users are not the ones shouting insults or spreading conspiracies; they are simply people observing a noisy room and mistakenly believing that everyone in it is screaming.
The study, titled "Americans overestimate how many social media users post harmful content," was authored by a team of experts including Angela Y. Lee, Eric Neumann, Jamil Zaki, and Jeffrey Hancock. Their work serves as a reminder that in the age of digital information, our perceptions of our neighbors are often filtered through algorithms that prioritize the extreme over the average, and the few over the many. Understanding this discrepancy may be the first step toward restoring a sense of common decency in the American public square.