Feeling angry makes people more likely to share news from low-credibility sources

The Psychological Landscape of the Digital Age

The rise of social media has fundamentally altered the mechanics of human communication, shifting the focus from the accuracy of information to the speed of its delivery and the intensity of its emotional resonance. Digital platforms are increasingly recognized not just as neutral conduits for data, but as environments optimized for engagement, often at the expense of veracity. Previous academic inquiries have frequently cited "moral outrage" as a significant factor in the spread of misinformation, yet this term has historically been treated as a monolithic emotional state. The research conducted by Peng’s team at the Emotion and Communication Neuroscience Lab seeks to dismantle this broad categorization, arguing that the specific nuances of an emotion dictate the subsequent behavioral response.

Moral outrage is a complex construct, typically composed of two distinct emotional threads: anger and disgust. While both are reactions to perceived moral violations, they trigger vastly different evolutionary and psychological responses. Anger is characterized by an "approach" orientation; it motivates individuals to confront, punish, or rectify a perceived wrong. Disgust, conversely, is an "avoidance" emotion, prompting individuals to distance themselves from a source of contamination or moral decay. The Shenzhen University study hypothesized that this distinction is critical to understanding social media behavior, as the act of "sharing" is inherently an approach-oriented behavior aimed at public condemnation.

Chronology and Methodology of the Research

The investigation was structured into three distinct experimental phases, each designed to peel back a layer of the cognitive process involved in news consumption and sharing. This multi-stage approach allowed the researchers to move from observing general correlations to modeling the specific cognitive parameters that govern how the brain accumulates evidence before a decision.

Phase One: The Conflict Between Credibility and Content
The first experiment involved 223 participants recruited from an online platform in China. The researchers presented them with 24 fabricated news headlines. These headlines were calibrated to vary along two dimensions: the severity of the moral transgression described (ranging from neutral to severe) and the assigned credibility of the source (ranging from 0% to 100%).

During this phase, participants were directed to focus their cognitive resources on specific attributes—either the accuracy of the news, the morality of the event, or a control condition with no specific focus. The results established a baseline for how "mental shortcuts" operate. While participants generally preferred sharing news from highly credible sources, a significant shift occurred when the content involved severe moral violations. When prompted to focus on the moral weight of a story, participants’ reliance on the source’s credibility label diminished significantly. This suggests that the emotional gravity of a "wrong" can effectively blind a user to the reliability of the person or outlet reporting it.

Phase Two: Differentiating Anger from Disgust
The second experiment, involving 116 university students, sought to isolate the specific driver of sharing. This phase focused on comparing the effects of moral anger against moral disgust. Participants were presented with 18 false headlines paired with either high or low credibility labels. Before deciding whether to share, students were prompted to rate their current levels of anger or disgust.

The findings were definitive: participants in the anger-prompted group were significantly more likely to share headlines from low-credibility sources than those in the disgust or control groups. Interestingly, moral disgust did not increase the willingness to share. This confirmed the researchers’ theory that because anger is an action-oriented emotion, it translates directly into the digital "action" of sharing, whereas disgust may actually suppress the desire to interact with the content further.

Phase Three: Mathematical Modeling of the Decision Threshold
The final phase of the study utilized 63 university students and employed Hierarchical Drift-Diffusion Modeling (HDDM). This mathematical framework allows scientists to look beyond the final decision and examine the underlying cognitive dynamics, such as the "decision threshold"—the amount of mental evidence a person requires before committing to an action.

To ensure a high-intensity emotional state, participants were asked to engage in a memory recall task, writing about a personal experience that triggered intense anger. They were then asked to evaluate 36 true and false headlines paired with varying credibility labels. The model revealed that anger did not necessarily impair a person’s ability to distinguish truth from falsehood. Instead, it drastically lowered their decision threshold. Under the influence of anger, participants required less time and less evidence to hit the "share" button. The emotion essentially created a state of cognitive impulsivity, where the barrier to action was significantly reduced.
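The drift-diffusion framework described above can be illustrated with a toy simulation. In a DDM, noisy evidence accumulates from a starting point toward one of two boundaries ("share" vs. "don't share"); the boundary separation is the decision threshold. The sketch below, with illustrative parameter values not taken from the study, shows the key result: holding the drift rate (information quality) fixed while lowering the boundary produces faster, less evidence-backed decisions.

```python
import random

def simulate_ddm(drift=0.15, boundary=1.0, noise=0.3, dt=0.01,
                 max_steps=10_000, rng=None):
    """Simulate one drift-diffusion trial: evidence starts at 0 and
    accumulates until it crosses +boundary ('share') or -boundary
    ('don't share'). Returns (decision time, chose_share)."""
    rng = rng or random.Random()
    evidence, steps = 0.0, 0
    while abs(evidence) < boundary and steps < max_steps:
        # Euler-Maruyama step: deterministic drift plus Gaussian noise
        evidence += drift * dt + noise * rng.gauss(0.0, dt ** 0.5)
        steps += 1
    return steps * dt, evidence > 0

def mean_decision_time(boundary, n_trials=2000, seed=1):
    """Average decision time across many simulated trials."""
    rng = random.Random(seed)
    return sum(simulate_ddm(boundary=boundary, rng=rng)[0]
               for _ in range(n_trials)) / n_trials

# Same drift rate in both conditions -- only the threshold changes.
calm_rt = mean_decision_time(boundary=1.0)   # full evidence requirement
angry_rt = mean_decision_time(boundary=0.5)  # lowered threshold under anger

print(f"calm: {calm_rt:.2f}s  angry: {angry_rt:.2f}s")
```

Running this shows the "angry" condition committing far sooner: the lowered boundary, not a change in how evidence accrues, drives the impulsive response, mirroring the study's finding that boundary separation, rather than drift rate, shifted under anger.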

Data Analysis and Quantitative Findings

The data gathered across these three experiments underscores a consistent pattern: moral anger serves as a catalyst for rapid, uncritical communication. In the third experiment specifically, the HDDM results showed that while the "drift rate" (the speed at which a person accumulates information) remained relatively stable, the "boundary separation" (the threshold for making a decision) was what shifted.

This is a crucial distinction for misinformation research. It suggests that people sharing false news while angry aren’t necessarily "fooled" more easily in a traditional sense; rather, they become less cautious. The urgency to express condemnation through sharing overrides the analytical need to verify the source. According to the study, this effect was uniform across both true and false headlines, but its impact is most damaging in the context of misinformation, where a higher threshold of skepticism is required to prevent the spread of falsehoods.

Institutional and Academic Reactions

The findings have resonated within the academic community, particularly among those studying the "Attention Economy." Scholars in the field of communication neuroscience have noted that these results align with the Elaboration Likelihood Model (ELM), which suggests that when individuals are under high emotional stress, they abandon "central route" processing (logical analysis) in favor of "peripheral route" processing (emotional cues and shortcuts).

Xiaozhe Peng emphasized that the study’s results highlight a fundamental misunderstanding of the misinformation problem. "Misinformation is not only a problem of false belief; it is also a problem of emotionally charged communication," Peng observed. He noted that the "action-oriented" nature of moral anger makes it a "particularly potent driver," explaining why misleading content that targets social or moral grievances often goes viral within minutes of being posted.

While the study was conducted within a Chinese cultural context, its implications are being considered by global tech policy experts. The researchers themselves acknowledged that while the controlled experimental setting allowed for precise measurement of cognitive mechanisms, real-world social media environments—which include social validation (likes), algorithmic reinforcement, and echo chambers—are even more complex and likely exacerbate the effects of moral anger.

Broader Implications for Social Media Policy

The Shenzhen University study provides a scientific basis for new types of interventions on social media platforms. Currently, most "anti-misinformation" efforts focus on fact-checking or labeling content as "false." However, Peng’s research suggests that by the time a user is angry enough to share, they are already ignoring credibility labels.

Potential implications for platform design include:

  1. Emotional Friction: Implementing "lightweight prompts" that detect high-arousal language and ask users to pause. If the system detects a user is interacting with content designed to trigger moral outrage, it could introduce a mandatory delay or a "cooling off" prompt before the share button becomes active.
  2. Redefining Engagement Algorithms: Current algorithms often prioritize content that generates high levels of "outrage" because it leads to more clicks and shares. This study provides evidence that such "engagement" is often impulsive and detrimental to the information ecosystem.
  3. Digital Literacy Education: Shifting the focus of digital literacy from "how to spot a fake" to "how to manage your emotions online." Teaching users to recognize the physical and psychological signs of moral anger could be more effective than teaching them to analyze source metadata.
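The "emotional friction" idea in item 1 can be sketched in a few lines. The word list and threshold below are hypothetical placeholders; a production system would rely on a trained arousal classifier rather than a hand-picked lexicon.

```python
# Illustrative lexicon only -- a real platform would use a trained
# high-arousal language classifier, not a hand-picked word list.
HIGH_AROUSAL_WORDS = {"outrage", "disgusting", "scandal", "betrayal", "furious"}

def arousal_score(text: str) -> float:
    """Fraction of tokens that match the high-arousal lexicon."""
    tokens = [w.strip(".,!?").lower() for w in text.split()]
    if not tokens:
        return 0.0
    return sum(t in HIGH_AROUSAL_WORDS for t in tokens) / len(tokens)

def share_with_friction(post_text: str, threshold: float = 0.1,
                        delay_s: float = 5.0) -> str:
    """Insert a cooling-off prompt instead of sharing immediately
    when the post's language looks high-arousal."""
    if arousal_score(post_text) >= threshold:
        return (f"Pause: this post may be designed to provoke. "
                f"Sharing unlocks in {delay_s:.0f}s.")
    return "Shared."

print(share_with_friction("Absolute outrage! This scandal is disgusting."))
print(share_with_friction("Here is a recipe for soup."))
```

The design choice here matches the study's logic: the intervention targets the moment of impulsivity rather than the content's truth value, giving the user's decision threshold time to recover before the action completes.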

Practical Guidance for the Public

In light of these findings, the researchers offer a simple piece of advice for everyday internet users. The findings suggest that a surge of anger is a signal that your cognitive defenses may be lowered.

"For everyday users, a practical takeaway is simple: if a post makes you instantly angry, that is exactly the moment to pause before liking, commenting, or sharing," Peng advised. This "strategic pause" allows the decision threshold to return to a normal level, giving the brain’s analytical centers time to catch up with the emotional centers.

As digital communication continues to evolve, understanding the intersection of emotion and technology remains paramount. The research by Haoyang Jiang, Hongbo Yu, Shenyuan Guo, and Xiaozhe Peng serves as a critical reminder that in the age of instant information, our oldest biological impulses—specifically moral anger—can be the very things that lead us to spread the most modern of deceptions. By identifying the mathematical and psychological mechanisms behind this phenomenon, the study opens the door to a more nuanced and effective approach to safeguarding the integrity of the global information commons.
