Deepfake videos degrade political reputations even when viewers realize they are fake

The rapid advancement of generative artificial intelligence has introduced a sophisticated new weapon into the arsenal of political disinformation: the deepfake. These synthetic media creations, which use neural networks to convincingly replace a person’s likeness or clone their voice, have long been theorized as a catastrophic threat to democratic integrity. Now, a comprehensive study published in the journal Communication Research provides empirical evidence that these manipulated videos achieve their goal of damaging political reputations, even when the audience is consciously aware that the footage is fraudulent. Led by Michael Hameleers of the University of Amsterdam, the research team discovered a persistent "reputational tax" levied against politicians targeted by deepfakes, a phenomenon that standard fact-checking and media literacy efforts appear unable to fully mitigate.

The study arrives at a critical juncture in global politics, as artificial intelligence tools become more accessible to malicious actors. Unlike traditional "shallowfakes"—which involve simple editing or mislabeling—deepfakes utilize deep learning algorithms to create seamless, realistic depictions of events that never occurred. This technology allows for the creation of "evidence" that bypasses the traditional skepticism of the human brain, which is biologically predisposed to trust visual information. The research conducted by Hameleers and his colleagues, Toni G. L. A. van der Meer, Marina Tulin, and Tom Dobber, sought to determine whether this reliance on visual proof could be weaponized to override rational disbelief.

Theoretical Framework: Processing Fluency and the Power of Sight

The psychological foundation of the study rests on the concept of "processing fluency." This theory suggests that the ease with which a person processes information influences their perception of its truthfulness. Because high-quality video is easy for the brain to digest and visually stimulating, it creates a sense of familiarity and acceptance. This "mental shortcut" can lead viewers to absorb the emotional impact of a video even if their analytical mind flags the content as suspicious.

Furthermore, the researchers explored the "illusion of truth" effect, where repeated exposure to a claim makes it seem more credible over time. In the context of political deepfakes, the visual nature of the medium provides a vividness that text-based lies lack. Even when a viewer identifies a video as a fake, the "visual memory" of a politician behaving in an outrageous manner remains lodged in the subconscious, creating a lasting negative association with the individual’s brand.

Methodology: A Comparative, Three-Wave Experiment

To ensure the findings were not limited to a single cultural or political context, the research team conducted their experiment across two vastly different political landscapes: the United States and the Netherlands. The United States was chosen for its highly polarized two-party system and a history of vulnerability to right-wing disinformation. Conversely, the Netherlands provided a "resilient" counterpoint, characterized by a multiparty system, a culture of political consensus, and higher levels of public trust in the media.

The study involved more than 3,000 adult participants across both nations. Unlike many previous studies that relied on a single point of data, this research employed a "three-wave" design to track the evolution of voter sentiment over time:

  1. Wave One: Participants completed a baseline survey to establish their existing political leanings and their opinions of specific political figures.
  2. Wave Two: Two days later, participants were exposed to either genuine political footage or a manipulated deepfake. This wave also introduced "defensive" interventions, such as media literacy warnings or post-video fact-checks.
  3. Wave Three: Three days after exposure, a final survey was conducted to measure the long-term decay or persistence of the deepfake’s impact.
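The logic of this three-wave design can be illustrated with a small, purely hypothetical analysis sketch. The data, variable names, and group labels below are invented for illustration and do not come from the study: the idea is simply to compare each participant's reputation rating at baseline, after exposure, and at follow-up, and contrast the average change in the deepfake group against a control group.

```python
from statistics import mean

# Hypothetical per-participant reputation ratings (0-100) at each wave.
# Wave 1 = baseline, Wave 2 = after viewing the video, Wave 3 = days later.
participants = [
    {"group": "deepfake", "w1": 62, "w2": 51, "w3": 60},
    {"group": "deepfake", "w1": 70, "w2": 58, "w3": 67},
    {"group": "control",  "w1": 65, "w2": 64, "w3": 66},
    {"group": "control",  "w1": 58, "w2": 59, "w3": 57},
]

def avg_change(group, later_wave):
    """Mean change in reputation from baseline (wave 1) to a later wave."""
    rows = [p for p in participants if p["group"] == group]
    return mean(p[later_wave] - p["w1"] for p in rows)

# Immediate effect: reputation drop in the deepfake group relative to controls.
immediate = avg_change("deepfake", "w2") - avg_change("control", "w2")

# Persistence check: in these toy numbers the gap has largely closed by wave 3,
# mirroring the study's observation that single-exposure effects decay.
followup = avg_change("deepfake", "w3") - avg_change("control", "w3")

print(immediate, followup)  # → -11.5 -2.5
```

The key design choice this mirrors is the within-person comparison over time: because each participant's baseline is measured first, any post-exposure decline can be separated from pre-existing dislike of the politician.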

In the American arm of the study, the deepfake featured Representative Nancy Pelosi. The video used artificial audio to make her appear to voice support for the January 6th Capitol rioters, suggesting that Americans needed to "fight" to reclaim their country. In the Dutch arm, the researchers targeted Sybrand Buma, a moderate politician from the Christian Democratic Appeal (CDA). The manipulated footage showed Buma delivering an uncharacteristically extremist, anti-immigrant speech. Both videos were designed to represent "radical right-wing" rhetoric that directly contradicted the established public personas of the targets.

The Reputation-Credibility Gap: A Key Finding

The most significant finding of the study was the emergence of a "reputation-credibility gap." In both the U.S. and the Netherlands, participants were generally successful at identifying the videos as suspicious. Because the statements were so out of character for Pelosi and Buma, viewers rated the credibility of the clips significantly lower than did the control group, who watched genuine footage.

However, this recognition of falsehood did not protect the politicians from harm. Despite the low credibility ratings, the reputations of both politicians suffered a measurable decline immediately following exposure to the deepfakes. This suggests that the emotional weight of seeing a leader say something offensive is "sticky," regardless of whether the viewer believes the event actually happened. The visual "gut reaction" appears to be decoupled from the intellectual "truth-check."

This effect was particularly pronounced among the politicians’ own supporters. When a voter saw a politician they liked voicing extreme or contradictory views, it triggered a sharp negative reaction. While participants who already disliked the politicians didn’t change their views much (as their opinions were already at a "floor"), the deepfakes successfully "de-legitimized" the candidates in the eyes of their base and moderate observers.

The Failure of Fact-Checking and Media Literacy

The study also tested the efficacy of the two primary weapons currently used to combat disinformation: media literacy education and fact-checking.

Some participants were given a "pre-bunking" warning—a set of instructions on how to spot digital manipulation—before watching the video. The researchers found that these media literacy warnings had almost no measurable impact on the outcome. They did not prevent the reputational damage or significantly increase the participants’ ability to detect the fake beyond what their natural skepticism already provided.

Fact-checking, administered immediately after the video, produced a paradoxical result. While the fact-checks successfully convinced participants that the video was indeed fake (further lowering the "credibility" score), they were completely ineffective at reversing the damage to the politician’s reputation. The negative sentiment generated by the visual experience remained intact even after the "facts" were corrected. This aligns with the "Sleeper Effect" in psychology, where the memory of a message persists long after the memory of its source (or its debunking) has faded.

Comparative Resilience: The US vs. The Netherlands

One of the more surprising outcomes was the consistency of the results across both countries. Despite the structural differences in their media and political environments, the psychological reaction to deepfakes was nearly identical.

Polarization in the United States did not make its citizens any more or less susceptible to the reputational damage than their Dutch counterparts. This suggests that the vulnerability to synthetic media is a human cognitive trait rather than a byproduct of a specific political culture. However, the study did note that the U.S. participants were slightly more affected by repeated exposure to the deepfake, which deepened the reputational harm, though it did not make the claims more believable.

Timeline of Decay: The Temporary Nature of Isolated Deception

A silver lining in the research was the observation that the effects of a single deepfake exposure are largely temporary in an experimental setting. By the third wave of the study—three days after the exposure—the negative feelings toward Pelosi and Buma had mostly returned to their baseline levels.

However, the researchers cautioned that this "recovery" might not occur in a real-world election cycle. In a live campaign, a voter would not be exposed to a single video in isolation; they would be bombarded with a constant stream of reinforcing misinformation, social media commentary, and partisan spin. The cumulative effect of multiple, different deepfakes over months could prevent the "natural recovery" observed in the week-long study.

Implications for Modern Democracy and Future Policy

The findings of Hameleers and his team suggest that the mere existence of deepfake technology creates a "lose-lose" scenario for political actors. If a politician is targeted, the damage is done the moment the video is viewed, and no amount of professional fact-checking can fully restore their standing.

Furthermore, the study hints at the "Liar’s Dividend"—a secondary effect where the prevalence of deepfakes allows politicians to dismiss genuine, incriminating footage as "fake news." As the public becomes more aware of AI’s capabilities, the baseline of trust in all visual media begins to erode, potentially leading to a "post-truth" environment where no evidence is considered definitive.

The research team emphasized that as AI generation tools continue to evolve, the "glitches" and unnatural audio used in their 2021 experiment will disappear. Future deepfakes will be even more "fluent" and harder to detect, likely increasing their psychological impact.

Conclusion and Recommendations

The study concludes that radical right-wing political deepfakes are an effective tool for de-legitimizing political actors, transcending national borders and political systems. The researchers recommend that future studies be conducted during active political campaigns to capture the "noise" of a real election environment.

For policymakers and tech platforms, the study serves as a warning that current "correction" strategies are insufficient. If the emotional damage of a deepfake cannot be undone by a fact-check, the focus may need to shift toward "authentication" technologies—such as digital watermarking and blockchain-verified media—that prevent the spread of synthetic media before it reaches a mass audience.

The study, "Radical Right-Wing Political Deepfakes Can Successfully Delegitimize Targeted Political Actors: Evidence From Three-wave Experiments in the US and The Netherlands," underscores a sobering reality: in the age of artificial intelligence, seeing is no longer believing, but seeing is still feeling—and in politics, feelings often dictate the vote.
