Prediction of self-efficacy in recognizing deepfakes based on personality traits

A landmark study published in the peer-reviewed journal F1000Research has uncovered a significant link between fundamental personality traits and an individual’s confidence in their ability to detect AI-generated deepfake videos. Led by Juneman Abraham, a professor and Vice Rector of Research and Technology Transfer at BINUS University in Indonesia, the research suggests that psychological makeup—specifically the traits of honesty and agreeableness—serves as a primary driver of "self-efficacy" in the face of increasingly sophisticated digital deception. As artificial intelligence continues to blur the lines between reality and fabrication, these findings provide a critical baseline for understanding human vulnerability and resilience in the digital age.

The study arrives at a pivotal moment in the evolution of synthetic media. Deepfake technology, which utilizes deep learning techniques such as generative adversarial networks (GANs), has progressed from a niche experimental tool to a widespread phenomenon capable of producing hyper-realistic audio and video. These systems analyze large datasets of a target individual’s likeness to generate content in which that person appears to say or do things that never occurred. While the technology has legitimate applications in entertainment and education, its misuse in spreading misinformation, committing financial fraud, and violating personal privacy has sparked a global debate on digital safety and the limits of human perception.

The Psychological Framework: Defining Self-Efficacy and the HEXACO Model

At the heart of the research is the concept of self-efficacy—a psychological term referring to an individual’s belief in their capacity to execute behaviors necessary to produce specific performance attainments. In the context of this study, self-efficacy represents how confident a person feels in their ability to distinguish a legitimate video from a deepfake. Past psychological literature suggests that self-efficacy is not merely a reflection of technical skill but is deeply rooted in an individual’s stable personality characteristics.

To map these traits, the researchers utilized the HEXACO model of personality. This framework is widely regarded in the scientific community for its comprehensive categorization of human temperament across six dimensions:

  1. Honesty-Humility: Sincerity, fairness, greed avoidance, and modesty.
  2. Emotionality: Fearfulness, anxiety, dependence, and sentimentality.
  3. Extraversion: Social self-esteem, social boldness, sociability, and liveliness.
  4. Agreeableness: Forgivingness, gentleness, flexibility, and patience.
  5. Conscientiousness: Organization, diligence, perfectionism, and prudence.
  6. Openness to Experience: Aesthetic appreciation, inquisitiveness, creativity, and unconventionality.
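
In practice, an instrument like the HEXACO inventory is scored by averaging Likert responses per dimension, with reverse-keyed items flipped first. The sketch below illustrates that mechanic; the item-to-dimension mapping and the reverse-keyed set are hypothetical placeholders, not the published HEXACO-60 scoring key.

```python
# Illustrative Likert scoring for a 60-item, six-dimension inventory.
# NOTE: the item assignment and reverse-keyed items below are assumptions
# for demonstration only, not the real HEXACO-60 key.

DIMENSIONS = ["Honesty-Humility", "Emotionality", "Extraversion",
              "Agreeableness", "Conscientiousness", "Openness"]

# Hypothetical key: item i (0-based) belongs to dimension i % 6.
ITEM_DIMENSION = {i: DIMENSIONS[i % 6] for i in range(60)}
REVERSE_KEYED = {1, 7, 13}  # hypothetical reverse-scored items

def score_inventory(responses):
    """Average 1-5 Likert responses per dimension, flipping reverse-keyed items."""
    assert len(responses) == 60
    buckets = {d: [] for d in DIMENSIONS}
    for i, r in enumerate(responses):
        r = 6 - r if i in REVERSE_KEYED else r  # reverse-key: 1<->5, 2<->4
        buckets[ITEM_DIMENSION[i]].append(r)
    return {d: sum(v) / len(v) for d, v in buckets.items()}
```

With this (made-up) key, a respondent who answers "4" to every item scores 4.0 on each dimension except the one containing the three reverse-keyed items, where the flipped responses pull the mean down.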

By applying this model to a sample of 200 young adults in Indonesia, the research team sought to identify which of these traits correlate with a high or low sense of security when navigating a digital landscape saturated with AI-generated content.

Methodology and Participant Profile

The study focused on a demographic of 200 participants, aged 18 to 25, with a mean age of approximately 22 years. The group comprised 139 women and 61 men. The selection of young adults was intentional; as "digital natives," this cohort spends a disproportionate amount of time on social media platforms where deepfakes are most likely to circulate. Their interactions with technology are frequent and varied, making them a primary frontline for both the consumption and potential detection of manipulated media.

Participants were administered a two-part assessment. First, they completed a standardized 60-item HEXACO Personality Inventory to establish their psychological profiles. Second, they responded to a custom-designed questionnaire measuring their self-efficacy in recognizing manipulated media. This second assessment required participants to rate their confidence in identifying specific technical "telltales" of deepfakes, such as:

  • Abnormal blinking patterns or eye movements.
  • Mismatched skin tones or unnatural lighting on the face.
  • Jerky or "glitchy" movements during transitions.
  • Inconsistencies between facial expressions and the emotional tone of the speech.
  • Blurred edges around the mouth or hairline.
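
The first of these cues, abnormal blinking, is also the basis of a classic automated heuristic: the eye aspect ratio (EAR) of Soukupová and Čech, which drops toward zero when the eye closes. The sketch below assumes six eye landmarks per frame are already available from some landmark detector; it is a minimal illustration of the metric, not the study's methodology.

```python
import numpy as np

def eye_aspect_ratio(p):
    """EAR from six eye landmarks p[0..5] (outer corner, two upper-lid
    points, inner corner, two lower-lid points). Near ~0.3 for an open
    eye, near 0 for a closed one."""
    p = np.asarray(p, dtype=float)
    vertical = np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])
    horizontal = np.linalg.norm(p[0] - p[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2):
    """Count blink events: runs of consecutive frames with EAR below threshold."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= threshold:
            closed = False
    return blinks
```

An implausibly low blink count over a long clip (humans blink roughly 15-20 times per minute) was one of the early giveaways of generated faces, though modern deepfakes have largely closed this gap.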

Key Findings: The Paradox of Honesty and Agreeableness

The statistical analysis yielded surprising results, revealing that only two of the six HEXACO traits were significant predictors of deepfake detection self-efficacy. Notably, these two traits—Honesty-Humility and Agreeableness—showed diametrically opposite relationships with participant confidence.
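
The shape of such an analysis can be illustrated with an ordinary least-squares regression of self-efficacy on the six trait scores. The sketch below uses synthetic data whose assumed effect sizes merely mimic the reported pattern (a negative weight on Honesty-Humility, a positive weight on Agreeableness, null weights elsewhere); it is not the study's dataset or its actual statistical procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # matches the study's sample size

# Synthetic, standardized trait scores for six HEXACO dimensions.
X = rng.standard_normal((n, 6))

# Assumed "true" effects mirroring the reported pattern (illustrative only).
beta_true = np.array([-0.5, 0.0, 0.0, 0.5, 0.0, 0.0])
y = X @ beta_true + 0.3 * rng.standard_normal(n)

# OLS with an intercept column, fit via least squares.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, betas = coef[0], coef[1:]

for name, b in zip(["Honesty-Humility", "Emotionality", "Extraversion",
                    "Agreeableness", "Conscientiousness", "Openness"], betas):
    print(f"{name}: {b:+.2f}")
```

Run on this synthetic sample, the fit recovers a clearly negative coefficient for Honesty-Humility and a clearly positive one for Agreeableness, with the remaining four estimates hovering near zero, which is the pattern the study reports for confidence (not accuracy) in detecting deepfakes.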

The Vulnerability of the Honest

Individuals who scored high in Honesty-Humility reported significantly lower confidence in their ability to detect deepfakes. This trait characterizes individuals who are generally pro-social, avoid manipulating others for personal gain, and adhere to ethical standards. The researchers hypothesize that because these individuals value integrity and find deception personally distasteful, they may perceive the highly manipulative nature of deepfake technology as an overwhelming and alien threat. Their inherent "honesty" may lead to a heightened awareness of the difficulty of the task, resulting in a more cautious or even skeptical view of their own detection abilities.

The Confidence of the Agreeable

Conversely, high levels of Agreeableness were associated with greater confidence in spotting AI manipulations. In the HEXACO model, Agreeableness involves a tendency toward cooperation, trust, and conflict resolution. The research team suggests that agreeable individuals may place higher trust in collective intelligence and the availability of communal tools—such as fact-checking websites or community-driven reporting systems. This "cooperative mindset" likely boosts their self-efficacy, as they view themselves as part of a larger, resilient network capable of identifying falsehoods.

The Irrelevance of Gender and Conventional "Success" Traits

One of the more striking aspects of the data was the lack of correlation between self-efficacy and the other four traits: Emotionality, Extraversion, Conscientiousness, and Openness to Experience. Traditionally, traits like Conscientiousness (associated with diligence) and Openness (associated with intelligence and curiosity) are predictors of success in various fields. However, in the realm of deepfake detection, these traits provided no significant advantage in how confident participants felt. Furthermore, the study found no statistically significant difference between men's and women's self-efficacy scores, suggesting that gender does not meaningfully shape perceived digital media resilience.

Technical Context: The Evolution of Deepfake Detection

To understand the implications of this study, it is necessary to examine the current state of deepfake technology. Since the term was coined in 2017, the complexity of deepfakes has increased exponentially. Early deepfakes were often identifiable by "low-level" artifacts—such as a lack of blinking or blurred backgrounds. However, modern iterations utilize "high-level" semantic consistency, making it nearly impossible for the human eye to detect them without the aid of forensic software.

According to data from Sensity AI, the number of deepfake videos online has been doubling every six months. While 90% of deepfakes were initially related to non-consensual adult content, there has been a massive surge in "shallowfakes" and deepfakes used for political disinformation and corporate fraud. In this high-stakes environment, the gap between a person’s perceived ability to spot a fake and their actual ability is a major concern for cybersecurity experts.

Analysis of Implications: The Dunning-Kruger Risk

A critical caveat highlighted by Professor Abraham and his colleagues is that the study measured subjective confidence, not objective accuracy. This distinction is vital because of the Dunning-Kruger effect—a cognitive bias where people with limited competence in a particular domain overestimate their abilities.

"There is a profound danger in a false sense of security provided by technology or personality," Abraham noted. If an "agreeable" person feels highly confident but lacks the technical training to actually identify a sophisticated GAN-generated video, they may be more likely to share misinformation, believing they have already "vetted" it. This suggests that the very traits that foster social cohesion—trust and cooperation—are being weaponized by AI systems designed to exploit human psychology.

Regional Importance and the Global South

The study’s focus on Indonesia provides a rare and necessary perspective from the Global South. Much of the existing research on AI and psychology is conducted in Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. However, Abraham argues that non-Western contexts, where communal trust is a "vital social currency," face unique challenges.

In many Southeast Asian nations, social media platforms like WhatsApp and Facebook serve as primary news sources. In these environments, the "wisdom of the crowd" is a powerful force. If personality traits like Agreeableness lead to overconfidence in these regions, the systemic impact of a single viral deepfake could be devastating to social stability. Abraham advocates for "Digital Psychoethics," a framework that seeks to protect human agency against "algorithmic deception" by acknowledging these cultural and psychological nuances.

Chronology of the Research and Future Directions

The publication of this study in 2023 follows a multi-year effort by the research team to investigate the intersection of human integrity and technology.

  • 2020-2021: Initial conceptualization of "Psychoinformatics" and the study of digital corruption.
  • 2022: Data collection among Indonesian young adults and the development of custom self-efficacy metrics.
  • 2023: Publication in F1000Research, followed by peer review and subsequent updates to the findings.
  • Future Goals: The researchers intend to move from measuring confidence to measuring competence. This will involve "ground-truth" testing, where participants are shown actual deepfakes and their detection rates are compared against their HEXACO profiles.

Conclusion: Digital Literacy as Social Resilience

The findings of this study suggest that the battle against deepfakes cannot be won through technical software alone. Because our underlying personality traits dictate how vulnerable or confident we feel, digital literacy programs must be tailored to address these psychological realities.

Professor Abraham’s research highlights a "systemic asymmetry" between the creators of AI and the general public. While an individual’s honesty or willingness to cooperate are virtues in a physical community, they can become vulnerabilities in a digital "reality tunnel" where truth is manipulated. Moving forward, the goal of media literacy should not just be to teach people how to spot a "glitchy" video, but to build a form of "social resilience" that accounts for human psychological tendencies. As AI continues to hijack the narrative of reality, understanding the "human factor" remains the most critical component of our digital defense.
