The Health Cost of Deepfakes in a Misinformation Age

Health misinformation did not begin with artificial intelligence. Long before deepfakes became widely accessible, false and misleading health claims were already shaping public opinion about vaccines, treatments, diseases, and prevention. Research has shown that health misinformation is widespread across social media and is associated with weaker public-health responses and reduced willingness to vaccinate (Borges do Nascimento et al., 2022; Suarez-Lledo & Alvarez-Galvez, 2021).

Deepfakes enter this already fragile environment and intensify the problem in a new way. They do not just spread falsehood. They make falsehood feel more real. A misleading post written in plain text may still raise suspicion. A synthetic clip that appears to show a doctor speaking calmly from a clinic can feel immediate, personal, and authoritative. That shift matters because people do not judge health information on facts alone. They also respond to tone, confidence, visual cues, and perceived expertise.

Why Deepfakes Hit Differently

A deepfake is an AI-generated or AI-manipulated video, image, or audio clip designed to appear authentic. In health communication, that is especially dangerous because trust often depends on signals that are easy to imitate: a familiar face, a reassuring voice, a white coat, a formal setting, or the performance of expertise.

Research suggests that these cues are powerful. In one experiment, disinformation messages that included deepfake video were rated as more vivid, more persuasive, more credible, and more shareable than similar messages without manipulated video (Hwang et al., 2021). Another study found that deepfakes often produce uncertainty even when they do not fully deceive viewers, and that uncertainty can reduce trust in news on social media (Vaccari & Chadwick, 2020).

That point is important. Deepfakes do not need to persuade everyone that a fake cure works or that a false warning is real. In health, it can be enough to make people hesitate. A person who is no longer sure which message to trust may delay care, ignore a recommendation, or disengage from official guidance altogether.

The Detection Problem

One reason deepfakes matter so much is that people are not especially good at spotting them. In a 2023 study published in Scientific Reports, Doss et al. found that roughly 27% to 50% of participants across several groups could not distinguish authentic videos from deepfakes in a science-information setting. That is not a small niche problem. It suggests a broad vulnerability in how people interpret visual evidence online.

This matters because many people now encounter health information first through phones, social feeds, short videos, and forwarded clips rather than through direct conversation with a clinician. When manipulated media moves faster than verification, public-health communication starts from a disadvantage. The burden shifts from simply explaining health information to first defending whether the source is even real.

When False Content Becomes a Health Cost

The health cost of deepfakes is not limited to one dramatic incident. It builds through ordinary decisions. False or manipulated health content can lead people to doubt treatment advice, delay testing, avoid vaccination, share misleading claims, or place confidence in unproven remedies.

Research on susceptibility to health misinformation shows that vulnerability is shaped by factors such as subject knowledge, literacy and numeracy, analytical thinking, and trust in science (Nan et al., 2022). That means susceptibility is not simply a matter of carelessness. It is often tied to unequal access to information, unequal confidence in institutions, and unequal ability to evaluate claims under stress.

This is why deepfakes should be understood as a layered public-health risk. At the individual level, they can distort judgment. At the community level, they can amplify rumor, fear, and confusion. At the institutional level, they can weaken the credibility of health systems that depend on public trust. The harm is not only that falsehood spreads, but that truthful guidance becomes harder to recognize and harder to accept.

Real Lives, Real Vulnerabilities

The danger becomes more concrete when we look at lived experience documented in research. In a qualitative study of COVID-19 vaccine decision-making among 60 Black women in Minneapolis–Saint Paul, Mohammed et al. (2023) found that women who delayed vaccination often described doubts about safety and exposure to targeted misinformation, including recurring myths about reproductive health. The study also found that historical trauma from unethical biomedical research and experiences of racism shaped how vaccine information was received.

This matters because misinformation does not land in an emotional vacuum. It interacts with memory, social experience, and preexisting mistrust. Deepfakes would not create those histories, but they could exploit them. Synthetic misinformation becomes more powerful in communities where trust is already strained and where people have reason to question whether institutions are speaking to them honestly.

Another useful example comes from a qualitative study in Sweden on contested claims about side effects of the copper IUD. Wemrell and Gunnarsson (2023) found tensions between evidence-based medicine and patient-centered care. Participants described frustration with unclear reporting procedures and not feeling fully heard, while clinicians were trying to balance scientific standards with respectful care. The researchers concluded that clearer communication and stronger person-centered practice could help reduce distrust and misinformation.

That lesson matters in the deepfake era. People often turn to unreliable digital spaces when they feel dismissed or unheard in formal systems. Deepfakes can spread most effectively where trust is already weak.

Trust Is the Real Battleground

This may be the most important point in the whole discussion. Deepfakes do not only threaten truth. They threaten trust.

Public health depends on trust in vaccination campaigns, treatment advice, emergency alerts, and research communication. If people begin to assume that any clip, statement, or warning might be manipulated, every future health message becomes harder to deliver. This is why the findings from Vaccari and Chadwick (2020) matter far beyond politics. Their work suggests that the deeper consequence of deepfakes may be rising uncertainty and cynicism, not just isolated deception.

For health systems, that means the challenge is larger than correcting one false video. It is about preventing a wider erosion of confidence that makes even accurate guidance harder to hear.

Young People and Social Media Judgment

Young people also deserve attention, though not because they are simply more gullible. Freeman et al. (2023), in a systematic review, found that adolescents’ trust in health information on social media is shaped by trust in the platform, the person posting, and the content itself. In other words, they are not only judging facts. They are also responding to social signals such as familiarity, tone, identity, popularity, and perceived expertise.

That makes deepfakes especially concerning in educational and health contexts. Synthetic media can imitate exactly the kinds of cues people already use to decide what feels trustworthy. In this sense, deepfakes do not just create fake content. They exploit the shortcuts people rely on when trying to make sense of overwhelming amounts of information online.

Media Literacy Helps, but It Must Build Discernment

The good news is that media-literacy interventions can help. A 2024 meta-analysis covering 33 studies and 36,256 participants found that media-literacy interventions significantly improved people’s ability to assess fake-news credibility, with gaming-based interventions showing particularly strong effects (Lu et al., 2024). Hwang et al. (2021) also found that media-literacy education reduced the harmful effects of disinformation messages that included deepfake video.

But there is an important caution. In a randomized controlled trial, Lyons et al. (2024) found that a health-focused media-literacy intervention increased skepticism toward both inaccurate and accurate cancer news among U.S. adults. In other words, it made people more doubtful, but not necessarily more discerning.

That is a valuable warning for NGOs, educators, and public-health communicators. The goal is not to teach people to distrust everything. The goal is to help them question better, verify better, and share more responsibly without undermining confidence in legitimate evidence. Good media literacy is not blanket suspicion. It is disciplined judgment.

What This Means for NGOs and Health Communicators

For organizations working in health, education, and public-interest communication, the message is clear. Deepfakes are not just a technology issue. They are a trust issue and a literacy issue.

The most effective response is likely to be preventive rather than purely reactive. That means strengthening public understanding before harmful content goes viral. It means building communication strategies that are clear, fast, culturally aware, and rooted in trusted relationships. It also means treating people’s doubts seriously rather than assuming that misinformation succeeds only because the public is uninformed.

Research repeatedly suggests that vulnerability to misinformation is shaped by broader social conditions. When trust is fragile, the informational environment becomes more dangerous. When people feel unheard, false certainty can become attractive. Deepfakes intensify those pressures because they mimic authority so convincingly.

The health cost of deepfakes is not only measured in false beliefs. It is measured in hesitation, confusion, delayed care, weakened trust, and the growing difficulty of knowing what deserves confidence.

The direct research on health-specific deepfakes is still developing, so exaggerated claims should be avoided. But the broader evidence is already strong enough to justify concern. Health misinformation causes real harm. Deepfakes make falsehood more vivid and harder to detect. And trust, once weakened, is difficult to rebuild.

In a misinformation age, public health depends not just on medicine and expertise, but on people’s ability to recognize credible information when they encounter it. That is why deepfakes matter. They do not only challenge what people believe. They challenge how people decide whom to trust.

Nii Lantey Bortey

CEO, CenRID

Nii Lantey Bortey is an international development professional whose work explores technology governance, ethics, and public policy in the digital age.