EduPolicy.ai — Students Edition

Module 7

Misinformation & Deepfakes



You can no longer trust what you see, hear, or read. AI can generate a video of any person saying anything. It can write a news article indistinguishable from human journalism. It can clone a voice from a five-second sample. This module examines what happens to truth when fabrication becomes free, instant, and perfect.

Module 7 of 8 — AI Ethics for Higher Education
Part 1

Disinformation at Machine Speed

Propaganda isn't new. Governments and political movements have manipulated information for centuries. What AI changes is the economics: creating convincing false content used to require skill, time, and resources. Now it requires a prompt.

TRADITIONAL DISINFORMATION

State propaganda bureau writes articles manually. One message broadcast to millions. Same content for everyone. Slow to produce. Detectable patterns in writing style. Requires human infrastructure — writers, translators, distributors.

AI-POWERED DISINFORMATION

AI generates thousands of unique articles per hour. Each message personalized to individual psychological profiles. Different content for every target. Instant production. No consistent writing style to detect. One person with a laptop replaces an entire propaganda department.

The Three Multipliers
📈
Scale
One operator generates more content than a newsroom of 100 journalists
🎯
Personalization
Each message crafted for one person's fears, beliefs, and biases
⚡
Speed
False narratives spread before fact-checkers finish their first paragraph
The Asymmetry

A lie takes seconds to generate. Debunking it takes hours of human research. AI broke the economics of truth: fabrication is now infinitely cheaper and faster than verification. This asymmetry is the core threat — not any single piece of misinformation, but the collapse of the entire correction mechanism.

Part 2

Deepfakes: Seeing Is No Longer Believing

A deepfake is AI-generated synthetic media — video, audio, or images — that convincingly depicts events that never happened. The technology has evolved from obvious forgeries to near-perfect simulations in under five years.

🎬
Video Deepfakes

AI maps one person's face onto another's body in video, matching lighting, skin texture, and micro-expressions. In 2022, a deepfake video of Ukrainian President Zelensky telling soldiers to surrender circulated during the Russian invasion. In 2024, a deepfake CFO on a video call convinced a finance worker to transfer $25 million. The technology requires only a few minutes of reference video to create a convincing fake.

🎙️
Voice Cloning

Modern voice cloning needs only 3-5 seconds of sample audio. The AI replicates tone, cadence, accent, and emotional inflection. Scammers have used cloned voices to impersonate family members in "emergency" calls: "Mom, I'm in trouble, I need you to wire money." The FTC reported a surge in AI voice scam complaints starting in 2023. A cloned voice of a CEO was used to authorize a $243,000 fraudulent wire transfer in 2019 — before the technology even matured.

🖼️
AI-Generated Images

AI image generators create photorealistic images from text prompts. A fake image of Pope Francis in a white puffer jacket went viral in March 2023 — millions believed it was real. AI-generated images of Trump being arrested circulated before his actual indictment. During the 2024 election cycle, AI-generated images of candidates in fabricated scenarios appeared daily. The images are indistinguishable from photographs to the untrained eye.

📝
AI-Generated Text

AI writes articles, social media posts, reviews, and comments that are indistinguishable from human writing. A single operator can generate thousands of unique "news articles" per day, each with different phrasing, structure, and style. AI-generated product reviews, academic papers with fabricated data, and fake news sites are already flooding the internet. Detection tools exist but are increasingly unreliable as the generated text improves.

The Accessibility Problem

Deepfake creation tools are free, open-source, and require no technical expertise. A teenager with a laptop can create a convincing deepfake in 30 minutes. This isn't a state-actor threat — it's an everyone threat. The technology that was once limited to movie studios is now available to anyone with an internet connection.

Part 3

The Liar's Dividend

The most dangerous effect of deepfakes isn't the fake content that gets created. It's what happens to real content.

Before deepfakes: "The video shows it happened."

After deepfakes: "That video could be AI-generated. You can't prove it's real."

This is the liar's dividend: when deepfakes exist, ALL video evidence becomes questionable. A politician caught on camera making a racist remark can claim the video is fabricated. A defendant in a trial can argue that surveillance footage was AI-generated. A whistleblower's video evidence can be dismissed as a deepfake.

The liar's dividend doesn't require anyone to actually create a deepfake. The mere existence of the technology provides plausible deniability for anyone confronted with genuine evidence. The cost of deepfakes isn't just the lies that get created — it's the truths that get dismissed.

ALREADY HAPPENING

In multiple countries, politicians have dismissed unflattering real recordings by claiming they were AI-generated. Defense attorneys have begun challenging video evidence in court by raising the possibility of deepfakes. Verified real footage is being met with "prove it's not AI."

THE DEEPER PROBLEM

Courts, journalism, science, and democracy all depend on the concept of evidence — verifiable records of what actually happened. If any piece of evidence can be dismissed as AI-generated, the entire evidence-based system weakens. This isn't hypothetical. It's happening now.

Part 4

AI Threats to Academic Integrity

The misinformation problem isn't abstract for students — it's sitting in your classroom right now.

📚
Fabricated Citations

AI generates academic citations that look completely real — plausible author names, real-sounding journal titles, specific page numbers and DOIs — but reference papers that don't exist. A 2023 study found that ChatGPT fabricated citations in over 50% of academic queries. The citations follow proper APA or MLA format perfectly. The only way to catch them is to verify each one individually — a process most readers and even many reviewers don't do.

Why this matters for you: If you use AI to help with research and don't verify every citation, you may submit work with fake sources. This is academic dishonesty — and the fact that the AI made the error doesn't protect you.
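Part of that verification can be mechanized. The sketch below is a hypothetical helper (the function name and sample references are invented for illustration): it triages a reference list into entries with something that looks like a DOI, which can then be resolved at doi.org, and entries without one, which need a manual search. A syntactically valid DOI proves nothing by itself; every flagged entry still has to be checked by hand.

```python
import re

# Hypothetical helper: a syntactic pre-check for citations, not a proof of
# existence. A well-formed DOI can still point at a paper that was never
# published, so each item below must still be resolved manually (e.g. at
# doi.org or on the journal's own site).
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[^\s]+")

def triage_citations(citations):
    """Split citations into 'has a DOI to resolve' vs 'no DOI: search manually'."""
    resolvable, manual = [], []
    for c in citations:
        match = DOI_PATTERN.search(c)
        if match:
            resolvable.append((c, match.group(0).rstrip(".,;")))
        else:
            manual.append(c)
    return resolvable, manual

# Invented sample references, for illustration only
refs = [
    "Smith, J. (2021). Trust and synthetic media. J. Media Ethics. doi:10.1234/jme.2021.0042",
    "Lee, A. (2020). Deepfakes in court. (no DOI given)",
]
resolvable, manual = triage_citations(refs)
```

A tool like this only sorts the work; the academic-integrity obligation, confirming that each source actually exists and says what you claim it says, stays with the author.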

🧪
Fabricated Research Data

AI can generate realistic-looking datasets, statistical results, and experimental findings. In 2023, multiple papers with suspected AI-generated data were retracted from peer-reviewed journals. One investigation found an entire "paper mill" using AI to generate hundreds of fake research papers submitted to academic journals. The generated data passes basic statistical checks because the AI understands what plausible data looks like.

✍️
AI-Written Assignments

AI generates essays, reports, and analysis that are increasingly difficult to distinguish from student writing. Detection tools (GPTZero, Turnitin's AI detector) have high false-positive rates — flagging genuine student work as AI-generated, and missing sophisticated AI-generated text. This creates a double bind: students who use AI risk undetected dishonesty, while students who don't risk being falsely accused.

The Trust Infrastructure

Academic scholarship depends on a chain of trust: researchers cite sources → readers trust the citations are real → knowledge builds on verified foundations. AI-generated citations attack this chain at its root. If you can't trust that a cited source exists, the entire structure of academic knowledge becomes unstable. This is why citation fabrication isn't just cheating — it's an attack on the infrastructure of knowledge itself.

Part 5

The Epistemic Crisis: What Happens When Nobody Agrees on Reality?

Democracy requires that citizens share a basic understanding of reality. Voters need to agree on what happened — even if they disagree about what to do about it. AI-generated content attacks this shared foundation.

The Dead Internet Theory

An increasing percentage of online content — comments, reviews, articles, social media posts — is generated by AI bots rather than humans. Some estimates suggest that over 50% of web traffic is already non-human. As AI-generated content floods every platform, authentic human discourse becomes harder to find, harder to identify, and eventually harder to trust. You may already be arguing online with a bot without knowing it.

The Marketplace of Ideas — Broken

The classic defense of free speech assumes that truth will prevail in open debate — bad ideas lose to good ones through competition. This assumes rough parity between truth and falsehood. AI breaks this assumption: fabrication is now infinitely cheaper and faster than verification. One person can generate more false content in an hour than every fact-checker on earth can debunk in a year. When lies are free and truth is expensive, the marketplace isn't competitive — it's flooded.

The Stakes

An epistemic crisis doesn't mean "people believe wrong things." It means people lose the ability to distinguish truth from falsehood entirely — and eventually stop trying. When every source is potentially AI-generated, when every video might be a deepfake, when every "expert" might be a bot, the rational response is to trust nothing. And a society where nobody trusts anything is a society that can't function — can't hold elections, can't conduct trials, can't do science, can't maintain public health.

Part 6

Fighting Back: What Works (and What Doesn't)

The misinformation crisis has no single solution. Every countermeasure has limitations. But the combination of technical, educational, and regulatory approaches can slow the damage.

TECHNICAL

Content Provenance (C2PA)

Cryptographic metadata embedded at creation that records origin, edits, and AI involvement — like a chain of custody for digital media. Adopted by Adobe, Microsoft, BBC, Sony, Nikon. Cameras embed provenance data at the moment of capture. If the metadata is missing or tampered with, the content is flagged.

Limitation: Only works if adopted universally. Content without provenance isn't necessarily fake — it might just be from a non-participating source.
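The core idea behind provenance can be sketched in a few lines. The toy below is not C2PA, which uses X.509 certificates and CBOR manifests embedded at capture time; it uses Python's standard hashlib and hmac with a made-up shared key, purely to show the principle that any change to the content or to its recorded edit history invalidates the seal.

```python
import hashlib
import hmac
import json

# Toy stand-in for a device or publisher signing key (real C2PA uses
# certificate-based signatures, not a shared secret).
SECRET = b"demo-signing-key"

def sign_manifest(content: bytes, history: list) -> dict:
    """Bind a content hash and its edit history together under one signature."""
    manifest = {"sha256": hashlib.sha256(content).hexdigest(), "history": history}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content still matches its hash."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        hmac.new(SECRET, payload, hashlib.sha256).hexdigest(),
        manifest["signature"],
    )
    ok_hash = manifest["sha256"] == hashlib.sha256(content).hexdigest()
    return ok_sig and ok_hash

photo = b"raw pixels from the camera sensor"
m = sign_manifest(photo, ["captured 2024-05-01", "cropped"])
# verify(photo, m) passes; verify(b"tampered pixels", m) fails
```

The design choice this illustrates: provenance does not say whether content is true, only whether it is unchanged since a trusted party signed it, which is exactly why missing provenance (the limitation above) cannot be read as proof of fakery.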

TECHNICAL

AI Detection Tools

Algorithms that analyze content for statistical patterns typical of AI generation — word frequency distributions, pixel-level artifacts in images, audio spectral signatures.

Limitation: Arms race. Detection improves → generators adapt → detection becomes unreliable → new detection methods emerge → generators adapt again. Current detection tools have error rates of 10-30% — high enough to be unreliable in any high-stakes context.
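A toy illustration of the "statistical patterns" idea: lexical diversity (type-token ratio) is one classic stylometric feature. The sample texts and the implied signal below are invented for illustration; real detectors combine many such weak signals, and the ease with which a generator can simply be prompted to vary its vocabulary is exactly why the arms race favors the generator.

```python
# Toy "detector" feature: how much of a text's vocabulary is distinct.
# This is one weak stylometric signal, shown only to illustrate the
# statistical approach; on its own it is nowhere near reliable.

def type_token_ratio(text: str) -> float:
    """Fraction of words in the text that are distinct (0 < ratio <= 1)."""
    words = text.lower().split()
    return len(set(words)) / len(words)

# Invented samples: repetitive, informal text vs. evenly varied text
human = ("okay so basically the thing is the thing kept breaking and "
         "honestly the fix was the fix we tried before")
varied = ("each clause introduces fresh vocabulary because sampling spreads "
          "probability mass across many distinct tokens")

human_ttr = type_token_ratio(human)    # 0.75 (15 distinct words of 20)
varied_ttr = type_token_ratio(varied)  # 1.0 (every word distinct)
```

Any single feature like this is trivially evaded, which is why published error rates stay high: the generator controls the very statistics the detector measures.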

EDUCATIONAL

Media Literacy

Teaching people to critically evaluate sources, check provenance, reverse-image search, and verify claims before sharing. Finland's media literacy curriculum is the global model — integrated into schools from age 6.

Limitation: Can't scale to match the volume of AI content. Even highly literate people can be fooled by sophisticated deepfakes. And literacy doesn't help when the content is indistinguishable from reality.

REGULATORY

Disclosure Laws

The EU AI Act requires labeling AI-generated content. China requires deepfakes to carry visible watermarks. Several U.S. states have passed laws against deepfakes in elections and non-consensual intimate imagery.

Limitation: Bad actors don't follow labeling laws. Cross-border enforcement is nearly impossible. A deepfake created in country A, hosted in country B, and viewed in country C falls under which jurisdiction?

The Uncomfortable Truth

No combination of technology, education, and law can fully solve the AI misinformation problem. The asymmetry is structural: creating false content will always be cheaper than verifying it. The goal isn't to eliminate misinformation — it's to preserve enough trusted channels and verification infrastructure that society can still function. That's a lower bar than we'd like, and it's not guaranteed.

Interactive Exercise

Information Triage

You encounter each of the following pieces of content online. Sort each into the correct response category; the categories describe what you should do when encountering potentially AI-generated content.

Tap & Place Exercise
Tap a scenario, then tap the appropriate response. Sort all 6 correctly to advance.
A news article with a shocking headline from a site you've never heard of
A video of a political candidate saying something inflammatory with no source attribution
An academic paper cited by a classmate with authors and journal you can't find online
A voice message from "your bank" asking you to confirm account details urgently
An article from a verified news outlet with named reporters and cited sources you can check
A research finding published in a peer-reviewed journal you can access through your library
Trust (Verified Source)
Verify Before Sharing
Flag as Suspicious
All 6 correct! The key skill: trust verified sources, verify before sharing unconfirmed content, and flag content with manipulation indicators (no attribution, urgency pressure, inflammatory framing).
What Would You Do?

Branching Scenario: The Viral Video

Stage 1 of 3

A video circulates on social media showing your college's president making racist remarks at what appears to be a private dinner. The video looks authentic — correct voice, mannerisms, and setting. Students are outraged. A petition demanding resignation gets 2,000 signatures in 3 hours. The president's office has not responded yet.

What should students do before acting on this video?

Demand immediate resignation — the video speaks for itself
Wait for verification — demand the administration confirm or deny the video's authenticity before taking action
The Deepfake Trap

Acting on unverified video evidence is exactly what a deepfake is designed to provoke. If the video is real, a few hours of verification won't change the outcome — the president's words are still on record. But if it's fake, acting before verification causes irreparable harm to a real person based on fabricated evidence. In the deepfake era, speed of reaction is the attacker's advantage and the victim's vulnerability.

Due Diligence

Demanding verification isn't defending the president — it's defending the principle that consequences should follow from evidence, not appearances. Key verification steps: check if the video has provenance metadata, look for the original source, see if any credible news outlet has independently confirmed it, and ask whether forensic analysis has been performed. This takes hours, not weeks. Outrage can wait for evidence.

AI Interaction Lab

Explore Misinformation & Deepfakes With a Live AI

Ask about deepfakes, content provenance, epistemic crisis, or anything from this module.

Live AI Teaching Assistant (20 messages remaining)
Module 7 Checkpoint

Your Key Takeaways

Seven concepts about truth in the age of AI.

📈

The Asymmetry

Fabrication is infinitely cheaper and faster than verification. One person generates more false content than every fact-checker on earth can debunk. AI broke the economics of truth.

🎬

Deepfake Capabilities

Video face-swapping, voice cloning from 5 seconds of audio, photorealistic image generation, indistinguishable AI text. Free tools, no expertise needed. A teenager with a laptop in 30 minutes.

🤥

The Liar's Dividend

When deepfakes exist, ALL evidence becomes questionable. Real footage can be dismissed as fabricated. The technology benefits liars even when no deepfake is created.

📚

AI Citation Fabrication

AI generates citations with real-sounding authors and journals that don't exist. One 2023 study found fabricated citations in over half of academic queries. Verify every citation individually.

🌐

Epistemic Crisis

When AI generates unlimited false content, shared reality fractures. The marketplace of ideas fails because lies are free and truth is expensive. Society can't function when nobody trusts anything.

🔐

Content Provenance (C2PA)

Cryptographic chain of custody for digital media. Records origin, edits, and AI involvement. Adopted by Adobe, Microsoft, BBC, Sony. The most promising technical countermeasure.

🧠

Continued Influence Effect

Debunking never fully erases the original impression. Emotional first reactions create stronger memories than corrections. Prevention beats debunking every time.

Module 7 Assessment

Check Your Understanding

5 questions drawn from the module. You need 80% to pass.

Module Complete


STUDY GUIDE

Download the study guide for this module as a reference.

📄 Download Module 07 Study Guide