EduPolicy.ai — Students Edition

Module 6

Governance & Regulation



AI is transforming every sector of society, and nobody elected it. No vote approved facial recognition in your city. No legislature debated whether algorithms should decide who gets a loan. The technology deployed first; now governments are scrambling to catch up. This module examines who should govern AI, how, and whether it's already too late.

Module 6 of 8 — AI Ethics for Higher Education
Powered by EduPolicy.ai
Part 1

Why "Trust Us" Isn't Working

For most of AI's commercial history, the industry's answer to governance was self-regulation: voluntary ethics principles, internal review boards, and published AI guidelines. The track record is not encouraging.

🔥
Google's AI Ethics Board (2019)
Lasted exactly one week
🗑️
Meta's Responsible AI Team (2023)
Disbanded during "Year of Efficiency"
📋
AI Ethics Principles (Everyone)
Published widely, enforced rarely
⚔️
OpenAI's Governance Crisis (2023)
Board fired the CEO, employees threatened to quit, board capitulated
The Pattern

Self-regulation fails for the same reason in every industry: the entity being regulated profits from the behavior it's supposed to limit. AI companies are not uniquely evil. They face the same structural conflict of interest that led to environmental regulation, financial regulation, and pharmaceutical regulation. The solution in every case was external oversight with enforcement power.

Part 2

The EU AI Act: The World's First AI Law

In 2024, the European Union passed the AI Act — the first comprehensive law governing artificial intelligence anywhere in the world. It uses a risk-based framework: the higher the risk, the stricter the rules.

Tap each tier to see what it covers and what's required.

BANNED
Unacceptable Risk

These are prohibited entirely:

• Government social credit scoring systems (like China's)
• Real-time mass biometric surveillance in public spaces (with limited law enforcement exceptions)
• AI that manipulates human behavior to cause harm (subliminal manipulation)
• Emotion recognition in workplaces and schools
• Predictive policing based solely on profiling

Why banned: These applications pose fundamental threats to human dignity, democracy, and individual autonomy that no amount of regulation can make acceptable.

REGULATED
High Risk

Allowed but heavily regulated:

• AI in hiring and worker management
• Credit scoring and insurance pricing
• Law enforcement (facial recognition, crime prediction)
• Immigration and border control
• Educational testing and student assessment
• Healthcare diagnostics and triage

Requirements: Mandatory risk assessments before deployment, human oversight at all times, detailed technical documentation, bias testing across demographic groups, incident reporting, and registration in an EU database. Fines up to €35 million or 7% of global revenue for violations.

TRANSPARENCY
Limited Risk

Allowed with disclosure requirements:

• Chatbots (must disclose that the user is interacting with AI)
• Deepfake generators (content must be labeled as AI-generated)
• Emotion detection systems (users must be informed)
• AI-generated text used in media (must be disclosed)

Key principle: People have the right to know when they're interacting with AI or consuming AI-generated content.

UNRESTRICTED
Minimal Risk

No specific requirements:

• Spam filters
• Video game AI
• Inventory management systems
• Weather prediction models
• Music recommendation algorithms

The vast majority of AI systems fall here. The Act deliberately avoids regulating low-risk applications to avoid stifling innovation.
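The Act's risk-based logic can be thought of as a lookup: classify the use case, then apply that tier's obligations. The toy sketch below illustrates the idea using only the simplified examples from the tiers above — the category names and use-case labels are this module's shorthand, not the Act's legal definitions.

```python
# Toy sketch of the EU AI Act's risk-based logic, using the simplified
# tier examples from this module. Illustrative only — not legal guidance.
RISK_TIERS = {
    "unacceptable": {"social credit scoring", "mass biometric surveillance",
                     "subliminal manipulation", "workplace emotion recognition"},
    "high": {"hiring", "credit scoring", "law enforcement",
             "border control", "student assessment", "medical triage"},
    "limited": {"chatbot", "deepfake generator", "emotion detection"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case; anything unlisted is minimal risk."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "minimal"

print(classify("hiring"))       # high — audits, oversight, EU database registration
print(classify("chatbot"))      # limited — must disclose the user is talking to AI
print(classify("spam filter"))  # minimal — no specific requirements
```

Note the default: any system not explicitly listed falls into the minimal tier, which mirrors the Act's design choice of leaving most AI unregulated.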

The Brussels Effect

Any company serving EU customers must comply — regardless of where the company is based. Just as GDPR became the global privacy standard (it was easier to build one compliant system than maintain separate versions), the AI Act may become the de facto global AI governance standard. American and Chinese AI companies building products for European markets will need to comply.

Part 3

The American Approach: Innovation First

The United States has no comprehensive AI law. Instead, it relies on a patchwork of executive orders, agency guidance, and state-level regulations. The underlying philosophy: don't regulate innovation before you understand it.

OCT 2022
White House publishes AI Bill of Rights — voluntary guidelines, no enforcement mechanism
JAN 2023
NIST releases AI Risk Management Framework — voluntary industry standard
JUL 2023
NYC Local Law 144 takes effect — first U.S. law requiring bias audits of AI hiring tools
JUL 2023
7 leading AI companies make voluntary safety commitments at White House meeting — no legal obligation
OCT 2023
Biden Executive Order on AI — most comprehensive federal action, but not legislation; can be revoked by the next president
2024–25
California, Colorado, and Illinois pass state AI laws. No federal legislation advances. 50 states, 50 potential approaches.
PROPONENTS SAY

"Light regulation keeps America competitive. Premature laws could kill beneficial AI applications before they mature. The market will self-correct — bad AI products will fail."

CRITICS SAY

"'Wait and see' means millions of people are affected by unregulated AI while Congress debates. Voluntary commitments are unenforceable. State-by-state regulation creates chaos for companies and gaps in protection for citizens."

The Enforcement Gap

The U.S. has regulators that COULD govern AI — the FTC (consumer protection), EEOC (employment discrimination), FDA (medical devices), SEC (financial markets) — but none were designed for AI and none have explicit AI authority. They're using 20th-century laws to regulate 21st-century technology. The FTC has brought AI-related enforcement actions, but on a case-by-case basis, not as systemic regulation.

Part 4

Three Models, One Technology, No Agreement

The world's three largest AI powers have fundamentally different governance philosophies. AI companies operating globally must navigate all three simultaneously.

EUROPEAN UNION

Rights-Based

Philosophy: AI must respect fundamental human rights. Regulate first, innovate within boundaries.

Strengths: Strong citizen protection, clear rules, global influence via Brussels Effect.

Criticism: May slow European AI development. Companies may build innovation elsewhere and import compliant products.

UNITED STATES

Innovation-First

Philosophy: Don't constrain innovation. Regulate specific harms as they emerge. The market will find equilibrium.

Strengths: Home to most leading AI companies. Fastest development cycle. Flexibility.

Criticism: Citizens are unprotected while Congress debates. Voluntary commitments are unenforceable. State patchwork creates confusion.

CHINA

State-Directed

Philosophy: AI serves national interests. Promote development for economic and military advantage. Control AI that threatens social stability or party authority.

Strengths: Fast implementation, coordinates industry with state goals, has regulated deepfakes and recommendation algorithms.

Criticism: Regulation protects the state, not citizens. Enables mass surveillance. No independent oversight.

The Fragmentation Problem

No binding global AI treaty exists. An AI system legal in the U.S. might be banned in the EU and mandated in China. Companies building AI for global markets face contradictory requirements. And the people most harmed by AI — often in developing countries used as testing grounds — have the least voice in governance decisions.

Part 5

The Pacing Problem: Law Can't Keep Up

GPT-3 launched in June 2020. GPT-4 launched in March 2023. The EU AI Act was proposed in April 2021 and passed in March 2024 — three years of negotiation. By the time the law was finalized, the technology it was designed to regulate had changed beyond recognition.

AI DEVELOPMENT SPEED

New model every 6-12 months. Capabilities emerge unexpectedly (reasoning, code generation, image creation appeared without being explicitly programmed). Deployment goes from lab to billions of users in weeks (ChatGPT reached 100 million users in 2 months).

LEGISLATIVE SPEED

Years to draft, debate, and pass legislation. Requires consensus among lawmakers who often don't understand the technology. Implementation takes additional years. By the time rules are enforced, the AI landscape has shifted fundamentally.

UNDER-REGULATE
↓ Harm continues
OVER-REGULATE
↓ Innovation stalls
The Dilemma

Regulate too early and you might ban beneficial technology that hasn't been invented yet. Regulate too late and millions of people are harmed while you deliberate. There is no regulation timing that avoids both risks. Every governance decision is a bet on which risk matters more — and reasonable people disagree profoundly.

Part 6

What Would Good AI Governance Actually Look Like?

Despite the challenges, there's growing consensus on a set of governance mechanisms that work — even if no single country has adopted all of them.

📋
Algorithmic Impact Assessments

Before deploying a high-risk AI system, companies must evaluate potential harms — like environmental impact assessments for construction projects. Who could be affected? How? What safeguards exist? Published publicly so affected communities can respond. Canada already requires these for government AI systems.

🔍
Mandatory Auditing

Regular, independent testing of AI systems for bias, accuracy, and disparate impact — similar to financial audits. Not optional. Not self-reported. Conducted by independent third parties with the authority to require changes. NYC's Local Law 144 is the first implementation of this in the U.S.

🚨
Incident Reporting

When AI systems cause harm, companies must report it — like aviation incident reporting or pharmaceutical adverse events. A central database of AI failures creates an evidence base for regulation and lets other organizations learn from mistakes. The EU AI Act requires this for high-risk systems.

👥
Affected Community Participation

People affected by AI systems should have a voice in how those systems are governed — not just technologists and companies. Community input on facial recognition in their neighborhood. Worker participation in automation decisions. Patient input on AI diagnostic tools. Governance without affected voices is governance for the powerful.

⚖️
Right to Explanation & Appeal

When an AI system makes a decision that affects you (denied a loan, rejected for a job, flagged by law enforcement), you should have the right to: know AI was involved, understand the key factors, and challenge the decision before a human. GDPR Article 22 provides this in the EU. No equivalent exists in U.S. federal law.

Interactive Exercise

Design the Regulation

A city wants to deploy AI-powered cameras in public parks to improve safety. Sort each governance requirement into the correct category: what should be required BEFORE deployment vs. what should be required AFTER deployment.

Tap & Place Exercise
Tap a requirement, then tap the column it belongs in. Sort all 6 correctly to advance.
Algorithmic impact assessment evaluating effects on different communities
Quarterly bias audits by independent third party
Public comment period for residents of affected neighborhoods
Mandatory incident reporting when system misidentifies someone
Accuracy testing across racial and age demographic groups
Annual review of whether the system should continue operating
Before Deployment
1
2
3
After Deployment
1
2
3
All 6 correct! Good AI governance requires oversight BOTH before and after deployment — upfront assessment prevents foreseeable harm, ongoing auditing catches problems that emerge in practice.
Some items are in the wrong column. Tap placed tiles to return them, then try again.
What Would You Do?

Branching Scenario: The State Legislature

Stage 1 of 3

You're a legislative aide in your state capitol. A bill is proposed requiring all AI systems used in government decision-making (benefits eligibility, parole decisions, child welfare assessments) to pass an independent bias audit before deployment. The tech industry lobbies against it, arguing it would slow government modernization by 2-3 years. Civil rights organizations support it, citing documented cases of AI bias in government systems.

What do you recommend to your legislator?

Support the bill — government AI affecting people's lives must be tested for bias before deployment
Oppose the bill — a 2-3 year delay means government services stay inefficient while people wait
Rights-Based Reasoning

Government AI systems determine who receives benefits, who goes to prison, and which families are investigated by child services. These decisions directly affect fundamental rights. The 2-3 year delay is real — but deploying biased systems means systematically disadvantaging vulnerable populations for years until someone catches the problem. Pre-deployment auditing is cheaper than post-deployment lawsuits, and more importantly, it prevents harm rather than compensating for it after the fact.

The Efficiency Argument Has Costs

The urgency is real — government services are often slow and underfunded. But "move fast" with AI that determines parole or child welfare has a very different risk profile than "move fast" with a new scheduling tool. A biased parole algorithm kept in production for speed means real people stay in prison longer because of their race. The efficiency gain doesn't offset that cost — and it doesn't have to, because the audit can be built into the development timeline rather than added after.

AI Interaction Lab

Explore AI Governance With a Live AI

Ask about AI regulation, the EU AI Act, governance challenges, or anything from this module.

Live AI Teaching Assistant — 20 messages remaining
Module 6 Checkpoint

Your Key Takeaways

Seven governance concepts to carry forward.

🔥

Self-Regulation Failed

Google's ethics board: 1 week. Meta's responsible AI team: disbanded. 167 organizations published AI ethics principles. None enforced them. The conflict of interest is structural.

🇪🇺

EU AI Act

World's first comprehensive AI law. Risk-based: bans social scoring and mass biometric surveillance, heavily regulates hiring/credit/law enforcement AI, requires transparency for chatbots and deepfakes. Fines up to 7% of global revenue.

🇺🇸

U.S.: No Federal AI Law

Executive orders (revocable), voluntary commitments (unenforceable), agency guidance (limited authority), state patchwork (inconsistent). NYC Local Law 144 is the first mandatory AI hiring audit.

🌍

Three Models, No Agreement

EU: rights-based. U.S.: innovation-first. China: state-directed. No global AI treaty. Companies navigate contradictory requirements. Developing nations have the least voice.

⏱️

The Pacing Problem

AI evolves every 6-12 months. Laws take 3+ years. ChatGPT reached 100M users in 2 months. By the time regulation passes, the technology has transformed. Every governance timing is a gamble.

📋

Algorithmic Impact Assessments

Evaluate potential harms BEFORE deployment. Like environmental impact assessments. Canada requires them for government AI. Public comment periods let affected communities respond.

🔍

Independent Auditing

The auditor and the vendor must be separate. Companies can't audit their own AI, just as accounting firms can't audit their own clients. Independence is the foundation of credible oversight.

Module 6 Assessment

Check Your Understanding

5 questions drawn from the module. You need 80% to pass.


STUDY GUIDE

Download the study guide for this module as a reference.

📄 Download Module 06 Study Guide