EduPolicy.ai — Students Edition

Module 2

The Ethics Problem




🎓

The Ethics Problem

Students Edition

AI can diagnose cancer, deny loans, write legislation, and generate deepfakes. The technology doesn't know the difference between any of these. That's the problem.

Module 2 of 8 — AI Ethics for Higher Education
Powered by EduPolicy.ai
Part 1

Why AI Creates Ethical Challenges That Didn't Exist Before

A human loan officer who denies someone a mortgage can explain why. A judge who sentences someone can be questioned in open court. A doctor who recommends surgery can describe their reasoning.

AI systems do all three of these things now — and none of them can explain themselves. That's not a bug. It's the nature of how these systems work.

Three tensions sit at the center of every AI ethics problem. Tap each one to see a real-world example.

Power Without Accountability

AI systems make decisions that affect millions. Nobody signed off on each one.
Example: Facebook's algorithm amplified divisive political content to 2.9 billion users because engagement metrics rewarded outrage. No human at Facebook decided to radicalize your uncle. The system optimized for clicks, and radicalization was a side effect.
🏎️

Speed Without Deliberation

AI operates in milliseconds. Ethics requires reflection that takes longer than that.
Example: High-frequency trading algorithms execute thousands of trades per second. In 2010, the "Flash Crash" erased $1 trillion in market value in 36 minutes. By the time humans understood what happened, the damage was done.
🌍

Scale Without Individual Consideration

AI applies the same rule to everyone. Fairness often requires treating people differently.
Example: Amazon built an AI recruiting tool trained on 10 years of hiring data. It learned to penalize resumes containing the word "women's" (as in "women's chess club") because the historical data reflected a male-dominated workforce. It applied this pattern to every applicant equally — which is exactly the problem.
Key Insight

These three tensions — power, speed, and scale — are why "don't be evil" isn't enough. Good intentions don't prevent systemic harm when the system operates beyond human oversight.

Part 2

Three Lenses for Every AI Ethics Problem

For the rest of this course, you'll analyze AI ethics problems through three philosophical frameworks. These aren't just academic — they're the actual arguments used in courtrooms, boardrooms, and legislatures right now.

Utilitarianism
Core question: "Does this produce the greatest good for the greatest number?"
Judges by: Outcomes. The ends can justify the means.
AI example: "Facial recognition reduces crime by 30% — worth the privacy cost."
Weakness: Can justify harming minorities if it benefits the majority.

Deontology
Core question: "Does this follow a rule that should apply to everyone?"
Judges by: Rules. Some actions are wrong regardless of outcome.
AI example: "Mass surveillance is wrong even if it prevents crime."
Weakness: Rules can conflict with each other. Which rule wins?

Virtue Ethics
Core question: "What would a good person do in this situation?"
Judges by: Character. What kind of society are we becoming?
AI example: "A just society doesn't treat its citizens as suspects."
Weakness: "Good person" is subjective and culturally dependent.
Why This Matters

Most real AI ethics debates are actually arguments between people using different frameworks without realizing it. When you can name the framework, you can have a productive disagreement instead of talking past each other.

Interactive Exercise

Apply the Frameworks

Scenario: A hospital uses an AI system to decide which patients receive organ transplants. The AI prioritizes patients with the highest predicted survival rates, which means younger, wealthier patients consistently rank higher.

Each ethical framework responds differently. Tap a response, then tap the framework it belongs to.

Tap & Place Exercise
Tap a response card, then tap the framework slot it matches. Sort all 6 correctly to advance.
This system saves the most life-years overall — that justifies prioritizing survival rates
Every person has equal right to medical care regardless of predicted outcomes
A compassionate society wouldn't let an algorithm decide who lives and dies
The AI allocates scarce organs more efficiently than human committees did
Using wealth as a proxy for survival violates the principle of equal treatment
We should ask what kind of healthcare system we want to be, not just what's efficient
Framework slots: Utilitarian (2), Deontological (2), Virtue Ethics (2)
All 6 correct! You can now apply these three lenses to any AI ethics scenario you encounter.
Some items are in the wrong framework. Tap placed tiles to return them, then try again.
Part 3

The Trolley Problem Isn't Hypothetical Anymore

Philosophy students have debated the trolley problem for decades as a thought experiment. AI made it real.

Real Scenario

An autonomous vehicle's braking system detects an unavoidable collision. It can swerve left (hitting a pedestrian) or swerve right (hitting a concrete barrier, injuring the passenger). The AI makes this decision in 0.3 seconds.

A human driver in the same situation acts on instinct. Nobody blames them for the outcome. But the AI's decision was pre-programmed. Someone wrote the code that weighted those options. Someone tested it. Someone approved it.

This creates a fundamentally new ethical problem: moral responsibility for decisions that happen faster than human thought.

HUMAN DRIVER

Acts on instinct. Society doesn't blame panic reactions. "They did the best they could."

AI DRIVER

Acts on code. The decision was made months earlier by an engineer. The outcome was predetermined.

The Hard Question

If the AI's decision was pre-programmed, is it the engineer's fault? The company's? The regulator who approved the car? The customer who bought "self-driving"? Nobody agrees — and that disagreement is the ethics problem.

Part 4

When Accuracy and Fairness Collide

Sometimes an AI system is statistically accurate AND ethically unacceptable at the same time. This exercise makes that tension tangible.

Round 1
An AI hiring tool screens 500 resumes for a software engineering role. It rejects 300 candidates. Analysis shows women are rejected at a far higher rate than men (78% of women applicants rejected vs. 42% of men). The AI was trained on 10 years of hiring data from a company that historically employed 85% male engineers.

Is the AI biased?

Yes — the AI is discriminating against women
No — the AI is accurately reflecting the training data
Both — it's accurate AND biased, which is the problem
The answer is both. The AI perfectly learned the patterns in its training data — which included a decade of human hiring bias. It's not "making mistakes." It's faithfully reproducing systemic discrimination with mathematical precision. That's what makes AI bias dangerous: the system is working exactly as designed.
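To make the arithmetic concrete, here is a minimal Python sketch of how an auditor might quantify the disparity in this scenario. The 250/250 split of applicants by gender is an assumption for illustration; the scenario only specifies the totals and the per-group rejection rates.

```python
# Minimal sketch: a system can faithfully reproduce its training data and
# still fail a standard fairness check. The 250/250 applicant split is assumed.

applicants = {"women": 250, "men": 250}   # hypothetical split of the 500 resumes
rejected = {"women": 195, "men": 105}     # 78% and 42% rejection rates, 300 total

# Selection rate per group (share of each group that advances)
selection_rate = {g: (applicants[g] - rejected[g]) / applicants[g] for g in applicants}

# Disparate-impact ratio: lowest group selection rate divided by the highest.
# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8.
ratio = min(selection_rate.values()) / max(selection_rate.values())

print(selection_rate)                          # {'women': 0.22, 'men': 0.58}
print(f"Disparate-impact ratio: {ratio:.2f}")  # 0.38, far below the 0.8 threshold
```

Under these assumptions the tool is "accurate" to its historical data yet fails a widely used fairness screen, which is exactly the tension this round is meant to surface.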
Part 5

Who Is Responsible?

An AI-powered medical imaging system misdiagnoses a tumor as benign. The patient delays treatment by six months. By the time the error is caught, the cancer has spread to stage 4.

Four parties were involved. Use the sliders to assign responsibility. Your total must equal 100%.

Total: 100%
Part 6

Case Study: When AI Decides Who Goes to Prison

COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is an AI system used by courts across the United States to predict whether a defendant will commit another crime. Judges use its scores when setting bail, sentencing, and parole.

Click each layer to reveal what went wrong.

1
The Promise

COMPAS was marketed as an objective alternative to human bias in sentencing. It would evaluate risk based on data, not on a judge's mood, prejudice, or fatigue. It generates a risk score from 1-10 for each defendant.

2
The Data Problem

In 2016, ProPublica analyzed 7,000 COMPAS predictions in Broward County, Florida. Black defendants were flagged as "high risk" at nearly twice the rate of white defendants. Black defendants who did NOT reoffend were still labeled high-risk 45% of the time, vs. 23% for white defendants who didn't reoffend.

3
The Opacity Problem

When challenged, Northpointe (the company behind COMPAS) refused to reveal its algorithm, claiming it was a proprietary trade secret. Defendants couldn't examine the tool used to determine their freedom. Courts struggled with this — how do you challenge evidence you can't see?

4
The Human Impact

Real people received longer sentences and higher bail, and were denied parole, based on a score they couldn't examine or contest. The Wisconsin Supreme Court ruled that COMPAS could still be used in sentencing despite acknowledging its flaws. The human cost: incarceration decisions outsourced to an opaque, biased algorithm.

What COMPAS Teaches

Every major AI ethics concern appears in this one case: biased training data, the black-box opacity problem, an accountability vacuum (who's responsible — the company, the court, the legislature?), and real human harm at scale. The case appears throughout the academic literature on AI ethics and is a landmark example you should know.
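A short note on the numbers in layer 2: the disparity ProPublica described is a gap in false positive rates, the share of defendants who did not reoffend but were still labeled high risk. The sketch below shows how that rate is computed; the group sizes are illustrative only, chosen to reproduce the reported 45% and 23% figures rather than taken from the actual Broward County data.

```python
# False positive rate (FPR): of the people who did NOT reoffend, how many
# were still labeled "high risk"? Group sizes are illustrative, not real.

def false_positive_rate(labeled_high_risk: int, non_reoffenders: int) -> float:
    """FPR = false positives / all actual negatives (people who did not reoffend)."""
    return labeled_high_risk / non_reoffenders

groups = {
    # group: (non-reoffenders labeled high risk, total non-reoffenders)
    "Black defendants": (450, 1000),
    "White defendants": (230, 1000),
}

for name, (false_positives, negatives) in groups.items():
    print(f"{name}: FPR = {false_positive_rate(false_positives, negatives):.0%}")
# Black defendants: FPR = 45%
# White defendants: FPR = 23%
```

Note that the overall rate of being labeled high risk and the false positive rate are different measures: the "nearly twice the rate" figure refers to the former, and the 45% vs. 23% comparison to the latter.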

What Would You Do?

Branching Scenario: The Admissions Algorithm

Stage 1 of 3

You're the Vice President of Academic Affairs at a community college. The admissions office proposes an AI tool that predicts which applicants will complete their degree. They claim it will improve retention rates by 22% and save $1.4 million in wasted financial aid. The vendor offers a free pilot program.

What do you do?

Approve the pilot — improving retention helps students succeed
Approve with conditions — require transparency about what data the AI uses
Decline — predictive tools risk labeling students before they've had a chance
Consider

Approving without conditions means you don't know what the AI uses to make predictions. If it correlates zip code with completion rates, you've effectively built a tool that discriminates by neighborhood income level. The utilitarian argument (better outcomes overall) is real, but so is the risk.

Smart Move

Conditional approval is the most defensible position. You get the potential benefit while protecting students from opaque decision-making. Transparency requirements force the vendor to explain what variables the AI uses — and which ones it doesn't.

Principled Position

The deontological argument: predicting failure before giving someone a chance violates the principle of equal opportunity. This is a defensible position, but it means accepting the current retention rates. The board may push back with: "We lose 400 students a year. You turned down a tool that could save 88 of them."

AI Interaction Lab

Explore AI Ethics With a Live AI

Ask questions about ethical frameworks, accountability, bias, or anything from this module. The AI is scoped to AI ethics topics and will push you to think critically.

Live AI Teaching Assistant (20 messages remaining)
Module 2 Checkpoint

Your Key Takeaways

Seven concepts from this module that you'll use for the rest of the course.

Three Tensions

Power without accountability, speed without deliberation, scale without individual consideration. Every AI ethics problem stems from at least one.

🔍

Utilitarianism

Greatest good for the greatest number. Judges by outcomes. Can justify harming minorities if it benefits the majority.

📏

Deontology

Rules-based ethics. Some things are wrong regardless of outcome. "Mass surveillance is wrong even if it prevents crime."

🧭

Virtue Ethics

Character-based. "What kind of society are we becoming?" Focuses on who we want to be, not just what we should do.

🎯

Proxy Discrimination

Removing protected attributes doesn't remove bias. Correlated features (zip code, name, employment gaps) carry the same discriminatory patterns.

⚖️

The Accountability Gap

When AI causes harm, responsibility is distributed across developers, deployers, users, and regulators — and often nobody accepts it.

🏛️

COMPAS & Real-World Impact

AI in criminal sentencing demonstrates bias, opacity, and accountability failure in one case. Courts used it despite knowing it was flawed.

Module 2 Assessment

Check Your Understanding

5 questions drawn from the module. You need 80% to pass.

Module Complete

Your Results


STUDY GUIDE

Download the study guide for this module as a reference.

📄 Download Module 02 Study Guide