The Ethics Problem
A human loan officer who denies someone a mortgage can explain why. A judge who sentences someone can be questioned in open court. A doctor who recommends surgery can describe their reasoning.
AI systems do all three of these things now — and none of them can explain themselves. That's not a bug. It's the nature of how these systems work.
Three tensions sit at the center of every AI ethics problem. Tap each one to see a real-world example.
These three tensions — power, speed, and scale — are why "don't be evil" isn't enough. Good intentions don't prevent systemic harm when the system operates beyond human oversight.
For the rest of this course, you'll analyze AI ethics problems through three philosophical frameworks. These aren't just academic — they're the actual arguments used in courtrooms, boardrooms, and legislatures right now.
Most real AI ethics debates are actually arguments between people using different frameworks without realizing it. When you can name the framework, you can have a productive disagreement instead of talking past each other.
Scenario: A hospital uses an AI system to decide which patients receive organ transplants. The AI prioritizes patients with the highest predicted survival rates, which means younger, wealthier patients consistently rank higher.
Each ethical framework responds differently. Tap a response, then tap the framework it belongs to.
Philosophy students have debated the trolley problem for decades as a thought experiment. AI made it real.
An autonomous vehicle detects an unavoidable collision. It can swerve left (hitting a pedestrian) or swerve right (hitting a concrete barrier and injuring the passenger). The AI makes this decision in 0.3 seconds.
A human driver in the same situation acts on instinct. Nobody blames them for the outcome. But the AI's decision was pre-programmed. Someone wrote the code that weighted those options. Someone tested it. Someone approved it.
This creates a fundamentally new ethical problem: moral responsibility for decisions that happen faster than human thought.
The human driver: acts on instinct. Society doesn't blame panic reactions. "They did the best they could."
The AI: acts on code. The decision was written months earlier by an engineer. The response was predetermined.
If the AI's decision was pre-programmed, is it the engineer's fault? The company's? The regulator who approved the car? The customer who bought "self-driving"? Nobody agrees — and that disagreement is the ethics problem.
Sometimes an AI system is statistically accurate AND ethically unacceptable at the same time. This exercise makes that tension tangible.
Is the AI biased?
An AI-powered medical imaging system misdiagnoses a tumor as benign. The patient delays treatment by six months. By the time the error is caught, the cancer has spread to stage 4.
Four parties were involved. Use the sliders to assign responsibility. Your total must equal 100%.
COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is an AI system used by courts across the United States to predict whether a defendant will commit another crime. Judges consult its scores when setting bail, at sentencing, and in parole decisions.
Click each layer to reveal what went wrong.
COMPAS was marketed as an objective alternative to human bias in sentencing. It would evaluate risk based on data, not on a judge's mood, prejudice, or fatigue. It generates a risk score from 1-10 for each defendant.
In 2016, ProPublica analyzed 7,000 COMPAS predictions in Broward County, Florida. Black defendants were flagged as "high risk" at nearly twice the rate of white defendants. Black defendants who did NOT reoffend were still labeled high-risk 45% of the time, vs. 23% for white defendants who didn't reoffend. (A short sketch of how that comparison is computed follows this case study.)
When challenged, Northpointe (the company behind COMPAS) refused to reveal its algorithm, claiming it was a proprietary trade secret. Defendants couldn't examine the tool used to determine their freedom. Courts struggled with this: how do you challenge evidence you can't see?
Real people received longer sentences and higher bail, and were denied parole, based on a score they couldn't examine or contest. The Wisconsin Supreme Court (State v. Loomis, 2016) upheld the use of COMPAS at sentencing even while acknowledging its limitations. The human cost: incarceration decisions outsourced to an opaque, biased algorithm.
Every major AI ethics concern appears in this one case: biased training data, opacity/black box problem, accountability vacuum (who's responsible — the company, the court, the legislature?), and real human harm at scale. This case appears throughout academic literature on AI ethics and is a landmark example you should know.
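To see concretely what "labeled high-risk 45% of the time, vs. 23%" measures, here is a minimal sketch of the kind of group-wise false-positive-rate comparison ProPublica ran. The records below are toy data invented for illustration (chosen only to point in the same direction as the published disparity), not the actual Broward County dataset.

# Minimal sketch: compare false-positive rates across groups.
# "False positive" here = flagged high-risk, but the person did NOT reoffend.
# Toy data for illustration only, not ProPublica's records.

records = [
    # (group, flagged_high_risk, reoffended)
    ("black", True,  False), ("black", True,  False),
    ("black", False, False), ("black", False, False),
    ("white", True,  False), ("white", False, False),
    ("white", False, False), ("white", False, False),
]

def false_positive_rate(rows, group):
    """Share of non-reoffenders in `group` who were still flagged high-risk."""
    non_reoffenders = [r for r in rows if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

for group in ("black", "white"):
    print(f"{group}: false-positive rate = {false_positive_rate(records, group):.0%}")

The metric matters because a model can show similar overall accuracy for both groups while its errors fall far more heavily on one of them. That is the "statistically accurate AND ethically unacceptable" tension from earlier in this module.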
You're the Vice President of Academic Affairs at a community college. The admissions office proposes an AI tool that predicts which applicants will complete their degree. They claim it will improve retention rates by 22% and save $1.4 million in wasted financial aid. The vendor offers a free pilot program.
What do you do?
Approving without conditions means you don't know what the AI uses to make predictions. If it correlates zip code with completion rates, you've effectively built a tool that discriminates by neighborhood income level. The utilitarian argument (better outcomes overall) is real, but so is the risk.
Conditional approval is the most defensible position. You get the potential benefit while protecting students from opaque decision-making. Transparency requirements force the vendor to explain what variables the AI uses — and which ones it doesn't.
The deontological argument: predicting failure before giving someone a chance violates the principle of equal opportunity. This is a defensible position, but it means accepting the current retention rates. The board may push back with: "We lose 400 students a year. You turned down a tool that could save 88 of them."
Ask questions about ethical frameworks, accountability, bias, or anything from this module. The AI is scoped to AI ethics topics and will push you to think critically.
Seven concepts from this module that you'll use for the rest of the course.
The three tensions: power without accountability, speed without deliberation, scale without individual consideration. Every AI ethics problem stems from at least one.
Utilitarianism: the greatest good for the greatest number. Judges actions by their outcomes. Can justify harming a minority if it benefits the majority.
Deontology: rules-based ethics. Some things are wrong regardless of outcome. "Mass surveillance is wrong even if it prevents crime."
Virtue ethics: character-based. "What kind of society are we becoming?" Focuses on who we want to be, not just what we should do.
Proxy bias: removing protected attributes doesn't remove bias. Correlated features (zip code, name, employment gaps) carry the same discriminatory patterns. A short code sketch after these concepts shows the effect.
The accountability vacuum: when AI causes harm, responsibility is distributed across developers, deployers, users, and regulators, and often nobody accepts it.
The COMPAS case: AI in criminal sentencing demonstrates bias, opacity, and accountability failure in one case. Courts used it despite knowing it was flawed.
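To make the proxy-bias point concrete, here is a hypothetical sketch with toy data and a deliberately naive model (not any real vendor's system). The model never sees the protected attribute, yet its decisions still split along group lines because zip code carries the same information.

# Hypothetical sketch: zip code acting as a proxy for a protected attribute.
# The "blind" model below never sees the protected column; it only learns
# the historical approval rate for each zip code.

applicants = [
    # (protected_group, zip_code, approved_historically) -- toy data
    ("A", "10001", True),  ("A", "10001", True),
    ("B", "20002", False), ("B", "20002", False),
]

approval_history = {}
for _, zip_code, approved in applicants:
    approval_history.setdefault(zip_code, []).append(approved)

def blind_model(zip_code):
    """Approve if the zip code's historical approval rate is at least 50%."""
    history = approval_history[zip_code]
    return sum(history) / len(history) >= 0.5

# Outcomes still differ by group, because in this toy data zip code is
# perfectly correlated with group membership.
for group, zip_code, _ in applicants:
    print(group, zip_code, "approved" if blind_model(zip_code) else "denied")

Deleting more columns doesn't fix this; auditing outcomes by group does, which is exactly the kind of analysis ProPublica ran on COMPAS.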
5 questions drawn from the module. You need 80% to pass.