Your AI Ethics Position
The seven modules you've completed aren't separate topics. They're interconnected dimensions of the same problem: AI amplifies human choices at unprecedented scale, making the ethics of those choices more consequential than ever before.
Every module shares the same underlying pattern: AI takes human decisions that used to affect individuals and executes them at a scale that affects millions. A biased hiring manager rejects 10 applicants. A biased hiring algorithm rejects 10,000. A careless doctor misdiagnoses one patient. A careless AI misdiagnoses thousands. The ethics aren't new. The scale is.
The mark of understanding isn't remembering each module's content. It's seeing how the concepts interact. Tap each connection to see how topics from different modules reinforce each other.
An AI ethics position isn't a vague feeling that "AI should be ethical." It's a specific set of commitments you can articulate and defend. Every position must address these five questions.
Where do you draw the line between beneficial AI and harmful AI?
AI that diagnoses cancer saves lives. AI that predicts crime enables discriminatory policing. AI that writes essays lets students shortcut their own learning. Where is your boundary — and what principle determines it?
Who should govern AI — and with what authority?
Governments (slow but democratic)? Companies (fast but conflicted)? International bodies (broad but toothless)? Technical experts (knowledgeable but unelected)? Some combination? What powers should they have?
What do we owe to people displaced or harmed by AI?
Workers whose jobs are automated. Communities subjected to biased systems. People whose biometric data is collected without consent. Nothing? Retraining? Compensation? A voice in decisions that affect them?
How much privacy are you willing to trade for security or convenience?
Facial recognition that catches criminals also tracks everyone. Health AI that saves lives requires your medical data. The tradeoff is real. Where is your personal line? Where should the societal line be?
What is your personal responsibility as someone who uses AI?
You use AI tools. You feed them data. You share their outputs. You vote for (or against) people who will regulate them. What obligations does that create? Are you a passive consumer or an active participant in shaping how AI affects society?
Choose the statement that best represents your view on each issue. There are no wrong answers — but each choice reflects a different ethical framework and set of tradeoffs.
In your own words, articulate your position on AI ethics. This isn't a test — it's a synthesis. Draw on anything from the course. Address at least two of the five big questions from Frame 4.
You've been hired as the first "AI Ethics Officer" at a mid-sized healthcare company. The company is preparing to launch an AI diagnostic tool that analyzes patient symptoms and medical history to recommend treatments. In your first week, you discover: (1) the training data underrepresents Black and Hispanic patients, (2) the tool has no human override mechanism and no explanations — doctors receive recommendations but can't see why the system made them, and (3) the CEO wants to launch in 60 days because a competitor is launching a similar product.
What are your top two priorities?
This is Module 3 (bias in deployment), Module 2 (accountability gap), and Module 6 (self-regulation) all at once. Launching with known demographic bias in a healthcare tool means systematically providing worse recommendations to Black and Hispanic patients. No human override means doctors can't catch errors. And "fix it in updates" is the healthcare equivalent of "move fast and break things" — except what breaks is patient health. The EU AI Act classifies healthcare AI as high-risk precisely because these stakes are irreversible.
You've identified the two critical failures: (1) bias that will produce disparate health outcomes by race, and (2) opacity that prevents human oversight. Both must be resolved before deployment. The competitive pressure is real — but your role as ethics officer exists precisely for moments when commercial pressure conflicts with patient safety. Losing a market race is recoverable. Providing biased medical recommendations to vulnerable populations is not.
This course ends, but your relationship with AI ethics doesn't. You will encounter these issues in every career path, in every industry, in every role. Here's what you carry forward.
You now know how to:
Apply three ethical frameworks (utilitarian, deontological, virtue ethics) to any AI situation.
Identify where bias enters an AI system and why removing one variable doesn't fix it.
Recognize when privacy consent is genuine vs. manufactured.
Spot algorithmic management and understand the power asymmetry it creates.
Evaluate AI governance proposals and identify when self-regulation is insufficient.
Detect the markers of AI-generated content and understand why deepfakes threaten evidence-based institutions.
Articulate your own position and defend it with reasoning, not just feelings.
You will be tested — not by an assessment, but by reality:
When your employer adopts AI that affects people's lives, will you ask the right questions? When an AI hiring tool rejects candidates who look different from historical hires, will you raise it? When you're asked to accept privacy terms you know are exploitative, will you pause? When a deepfake circulates and your friends share it without verification, will you be the one who says "wait"?
The people who built the AI systems you've studied in this course are not smarter than you. They're not more ethical than you. They made choices — about what data to use, whose voices to include, what to optimize for, and who to protect — and many of those choices were wrong. The next generation of choices belongs to you. What you decide to demand, accept, question, and refuse will shape whether AI amplifies the best of human values or the worst. That's not a hypothetical. It's your actual future. And now you're equipped for it.
This is your final AI Lab session. Ask about anything from the entire course — test your position, explore questions you still have, or challenge the AI on its own ethics.
Eight modules. Seven key concepts per module. One integrated position. Here's your toolkit.
Utilitarianism (outcomes), Deontology (rules), Virtue Ethics (character). Most disagreements come from applying different frameworks to the same problem. Name the framework, have a productive debate.
AI amplifies human choices — bias, surveillance, management, misinformation — at a scale that makes individual ethical failures into systemic crises. The ethics aren't new. The consequences are.
Bias + surveillance = discriminatory policing. Governance gaps + workplace AI = unregulated algorithmic management. Deepfakes + the accountability gap = evidence crisis. Real problems live at the intersections.
Self-regulation fails. Ethics boards without enforcement are performative. The auditor and the vendor must be separate. External oversight with enforcement power is the only model that works.
You'll encounter AI ethics in every career, every industry. The people making AI decisions aren't smarter or more ethical than you. Your generation's choices about what to demand, accept, and refuse will shape whether AI serves human values or undermines them.
Five questions drawn from the entire course. You need 80% (at least four correct) to pass and earn your certificate.