EduPolicy.ai — Students Edition

Module 1

What is AI?

All Modules
01 — What is AI?
02 — The Ethics Problem
03 — Bias & Fairness
04 — Privacy & Surveillance
05 — AI in the Workplace
06 — Governance & Regulation
07 — Misinformation & Deepfakes
08 — Your AI Ethics Position


Welcome to AI Ethics

Enter your information to begin Module 1. Your name personalizes your experience and appears on your completion certificate.

Your Institution presents
AI Ethics for Higher Education
Students Edition
An interactive course on artificial intelligence fundamentals, ethical reasoning, and responsible AI use in academic settings.
Module 01 of 08
Powered by EduPolicy.ai

What is AI?

Before you can think ethically about artificial intelligence, you need to understand what it is, how it works, and what it isn't.

👁 Interpret — read context in data
🧠 Learn — improve from experience
🎯 Adapt — change to meet goals
ALL THREE REQUIRED = AI
How AI Works — Spam Filter Example

Click each stage to see how a spam filter processes an email:

👁 INTERPRET: An email arrives: "Congratulations! You won $1,000,000! Click here..." The filter scans for suspicious words, sender reputation, and link patterns. It reads context, not just individual words.
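The interpret stage above can be sketched in code. This is a toy illustration, not how real filters work — the word list, the reputation score, and the feature names are all invented:

```python
# Hypothetical sketch of the "interpret" stage: turning a raw email
# into signals a filter can score.

SUSPICIOUS_WORDS = {"congratulations", "won", "click", "free", "$"}

def interpret(email_text: str, sender_reputation: float, link_count: int) -> dict:
    """Extract features from an email: words, sender, and link patterns."""
    words = email_text.lower().split()
    return {
        "suspicious_hits": sum(1 for w in words
                               if any(s in w for s in SUSPICIOUS_WORDS)),
        "sender_reputation": sender_reputation,  # 0.0 (unknown) .. 1.0 (trusted)
        "link_count": link_count,
    }

features = interpret("Congratulations! You won $1,000,000! Click here...",
                     sender_reputation=0.1, link_count=3)
```

The filter never decides on one word alone; it combines several signals into a single picture of the email, which is what "reading context" means here.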
Learning Objectives

By the end of this module, you'll be able to explain what makes something AI versus regular software, identify where AI can go wrong and why, and evaluate whether AI-generated information is trustworthy enough to use.

Part 1

Defining AI — The Three-Part Test

Here's the simplest useful definition: a machine has artificial intelligence if it can interpret data, learn from that data, and use what it learned to adapt and achieve specific goals.

Three things. If a system can't do all three, it's not AI — it's software doing what somebody told it to do.

🔢 NOT AI — CALCULATOR

Input 2+2. Output 4. Again? Still 4. A million times — always 4. Same input, same output, forever.

Interpret ✗ · Learn ✗ · Adapt ✗
📧 IS AI — SPAM FILTER

Same email format, different result each time. Learns from your clicks. Catches spam it's never seen. Gets better every week.

Interpret ✓ · Learn ✓ · Adapt ✓
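The contrast between the two cards can be sketched as a toy program. The calculator is a fixed function; the "filter" below is a deliberately simplified stand-in (word counting only), not a real spam filter:

```python
def calculator(a, b):
    # Same input, same output, forever: no interpreting, learning, or adapting.
    return a + b

class TinySpamFilter:
    """Learns from user clicks: words seen in reported spam raise future scores."""
    def __init__(self):
        self.spam_counts = {}

    def report_spam(self, text):  # learn from experience
        for word in text.lower().split():
            self.spam_counts[word] = self.spam_counts.get(word, 0) + 1

    def score(self, text):        # adapt: the same input scores differently over time
        return sum(self.spam_counts.get(w, 0) for w in text.lower().split())

f = TinySpamFilter()
before = f.score("win a free prize")  # 0 — it has seen nothing yet
f.report_spam("win free money now")
after = f.score("win a free prize")   # 2 — same input, new output after learning
```

That last line is the whole point: identical input, different output, because the system changed itself in between. The calculator can never do that.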
Key Insight

You've been using AI every day — spam filters, recommendation engines, voice assistants. AI isn't new. What's new is that you can now talk to it.

Interactive Exercise

Sort: AI or Not AI?

Tap & Place Exercise
Tap a technology, then tap a slot to place it. On desktop you can also drag and drop. Sort all 8 items correctly to advance.
Handheld Calculator
Netflix Recommendations
Fixed-Timer Traffic Light
Email Spam Filter
ChatGPT / Claude
Digital Alarm Clock
Phone Autocorrect
Standard Thermostat
✓ IS AI — slots 1–4
✗ NOT AI — slots 1–4
All 8 correct! The three-part test — interpret, learn, adapt — is your foundation for everything that follows.
Some items are wrong. Tap any placed tile to return it, then try again.
Part 2

How AI Gets Built

Every ethical problem with AI traces back to decisions made during construction. Understanding the build process is understanding where things go wrong.

1 Collect → 2 Prepare → 3 Choose → 4 Train → 5 Evaluate → 6 Deploy (↻ iterate)

Every AI starts with data. Train a medical AI mostly on data from one demographic group, and it performs worse on everyone else. This has happened with dermatology tools, cardiac risk models, and facial recognition.

Raw data needs cleaning. Humans decide what's "clean" — those decisions carry assumptions baked into the final system.

More complex models learn more patterns but become harder to explain. The more powerful the AI, the less anyone can say why it made a specific decision.

Training a frontier model costs tens of millions of dollars. Only a handful of companies can build the most powerful systems. Concentration of power is itself an ethical issue.

Testing reveals whether the model learned genuine patterns or memorized the training set. Big difference between passing a test and understanding the subject.

A flawed system denies loans, misdiagnoses patients, or flags innocent students for cheating. Feedback loops can improve the model or amplify its mistakes.
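Step 5 (Evaluate) is worth a concrete example. The toy experiment below uses an invented task (classifying numbers as even or odd) to show the difference the text describes: a "memorizer" looks perfect on its training data and fails on anything unseen, while a model that learned the genuine pattern holds up on held-out data:

```python
# Toy evaluation sketch: train/test split exposes memorization.
data = [(n, "even" if n % 2 == 0 else "odd") for n in range(100)]
train, test = data[:80], data[80:]   # hold out unseen examples

memorized = dict(train)

def memorizer(n):
    # Pure lookup: no pattern learned, guesses "even" for anything unseen.
    return memorized.get(n, "even")

def learner(n):
    # Captured the actual rule behind the data.
    return "even" if n % 2 == 0 else "odd"

def accuracy(model, examples):
    return sum(model(x) == y for x, y in examples) / len(examples)

train_acc_mem = accuracy(memorizer, train)  # 1.0 — looks perfect
test_acc_mem  = accuracy(memorizer, test)   # 0.5 — no better than guessing
test_acc_learn = accuracy(learner, test)    # 1.0 — genuine pattern
```

Evaluating only on training data would have rated both models equally. Only the held-out test set reveals which one actually understands the task.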

Part 3

How ChatGPT, Claude, and Gemini Actually Work

Large language models predict what word comes next. That's it. They're prediction engines, not thinking machines.

YOUR PROMPT: "What is gravity?"
↓
PREDICTION ENGINE: billions of word relationships → "most likely next word"
↓
OUTPUT: "Gravity is a fundamental force that..."
"Convincing" ≠ "Correct"
Try It: Word Prediction

See how an LLM picks the next word by probability:

"The capital of France is ___"

Paris — 94%
Lyon — 3%
Berlin — 1%
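The demo above can be sketched in a few lines. The probabilities are the ones shown; the weighted sampling is a simplification of what real LLMs do, not their actual mechanism:

```python
# Toy next-word prediction: sample a word by probability, not by lookup.
import random

next_word_probs = {"Paris": 0.94, "Lyon": 0.03, "Berlin": 0.01}

def predict_next(probs):
    words = list(probs)
    weights = list(probs.values())
    # random.choices does weighted sampling; weights need not sum to 1.
    return random.choices(words, weights=weights, k=1)[0]

samples = [predict_next(next_word_probs) for _ in range(1000)]
print(samples.count("Paris"))  # roughly 940 of 1000 — likely, not guaranteed
```

Notice that "Lyon" and "Berlin" still come out occasionally. The model never looks anything up; it only plays the odds, which is exactly why fluent output can still be wrong.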
Think About This

If AI generates text that sounds like an expert, how do you tell the difference between real expertise and pattern matching that mimics it?

Part 4

Open-Source vs. Closed-Source AI

Not all AI is built or distributed the same way. This has direct consequences for privacy, accountability, and verification.

🔓 OPEN-SOURCE

Examples: Meta's Llama, Mistral, Stability AI

→ Code is public — anyone can inspect, modify, run it

→ Run on your own hardware — data never leaves your machine

→ Researchers can audit for bias — transparency is built in

→ Risk: no built-in guardrails for misuse

🔒 CLOSED-SOURCE

Examples: OpenAI GPT-4, Anthropic Claude, Google Gemini

→ Accessible through company's interface only

→ Your prompts go to company servers

→ You trust the company's safety claims

→ Company applies safety filters

Privacy: Open = data stays local. Closed = data goes to servers.

Audit: Open = anyone can inspect. Closed = trust the company.

Safety: Open = no guardrails. Closed = corporate filters.

Access: Open = technical setup. Closed = easy web interface.

Why This Matters To You

If your university picks a closed-source AI tool, every essay a student pastes into it goes to company servers. Open-source keeps data on campus — but someone has to maintain it. That trade-off shapes institutional AI decisions everywhere.

Part 5

AI Hallucinations — Confident and Wrong

AI regularly generates information that sounds authoritative and is completely fabricated. This isn't a bug — it's structural.

Can You Spot the Fake?

Two AI-generated citations. One real, one fabricated. They look identical. Click "Verify" on each:

CITATION A
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots. Proceedings of FAccT '21, 610–623.
CITATION B
Henderson, R. J., & Park, S. L. (2023). Predictive accuracy and ethical constraints in large language model outputs. Journal of Computational Ethics, 14(2), 88–107.
Why This Matters For You

If you use AI for a research paper and it generates a fake citation, you submit false references. Academic integrity policies hold you responsible. "The AI told me" is not a defense.
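There is no shortcut here: a citation is verified by looking it up, not by reading it. A hypothetical sketch, with a tiny set standing in for a real index like Google Scholar or your library database:

```python
# Stand-in for a trusted bibliographic index (in reality: Google Scholar,
# Crossref, or your library database). One real title is included:
# Bender et al. (2021) is an actual FAccT paper.
trusted_index = {
    "on the dangers of stochastic parrots",
}

def verify(citation_title: str) -> bool:
    """A citation counts as verified only if a trusted index knows it."""
    return citation_title.strip().lower() in trusted_index

real = verify("On the Dangers of Stochastic Parrots")   # True — found
fake = verify("A Made-Up Paper That Sounds Plausible")  # False — cite nothing unfound
```

The fabricated title would pass every "does it look right?" test: correct format, plausible authors, specific page numbers. Only the lookup separates it from the real one.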

Branching Scenario

What Would You Do?

Stage 1 — The Temptation
You're writing a research paper due tomorrow. You ask ChatGPT for sources. It returns five citations — perfect APA format, specific page numbers. You haven't verified if they exist.

What do you do?
Use all five — they look legitimate and I'm out of time.
Verify each through Google Scholar or my library database first.
Use them but tell my professor they were AI-generated.
High Risk. This is how students submit fabricated citations. If your professor checks, you face academic dishonesty charges.
Smart move. You check — three are real, two are fabricated. You keep the real ones and find replacements. Continue to Stage 2...
Better, but still risky. You're still submitting unverified citations. Disclosure doesn't make false information true.
Hands-On

AI Interaction Lab

Interact with AI and analyze its behavior. Ask questions you know the answer to. Ask for citations. Watch how confident it sounds regardless of accuracy.

Live AI Interaction

AI-related questions only — won't write essays or discuss off-topic subjects.

How does ChatGPT decide what word comes next?
Cite a study about AI bias — then tell me if it's real
What's the difference between being wrong and hallucinating?
AI
I'm here to help you explore how AI works. Ask me anything — or try to catch me making mistakes.
Module Checkpoint

Quick Recap — What You Learned

Before the assessment, here's everything in one place. Tap any card to review.

🎯

The Three-Part Test

AI must interpret data, learn from it, and adapt. All three. Missing one = just software.

🔢

AI vs. Not AI

Calculators, alarm clocks, fixed traffic lights aren't AI. Spam filters, autocorrect, recommendations are.

🔧

The 6-Step Pipeline

Collect → Prepare → Choose → Train → Evaluate → Deploy. Bias enters at every stage.

🧠

LLMs Are Prediction Engines

They predict the next word, not the truth. "Convincing" ≠ "correct."

🔓

Open vs. Closed Source

Open = transparent, local. Closed = convenient, data goes to servers.

⚠️

Hallucinations Are Structural

AI fabricates with identical confidence to real info. You can't tell by looking — only by checking.

⚖️

Your Responsibility

The tool's mistakes are your mistakes. Verify everything. Check your institution's AI policy.

Module Assessment

Your Assessment

5 randomly selected questions. Need 80% (4 of 5) to pass. Each attempt draws different questions.

Module Complete

Your Results


STUDY GUIDE

Download the study guide for this module as a reference.

📄 Download Module 01 Study Guide