Governance & Regulation
For most of AI's commercial history, the industry's answer to governance was self-regulation: voluntary ethics principles, internal review boards, and published AI guidelines. The track record is not encouraging.
Self-regulation fails for the same reason in every industry: the entity being regulated profits from the behavior it's supposed to limit. AI companies are not uniquely evil. They face the same structural conflict of interest that led to environmental regulation, financial regulation, and pharmaceutical regulation. The solution in every case was external oversight with enforcement power.
In 2024, the European Union passed the AI Act — the first comprehensive law governing artificial intelligence anywhere in the world. It uses a risk-based framework: the higher the risk, the stricter the rules.
Tap each tier to see what it covers and what's required.
Unacceptable risk. These systems are prohibited entirely:
• Government social credit scoring systems (like China's)
• Real-time mass biometric surveillance in public spaces (with limited law enforcement exceptions)
• AI that manipulates human behavior to cause harm (subliminal manipulation)
• Emotion recognition in workplaces and schools
• Predictive policing based solely on profiling
Why banned: These applications pose fundamental threats to human dignity, democracy, and individual autonomy that no amount of regulation can make acceptable.
High risk. Allowed but heavily regulated:
• AI in hiring and worker management
• Credit scoring and insurance pricing
• Law enforcement (facial recognition, crime prediction)
• Immigration and border control
• Educational testing and student assessment
• Healthcare diagnostics and triage
Requirements: mandatory risk assessments before deployment, human oversight at all times, detailed technical documentation, bias testing across demographic groups, incident reporting, and registration in an EU database. Fines for violations scale up to €35 million or 7% of global annual revenue, whichever is higher (the sketch after the tier overview shows how this cap works out).
Limited risk. Allowed with disclosure requirements:
• Chatbots (must disclose that the user is interacting with AI)
• Deepfake generators (content must be labeled as AI-generated)
• Emotion detection systems (users must be informed)
• AI-generated text used in media (must be disclosed)
Key principle: People have the right to know when they're interacting with AI or consuming AI-generated content.
Minimal risk. No specific requirements:
• Spam filters
• Video game AI
• Inventory management systems
• Weather prediction models
• Music recommendation algorithms
The vast majority of AI systems fall here. The Act deliberately leaves low-risk applications unregulated so as not to stifle innovation.
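To make the tier logic concrete, here is a minimal sketch in Python of how a compliance team might encode the Act's four tiers as a lookup, including the fine ceiling mentioned under the high-risk tier. The use-case names, obligation summaries, and function names are illustrative assumptions, not text from the Act; real classification turns on the Act's annexes and legal analysis, not a dictionary.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "risk assessment, oversight, bias testing, EU registration"
    LIMITED = "must disclose AI involvement to users"
    MINIMAL = "no specific obligations"

# Illustrative mapping of use cases to tiers, mirroring the examples
# above. Real classification depends on legal analysis of the Act.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "realtime_public_biometrics": RiskTier.UNACCEPTABLE,
    "hiring_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "medical_triage": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "deepfake_generator": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Summarize obligations for a use case; unknown cases default to minimal."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Ceiling on penalties: EUR 35M or 7% of global annual revenue,
    whichever is higher (reserved for the most serious violations)."""
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

if __name__ == "__main__":
    for case in ("hiring_screening", "customer_chatbot", "spam_filter"):
        print(obligations(case))
    # A company with EUR 1B in revenue faces a cap of EUR 70M, not 35M.
    print(f"fine cap: EUR {max_fine_eur(1_000_000_000):,.0f}")
```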
Any company serving EU customers must comply — regardless of where the company is based. Just as GDPR became the global privacy standard (it was easier to build one compliant system than maintain separate versions), the AI Act may become the de facto global AI governance standard. American and Chinese AI companies building products for European markets will need to comply.
The United States has no comprehensive AI law. Instead, it relies on a patchwork of executive orders, agency guidance, and state-level regulations. The underlying philosophy: don't regulate innovation before you understand it.
"Light regulation keeps America competitive. Premature laws could kill beneficial AI applications before they mature. The market will self-correct — bad AI products will fail."
"'Wait and see' means millions of people are affected by unregulated AI while Congress debates. Voluntary commitments are unenforceable. State-by-state regulation creates chaos for companies and gaps in protection for citizens."
The U.S. has regulators that COULD govern AI — the FTC (consumer protection), EEOC (employment discrimination), FDA (medical devices), SEC (financial markets) — but none were designed for AI and none have explicit AI authority. They're using 20th-century laws to regulate 21st-century technology. The FTC has brought AI-related enforcement actions, but on a case-by-case basis, not as systemic regulation.
The world's three largest AI powers have fundamentally different governance philosophies. AI companies operating globally must navigate all three simultaneously.
European Union. Philosophy: AI must respect fundamental human rights. Regulate first, innovate within boundaries.
Strengths: Strong citizen protection, clear rules, global influence via Brussels Effect.
Criticism: May slow European AI development. Companies may innovate elsewhere and import compliant products into the EU.
United States. Philosophy: Don't constrain innovation. Regulate specific harms as they emerge. The market will find equilibrium.
Strengths: Home to most leading AI companies. Fastest development cycle. Flexibility.
Criticism: Citizens are unprotected while Congress debates. Voluntary commitments are unenforceable. State patchwork creates confusion.
China. Philosophy: AI serves national interests. Promote development for economic and military advantage. Control AI that threatens social stability or party authority.
Strengths: Fast implementation; coordinates industry with state goals; already regulates deepfakes and recommendation algorithms.
Criticism: Regulation protects the state, not citizens. Enables mass surveillance. No independent oversight.
No binding global AI treaty exists. An AI system legal in the U.S. might be banned in the EU and mandated in China. Companies building AI for global markets face contradictory requirements. And the people most harmed by AI — often in developing countries used as testing grounds — have the least voice in governance decisions.
GPT-3 launched in June 2020. GPT-4 launched in March 2023. The EU AI Act was proposed in April 2021 and passed in March 2024 — three years of negotiation. By the time the law was finalized, the technology it was designed to regulate had changed beyond recognition.
New model every 6-12 months. Capabilities emerge unexpectedly (reasoning, code generation, image creation appeared without being explicitly programmed). Deployment goes from lab to billions of users in weeks (ChatGPT reached 100 million users in 2 months).
Years to draft, debate, and pass legislation. Requires consensus among lawmakers who often don't understand the technology. Implementation takes additional years. By the time rules are enforced, the AI landscape has shifted fundamentally.
Regulate too early and you might ban beneficial technology that hasn't been invented yet. Regulate too late and millions of people are harmed while you deliberate. There is no regulation timing that avoids both risks. Every governance decision is a bet on which risk matters more — and reasonable people disagree profoundly.
Despite the challenges, there's growing consensus on a set of governance mechanisms that work — even if no single country has adopted all of them.
Before deploying a high-risk AI system, companies must evaluate potential harms — like environmental impact assessments for construction projects. Who could be affected? How? What safeguards exist? Published publicly so affected communities can respond. Canada already requires these for government AI systems.
Regular, independent testing of AI systems for bias, accuracy, and disparate impact — similar to financial audits. Not optional. Not self-reported. Conducted by independent third parties with the authority to require changes. NYC's Local Law 144 is the first implementation of this in the U.S.
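To give a flavor of what such an audit actually computes, here is a minimal sketch of the impact-ratio calculation at the heart of Local Law 144-style bias audits: each group's selection rate divided by the selection rate of the most-selected group. The sample data and the 0.8 review threshold (the EEOC's four-fifths rule of thumb, not a Local Law 144 requirement, which mandates reporting the ratios rather than a pass/fail cutoff) are assumptions for illustration.

```python
from collections import defaultdict

def impact_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate.
    outcomes is a list of (group, was_selected) pairs."""
    selected: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    if best == 0:  # nobody selected at all; ratios are undefined
        return {g: 0.0 for g in rates}
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical audit data: (demographic_group, hired?)
data = [("A", True)] * 40 + [("A", False)] * 60 \
     + [("B", True)] * 25 + [("B", False)] * 75

for group, ratio in impact_ratios(data).items():
    # Four-fifths rule of thumb: ratios below 0.8 warrant review.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} [{flag}]")
```

On this hypothetical data, group A is hired at 40% and group B at 25%, so group B's impact ratio is 0.62 and gets flagged for review.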
When AI systems cause harm, companies must report it — like aviation incident reporting or pharmaceutical adverse events. A central database of AI failures creates an evidence base for regulation and lets other organizations learn from mistakes. The EU AI Act requires this for high-risk systems.
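To suggest what a reportable incident might capture, here is a minimal sketch of a report record, assuming fields loosely modeled on aviation-style incident reports. The field names and the example scenario are hypothetical, not the AI Act's official reporting template.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncidentReport:
    """Hypothetical record for a central AI-incident database."""
    system_name: str
    provider: str
    occurred_at: datetime
    harm_description: str          # what happened, to whom
    affected_population: str       # e.g. a user group or region
    suspected_cause: str           # e.g. skewed training data
    mitigation_taken: str          # immediate fix or rollback
    regulator_notified: bool = False
    follow_up_actions: list[str] = field(default_factory=list)

# Hypothetical example: a triage model that under-served atypical cases.
report = AIIncidentReport(
    system_name="triage-assist",
    provider="ExampleHealth AI",
    occurred_at=datetime.now(timezone.utc),
    harm_description="Under-prioritized patients with atypical symptoms",
    affected_population="emergency-department patients",
    suspected_cause="training data skewed toward common presentations",
    mitigation_taken="model rolled back; manual triage reinstated",
    regulator_notified=True,
)
print(report.system_name, "->", report.harm_description)
```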
People affected by AI systems should have a voice in how those systems are governed — not just technologists and companies. Community input on facial recognition in their neighborhood. Worker participation in automation decisions. Patient input on AI diagnostic tools. Governance without affected voices is governance for the powerful.
When an AI system makes a decision that affects you (denied a loan, rejected for a job, flagged by law enforcement), you should have the right to: know AI was involved, understand the key factors, and challenge the decision before a human. GDPR Article 22 provides this in the EU. No equivalent exists in U.S. federal law.
A city wants to deploy AI-powered cameras in public parks to improve safety. Sort each governance requirement into the correct category: what should be required BEFORE deployment vs. what should be required AFTER deployment.
You're a legislative aide in your state capitol. A bill is proposed requiring all AI systems used in government decision-making (benefits eligibility, parole decisions, child welfare assessments) to pass an independent bias audit before deployment. The tech industry lobbies against it, arguing it would slow government modernization by 2-3 years. Civil rights organizations support it, citing documented cases of AI bias in government systems.
What do you recommend to your legislator?
Government AI systems determine who receives benefits, who goes to prison, and which families are investigated by child services. These decisions directly affect fundamental rights. The 2-3 year delay is real — but deploying biased systems means systematically disadvantaging vulnerable populations for years until someone catches the problem. Pre-deployment auditing is cheaper than post-deployment lawsuits, and more importantly, it prevents harm rather than compensating for it after the fact.
The urgency is real — government services are often slow and underfunded. But "move fast" with AI that determines parole or child welfare has a very different risk profile than "move fast" with a new scheduling tool. A biased parole algorithm kept in production for speed means real people stay in prison longer because of their race. The efficiency gain doesn't offset that cost — and it doesn't have to, because the audit can be built into the development timeline rather than added after.
Seven governance concepts to carry forward.
Self-regulation fails: Google's AI ethics board lasted one week. Meta's responsible AI team was disbanded. 167 organizations published AI ethics principles; none enforced them. The conflict of interest is structural.
The EU AI Act: the world's first comprehensive AI law. Risk-based: bans social scoring and mass biometric surveillance, heavily regulates hiring, credit, and law-enforcement AI, and requires transparency for chatbots and deepfakes. Fines up to 7% of global revenue.
The U.S. patchwork: executive orders (revocable), voluntary commitments (unenforceable), agency guidance (limited authority), state laws (inconsistent). NYC Local Law 144 is the first mandatory AI hiring audit.
Three philosophies: the EU is rights-based, the U.S. innovation-first, China state-directed. No global AI treaty exists. Companies navigate contradictory requirements. Developing nations have the least voice.
The pacing problem: AI evolves every 6-12 months while laws take 3+ years. ChatGPT reached 100M users in 2 months. By the time regulation passes, the technology has transformed. Any timing choice is a gamble.
Impact assessments: evaluate potential harms before deployment, like environmental impact assessments for construction. Canada requires them for government AI. Public comment periods let affected communities respond.
Independent audits: the auditor and the vendor must be separate. Companies can't audit their own AI, any more than they can audit their own financial statements. Independence is the foundation of credible oversight.
5 questions drawn from the module. You need 80% to pass.