AI in the Workplace
Every wave of technology has eliminated some jobs and created others. What makes AI different is that it threatens cognitive work — not just physical labor. A factory robot replaces hands. An AI replaces judgment, analysis, and decision-making.
AI doesn't replace "jobs" — it replaces tasks. A job with 80% routine tasks and 20% creative judgment will be restructured: the routine part gets automated, and the remaining 20% becomes the entire job — done by fewer people, with higher skill requirements, often at the same pay.
For millions of workers, AI doesn't just assist — it manages. It assigns tasks, monitors performance, and makes disciplinary decisions. No human manager involved.
In every case, the pattern is the same: the algorithm sees everything about the worker. The worker sees nothing about the algorithm. They don't know exactly what metrics are tracked, how the scores are calculated, or what thresholds trigger consequences. This information asymmetry is the defining ethical problem of algorithmic management — total transparency in one direction, total opacity in the other.
Workplace monitoring existed before AI — managers walking the floor, checking timecards, reviewing output. What's new is continuous, granular, automated surveillance that captures everything and forgets nothing.
In the United States, most of this is legal. The Electronic Communications Privacy Act (1986) allows employers to monitor electronic communications on company-owned systems. Most states don't require employers to disclose monitoring to employees. A few states (Connecticut, Delaware, New York) require notification, but not consent.
The employer's case: "We need to ensure productivity, protect company data, prevent harassment, and comply with regulations. Monitoring helps us manage effectively, especially with remote teams."
The worker's case: "Constant surveillance destroys trust, increases stress, reduces job satisfaction, and creates a culture of fear. Studies show monitored workers are more anxious and less creative. The cure is worse than the disease."
A 2023 Harvard Business Review study found that employees who knew they were being digitally monitored were more likely to break rules, not less — because surveillance eroded their sense of moral responsibility. When people feel they're not trusted, they stop self-regulating and start gaming the metrics instead.
Before a human recruiter sees your resume, an AI has probably already decided whether to reject it. An estimated 75% of large employers use automated screening tools. You're being evaluated by algorithms before you ever shake a hand.
HireVue's AI analyzed candidates' facial expressions, eye movements, tone of voice, and word choice during video interviews to generate a "hirability score." Over 100 companies used it on millions of candidates. Problems: (1) no peer-reviewed evidence that facial expressions predict job performance, (2) people with disabilities, non-native speakers, and neurodiverse candidates scored lower, (3) candidates had no way to know what the AI was evaluating or how to appeal.
In 2021, under pressure from the FTC and advocacy groups, HireVue dropped its facial analysis feature — but kept voice and language analysis. The company acknowledged that the facial component couldn't be scientifically validated.
AI hiring tools trained on historical data learn to prefer candidates who look like past successful hires — perpetuating demographic homogeneity. If your company has historically hired mostly white men from elite universities, the AI will learn that "white male from elite university" predicts success — because that's what the data shows, not because it's true.
AI makes workers more productive. That's the promise. But where do the gains go?
[Chart: U.S. productivity vs. hourly compensation since 1979. Source: Economic Policy Institute, based on Bureau of Labor Statistics data]
Since 1979, American workers have become 65% more productive. Their compensation increased 17%. The gap — nearly 50 percentage points — represents economic value that was created by workers but captured by shareholders and executives. AI is accelerating this pattern.
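The cited EPI figures can be checked directly. A minimal sketch of the arithmetic, using only the numbers given above:

```python
# Productivity-pay gap since 1979, using the figures cited above (EPI/BLS).
productivity_growth = 65.0   # percent increase in output per hour, 1979-present
compensation_growth = 17.0   # percent increase in hourly compensation

gap = productivity_growth - compensation_growth
print(f"Gap: {gap:.0f} percentage points")   # 48 -- "nearly 50"

# Fraction of productivity growth that showed up in worker pay:
share_to_workers = compensation_growth / productivity_growth
print(f"Share of gains reaching workers: {share_to_workers:.0%}")
```

Roughly a quarter of the productivity gains reached workers as pay; the rest is the gap the chart illustrates.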
The optimistic case: "AI will make workers more productive, creating more wealth for everyone. New jobs will replace old ones. The economy will grow."
The skeptical case: "AI increases output per worker, but the gains flow to capital owners. Remaining workers do more for the same pay. Displaced workers face lower wages in new roles. Inequality widens."
The debate about AI in the workplace almost always focuses on "will it take my job?" The better question is: even if you keep your job, will you share in the value your AI-augmented work creates? History says no — unless workers organize, negotiate, or legislate for it.
When a company automates 500 jobs, it generates millions in cost savings. What obligation, if any, does it have to the 500 workers who lost their livelihoods?
The market argument: "Creative destruction is how economies progress. Workers displaced by AI will find new jobs, just like workers displaced by previous technology waves. Government intervention distorts markets."
Framework: Utilitarian — aggregate benefit outweighs individual cost.
The obligation argument: "The benefits of automation shouldn't come entirely at workers' expense. Companies that profit from AI have an obligation to: fund retraining programs, provide transition income, give workers a voice in automation decisions, and share productivity gains."
Framework: Deontological — workers have inherent dignity that can't be traded for efficiency.
Automation taxes: Bill Gates and others have proposed taxing companies that replace workers with AI — using the revenue to fund retraining. South Korea implemented a version in 2018 by reducing tax incentives for automation investment.
Universal Basic Income (UBI): If AI eliminates enough jobs, some argue everyone should receive a baseline income regardless of employment. Pilot programs in Finland, Stockton (CA), and Kenya show mixed but promising results.
Worker data rights: The EU's AI Act requires employers to inform workers when AI is used in management decisions. NYC's Local Law 144 requires annual bias audits of AI hiring tools. These are early steps toward algorithmic accountability in the workplace.
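Local Law 144's bias audits center on two quantities: each demographic category's selection rate, and its impact ratio relative to the best-performing category. A minimal sketch of that calculation follows; the category names and counts are hypothetical, and note that the four-fifths threshold shown is the EEOC's traditional benchmark, not a pass/fail line the law itself imposes:

```python
# Impact-ratio calculation of the kind NYC Local Law 144 audits report.
# Selection rate = candidates advanced / candidates assessed, per category.
# Impact ratio  = a category's selection rate / the highest category's rate.
# Category names and counts below are hypothetical.

applicants = {            # category: (assessed, advanced)
    "group_a": (1000, 300),
    "group_b": (800, 160),
    "group_c": (500, 75),
}

rates = {g: advanced / assessed for g, (assessed, advanced) in applicants.items()}
best = max(rates.values())
impact_ratios = {g: rate / best for g, rate in rates.items()}

for g, ratio in impact_ratios.items():
    # Four-fifths rule: a common EEOC benchmark for adverse impact.
    flag = "" if ratio >= 0.8 else "  <- below four-fifths benchmark"
    print(f"{g}: selection rate {rates[g]:.0%}, impact ratio {ratio:.2f}{flag}")
```

An audit under the law must report these ratios publicly each year; what an employer does about a low ratio is left to other law, such as federal anti-discrimination rules.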
A regional bank plans to automate its customer service department. 200 call center workers will be replaced by an AI chatbot. The bank projects $8 million in annual savings.
Sort each consideration into the correct category: reasons automation is justified vs. ethical obligations the bank must address.
You've just been hired at a marketing agency. On your first day, your manager introduces you to "CopyAI" — an AI writing tool the company uses for all first drafts of client content. Your job is to edit and refine what the AI produces, not to write from scratch. Your manager says: "This makes us 3x more productive. Clients love the turnaround time."
How do you feel about this arrangement?
This is the best-case scenario for AI augmentation: the tool handles routine production, you provide creative judgment, and the client gets faster results. But consider: if AI handles first drafts, you never develop the ability to create from nothing. Your skill becomes editing, not writing. What happens to your career if the next AI version doesn't need editors?
You've identified a real risk. Workers who become dependent on AI tools for core tasks may lose the ability to perform those tasks independently. If the AI improves enough to not need editing, your role disappears. If you leave for a company that doesn't use AI tools, your writing skills have atrophied. This is the "deskilling" problem — AI augmentation can gradually hollow out human expertise.
Seven concepts for the world you're about to enter.
AI replaces tasks, not whole jobs. But when 80% of a job's tasks are automated, the job is restructured — fewer people, higher skill requirements, same pay.
AI assigns tasks, monitors performance, and fires workers. Amazon warehouses, Uber drivers, call centers — managed by algorithm with no human review.
Keystrokes, screenshots, webcam, email, Slack messages — most of it legal in the U.S. The power asymmetry: employers see everything, workers see nothing about how they're judged.
75% of large employers use AI screening. Resume scanners reproduce historical bias. Video interview AI scores facial expressions with no scientific basis. You're judged by algorithm before a human sees your application.
Workers became 65% more productive since 1979. Compensation rose 17%. AI accelerates this gap. The economic gains flow to shareholders, not workers.
Automation's benefits shouldn't come entirely at workers' expense. Retraining, transition income, worker consultation, and shared productivity gains. South Korea's automation tax. NYC's AI hiring audit law.
Workers who depend on AI for core tasks lose the ability to perform those tasks independently. When the AI improves enough to not need you, your skills have already atrophied. AI augmentation can hollow out human expertise.
5 questions drawn from the module. You need 80% to pass.