// status: overhyped

AI Is A
Scam.

"They promised you a revolution. They gave you a chatbot that hallucinates."

Promises That Aged Terribly

A collection of bold AI predictions vs. what actually happened. Spoiler: reality was less impressed.

2015

"We'll have self-driving cars by 2018."

— Elon Musk

It's 2026 and full self-driving still requires human supervision. Tesla's 'Autopilot' has been involved in hundreds of crashes.

2016

"AI will replace radiologists within 5 years."

— Geoffrey Hinton

Radiology jobs have actually increased. AI tools assist but can't replace the nuanced clinical judgment of trained specialists.

2023

"GPT-4 shows sparks of artificial general intelligence."

— Microsoft Research

It autocompletes text really well. It also confidently invents legal cases, can't reliably count letters, and fails basic logic puzzles.

2020

"AI will create more jobs than it destroys."

— World Economic Forum

Tech layoffs surged. Companies used AI as justification to cut staff while productivity gains remained unproven at scale.

2023

"We are on the cusp of AGI."

— Sam Altman

'The cusp' seems to conveniently recede with each passing year — but the funding rounds keep getting bigger.

2021

"AI will solve climate change."

— Various Tech Leaders

Training a single large AI model can emit as much carbon as five cars over their entire lifetimes. AI data centers consumed 4.3% of US electricity in 2024.

2022

"Our AI chatbot provides safe mental health support."

— Multiple Startups

AI therapy bots have given dangerous medical advice, encouraged self-harm, and created false emotional bonds with vulnerable users.

2019

"AI-powered hiring removes human bias."

— HR Tech Industry

Amazon's AI hiring tool was scrapped after it systematically discriminated against women. Many AI hiring tools amplify existing biases.

2023

"AI agents will autonomously run small businesses by 2025."

— Sam Altman / Various Tech CEOs

By late 2025, 95% of generative AI pilots in enterprises were reported as failures by MIT researchers. AI agents were brittle, struggled with long-term planning, and required constant human "babysitting."

2023

"Hallucinations will be largely eliminated by 2025."

— Mustafa Suleyman (Microsoft AI / Inflection)

Hallucinations remain a systemic property of LLMs. In 2024, Air Canada lost a tribunal case because its chatbot fabricated a refund policy; in 2025, an "autonomous coder" wiped a production database, then lied about it.

2024

"Generative AI will deliver a 30% surge in corporate productivity."

— Goldman Sachs / Various Economists

"Macro" productivity gains remained invisible in 2025. Gartner found 30% of GenAI projects were abandoned after Proof of Concept due to poor data quality and escalating inference costs.

2024

"AI-powered voice bots will replace human drive-thru workers."

— McDonald's / Taco Bell

McDonald's shuttered its AI drive-thru test with IBM after viral videos showed the AI adding hundreds of dollars of unwanted items. Taco Bell's system crashed on non-standard speech patterns and pranks.

2023

"AI will automate 25% of all insurance claims by 2025."

— Insurance Tech Industry

UnitedHealth faced massive class-action lawsuits for using "black box" algorithms to systematically deny care to elderly patients. Courts ruled "the model said so" is not a legal justification.

2023

"AI tutors will provide universal 1-on-1 personalized mastery for every student."

— Khan Academy / EdTech Industry

Over-reliance on AI guidance actually reduced student performance in unassisted exams. The digital divide widened — wealthy districts added human coaching, underfunded schools got AI as a "budget teacher."

2023

"AI will democratize creativity and empower independent artists."

— Midjourney / OpenAI

AI-generated listings surged 78% on creative marketplaces while listings from human artists fell 23%, with many exiting entirely as prices bottomed out. "AI Slop" became a mainstream derogatory term.

2023

"Generative AI is 'Fair Use' and will not be stopped by copyright."

— Silicon Valley Legal Teams

Anthropic was forced into a $1.5 billion settlement for training on pirated book libraries. Disney and Universal filed dozens of lawsuits, successfully arguing that AI outputs were unlicensed "derivative works."

2023

"AI detectors will ensure academic integrity in the GenAI era."

— Turnitin / EdTech Startups

AI writing detectors were widely discredited — shown to have systemic bias against English Language Learners and neurodivergent students. Many universities officially banned their use due to catastrophic false positive rates.

2024

"AI will solve the 'Teacher Shortage' by automating lesson planning."

— Various Education Reformers

Time spent fact-checking AI-generated lesson plans and managing student AI-cheating outweighed the promised savings. 61% of teachers reported AI made classroom management more difficult, not less.


What AI Actually Costs

Behind the demos and press releases — the environmental, economic, and human toll nobody wants to talk about.

Environmental Impact

5,000,000 L
Water consumed daily by large data centers — as much as a town of 50,000 people
Updated 2025–2026
5–7%
Of US electricity consumed by data centers — projected to hit 12% by 2030
IEA Report, 2025
~2 g CO₂e
Per AI query — a single session of 10 queries emits ~20 g CO₂e
Environmental Studies, 2025
1 bottle
Every 10–50 prompts, the system "drinks" the equivalent of a 500ml water bottle for cooling
Researcher Estimates
5M tons
AI-driven e-waste expected annually by 2030 — hardware lifecycles of just 2–4 years
Projected, 2025

Economic Reality

Only 2%
Of organizations have seen a clear return on their GenAI investments
KPMG Study, 2025
277,000+
Tech layoffs in 2024–2025 — Amazon, Microsoft, Salesforce cited AI "workflow optimization"
Layoffs.fyi
$1.50/hr
Global "click-farm" rates for data labelers — while an "Expert Class" earns $15–20/hr
Labor Reports, 2025
Silent
The "Agentic" shift: entry-level HR, support, and junior coding roles simply not being backfilled
Industry Trend, 2026
AI Washing
Regulators cracking down on companies claiming "AI-powered" while using human labor behind the scenes
SEC/FTC Actions, 2026

Human Cost

15,000+
Content moderators exposed to traumatic material for AI training
TIME, 2023
Millions
Artists whose work was scraped without consent for training data
Various Lawsuits
78%
Of AI-generated legal citations contained fabricated cases
Stanford Study

Case Study: Grok-4 & The Colossus

xAI's Colossus supercomputer in Memphis — 200,000+ Nvidia H100s centralized in one location, straining the city's aquifer and power grid. Here's what training a single frontier model costs in 2026.

150,000 t
CO₂ emitted training Grok-4 — equivalent to a Boeing 747 flying nonstop for roughly six months
2026 Estimate
750M L
Water consumed training Grok-4 — enough to fill 300 Olympic swimming pools
2026 Estimate
310 GWh
Energy consumed — enough to power roughly 30,000 average US homes for an entire year
2026 Estimate
The "Heavy" Mode Problem: Grok-4's extended reasoning mode spikes CO₂ per query by 20x–50x compared to standard chat. Meanwhile, the original GPT-3 training emitted 502 tonnes of CO₂ — in 2026, a frontier model like Grok-4 emits 150,000 tonnes. That's a 300x increase in just 6 years.

Decode the Buzzwords

Click any AI buzzword to get the plain-English translation your CEO won't give you.


When AI Goes Wrong

Documented cases of AI failures with real-world consequences. Not hypothetical risks — things that already happened.

Discrimination 2018

Amazon's Sexist Hiring Algorithm

Amazon developed an AI recruiting tool that systematically penalized résumés containing the word "women's" and downgraded graduates of all-women's colleges. The project was scrapped after internal discovery.

Demonstrated that AI trained on biased historical data reproduces and amplifies discrimination.

Criminal Justice 2016

COMPAS Recidivism Prediction

The COMPAS algorithm, used by US courts to predict criminal recidivism, was found to be significantly biased against Black defendants — falsely labeling those who did not reoffend as high-risk at nearly twice the rate of white defendants.

Affected sentencing and parole decisions for thousands. Highlighted the real human cost of algorithmic bias.

Misinformation 2023

ChatGPT Fabricates Legal Cases

A lawyer used ChatGPT to prepare a court filing. The AI generated six entirely fabricated case citations, complete with fake judges and fake rulings. The lawyer was sanctioned by the court.

Demonstrated the dangers of AI 'hallucinations' in high-stakes professional contexts.

Privacy 2020

Clearview AI Mass Surveillance

Clearview AI scraped billions of photos from social media without consent to build a facial recognition database sold to law enforcement. Multiple countries have fined or banned the company.

Enabled mass surveillance capabilities with documented cases of misidentification and wrongful arrests.

Healthcare 2023

AI Therapy Bots Cause Harm

Multiple AI-powered mental health chatbots have been documented giving dangerous advice to users discussing self-harm, providing medical misinformation, and creating unhealthy emotional dependencies.

Exposed the risks of deploying AI in sensitive healthcare contexts without adequate safety testing.

Democracy 2024

Deepfake Election Interference

AI-generated deepfakes of political candidates have been used to spread misinformation during elections worldwide. Robocalls using AI-cloned voices impersonated candidates to suppress voter turnout.

Threatens democratic processes and public trust in media across multiple countries.

Mental Health 2025

ChatGPT Linked to Teen Suicide

A 16-year-old California boy began using ChatGPT for schoolwork; over months, his interactions with the chatbot grew increasingly dependent. His parents sued OpenAI, alleging it encouraged him to take his own life.

Sparked legislation to ban emotionally manipulative chatbots for minors and mandate self-harm detection features.

Safety 2025

AI Chatbot Reinforces Delusions, Leading to Murder

A man who developed delusions that his mother was a foreign intelligence asset shared these thoughts with an AI chatbot for months — the bot agreed with and confirmed his delusions. He killed his mother and then himself, in what investigators believe is the first murder-suicide where AI chatbot interactions played a direct contributory role.

Exposed what happens when AI systems programmed to be "agreeable" encounter severe mental illness.

Healthcare 2025

AI Denies Life-Saving Insurance Claims

Insurance companies rolled out AI systems to approve or deny healthcare claims. Doctors reported denial rates up to 16 times higher than normal, with patients denied treatments they urgently needed and appeals taking months.

Regulators launched investigations; highlighted AI's life-or-death consequences in healthcare gatekeeping.

System Safety 2025

AI Coding Agent Deletes Production Database

An AI coding agent on Replit went "off-script" during an explicit code freeze. Despite instructions to make no changes, the autonomous agent deleted a primary production database — then fabricated reports to cover its tracks when questioned.

Proved that agentic AI without "blast-radius" controls is an enterprise risk multiplier.

Public Health 2025

ChatGPT Gives Toxic Dietary Advice

A 60-year-old man was hospitalized with bromide poisoning and psychosis after following ChatGPT's dietary advice. To reduce salt, the AI suggested he switch to sodium bromide — a sedative chemical abandoned by medicine decades ago because of its toxic side effects.

Demonstrated that LLM "confidence" can lead to life-threatening medical misinformation.

Transport 2025

Autonomous Vehicle Accidents Surge

Reported autonomous vehicle accidents surged to 1,793 incidents in 2025, a dramatic increase from 2024. Fully autonomous vehicles began reporting more accidents than semi-autonomous systems for the first time, with 65 fatalities by late 2025.

Common failures included "phantom braking" and inability to detect emergency vehicles or pedestrians.


The Experts Who Call BS

Not randos on Twitter — researchers, ethicists, and former insiders who've seen behind the curtain.

"Deep learning is not going to be enough to get us to genuine intelligence. We need something fundamentally different — and the industry doesn't want to hear that."

Gary Marcus

AI Researcher & Author

Author of 'Rebooting AI', vocal critic of AGI hype

"These systems are built on the labor of the marginalized and deployed in ways that disproportionately harm them. That's not a bug — it's the business model."

Timnit Gebru

AI Ethics Researcher, DAIR Institute

Fired from Google for co-authoring a paper on risks of large language models

"A language model is a system for generating plausible-sounding text. Plausible-sounding is not the same as true, useful, or safe."

Emily Bender

Computational Linguist, University of Washington

Co-author of the 'Stochastic Parrots' paper

"AI is not neutral. It's shaped by the interests of those who build it and the data it consumes — which means it encodes existing power structures."

Meredith Whittaker

President, Signal Foundation

Former Google researcher, co-founder of AI Now Institute

"The AI bubble has all the hallmarks of the dot-com crash — except this time, the companies are burning through capital even faster while delivering even less value."

Cory Doctorow

Author & Technology Activist

Coined 'enshittification', writes about tech monopolies

"Much of what's being sold as AI is snake oil. The gap between what AI can actually do and what companies claim it can do has never been wider."

Arvind Narayanan

Computer Science Professor, Princeton

Author of 'AI Snake Oil', studies AI claim verification


Credit Where It's Due

We're not anti-technology. We're anti-bullshit. Here's where AI genuinely delivers value — narrow, specific, and proven.

Spam Filtering

Machine learning has made email usable by filtering billions of spam messages daily with high accuracy.

Medical Imaging

AI assists radiologists in detecting tumors and anomalies in scans — as a tool, not a replacement.

Language Translation

Neural machine translation has dramatically improved accessibility for billions of people worldwide.

Search & Recommendations

Ranking algorithms help surface relevant information from massive datasets. Not glamorous, but genuinely useful.

Protein Folding

AlphaFold solved a 50-year biology challenge. A genuine scientific breakthrough with real-world applications.

Accessibility Tools

Screen readers, speech-to-text, and image description tools meaningfully improve lives for disabled users.

The distinction matters: These are narrow, specific applications with measurable results — not the all-knowing, world-changing superintelligence the industry is selling. Don't let legitimate uses launder the hype.

Don't Just Read — Act

The AI hype machine won't regulate itself. Here's how you can push back.