// status: overhyped

AI Is A Scam.

"They promised you a revolution. They gave you a chatbot that hallucinates."


Promises That Aged Terribly

A collection of bold AI predictions vs. what actually happened. Spoiler: reality was less impressed.

2015

"We'll have self-driving cars by 2018."

— Elon Musk

It's 2026 and full self-driving still requires human supervision. Tesla's 'Autopilot' has been involved in hundreds of crashes.

2016

"AI will replace radiologists within 5 years."

— Geoffrey Hinton

Radiology jobs have actually increased. AI tools assist but can't replace the nuanced clinical judgment of trained specialists.

2023

"GPT-4 shows sparks of artificial general intelligence."

— Microsoft Research

It autocompletes text really well. It also confidently invents legal cases, can't reliably count letters, and fails basic logic puzzles.

2020

"AI will create more jobs than it destroys."

— World Economic Forum

Tech layoffs surged. Companies used AI as justification to cut staff while productivity gains remained unproven at scale.

2023

"We are on the cusp of AGI."

— Sam Altman

'The cusp' seems to conveniently recede with each passing year — but the funding rounds keep getting bigger.

2021

"AI will solve climate change."

— Various Tech Leaders

Training a single large AI model can emit as much carbon as five cars over their entire lifetimes. US data centers consumed about 4.3% of the nation's electricity in 2024, with AI workloads driving much of the growth.

2022

"Our AI chatbot provides safe mental health support."

— Multiple Startups

AI therapy bots have given dangerous medical advice, encouraged self-harm, and created false emotional bonds with vulnerable users.

2019

"AI-powered hiring removes human bias."

— HR Tech Industry

Amazon's AI hiring tool was scrapped after it systematically discriminated against women. Many AI hiring tools amplify existing biases.


What AI Actually Costs

Behind the demos and press releases — the environmental, economic, and human toll nobody wants to talk about.

🌍

Environmental Impact

700,000 L
Water used daily by a single large data center
AP News, 2024
4.3%
Of US electricity consumed by data centers in 2024
IEA Report
626,000 lbs
CO₂ emitted by training one large transformer model (lifecycle estimate)
Stanford HAI
💰

Economic Reality

$1T+
Invested in AI with returns still largely unproven
Bloomberg, 2025
260,000+
Tech layoffs in 2023–2024, many citing AI optimization
Layoffs.fyi
$2/hr
Reported pay for Kenyan data labelers training AI models
TIME Investigation
👤

Human Cost

15,000+
Content moderators exposed to traumatic material for AI training
TIME, 2023
Millions
Artists whose work was scraped without consent for training data
Various Lawsuits
78%
Of AI-generated legal citations in one study that were fabricated or inaccurate
Stanford Study



When AI Goes Wrong

Documented cases of AI failures with real-world consequences. Not hypothetical risks — things that already happened.

Discrimination 2018

Amazon's Sexist Hiring Algorithm

Amazon developed an AI recruiting tool that systematically penalized résumés containing the word "women's" and downgraded graduates of all-women's colleges. The tool was scrapped after the bias was discovered internally.

Demonstrated that AI trained on biased historical data reproduces and amplifies discrimination.

Criminal Justice 2016

COMPAS Recidivism Prediction

The COMPAS algorithm, used by US courts to predict criminal recidivism, was found to be significantly biased against Black defendants — labeling them as high-risk at nearly twice the rate of white defendants.

Affected sentencing and parole decisions for thousands. Highlighted the real human cost of algorithmic bias.

Misinformation 2023

ChatGPT Fabricates Legal Cases

A lawyer used ChatGPT to prepare a court filing. The AI generated six entirely fabricated case citations, complete with fake judges and fake rulings. The lawyer was sanctioned by the court.

Demonstrated the dangers of AI 'hallucinations' in high-stakes professional contexts.

Privacy 2020

Clearview AI Mass Surveillance

Clearview AI scraped billions of photos from social media without consent to build a facial recognition database sold to law enforcement. Multiple countries have fined or banned the company.

Enabled mass surveillance capabilities with documented cases of misidentification and wrongful arrests.

Healthcare 2023

AI Therapy Bots Cause Harm

Multiple AI-powered mental health chatbots have been documented giving dangerous advice to users discussing self-harm, providing medical misinformation, and creating unhealthy emotional dependencies.

Exposed the risks of deploying AI in sensitive healthcare contexts without adequate safety testing.

Democracy 2024

Deepfake Election Interference

AI-generated deepfakes of political candidates have been used to spread misinformation during elections worldwide. Robocalls using AI-cloned voices impersonated candidates to suppress voter turnout.

Threatens democratic processes and public trust in media across multiple countries.


The Experts Who Call BS

Not randos on Twitter — researchers, ethicists, and former insiders who've seen behind the curtain.

"Deep learning is not going to be enough to get us to genuine intelligence. We need something fundamentally different — and the industry doesn't want to hear that."

Gary Marcus

AI Researcher & Author

Author of 'Rebooting AI', vocal critic of AGI hype

"These systems are built on the labor of the marginalized and deployed in ways that disproportionately harm them. That's not a bug — it's the business model."

Timnit Gebru

AI Ethics Researcher, DAIR Institute

Fired from Google for co-authoring a paper on risks of large language models

"A language model is a system for generating plausible-sounding text. Plausible-sounding is not the same as true, useful, or safe."

Emily Bender

Computational Linguist, University of Washington

Co-author of the 'Stochastic Parrots' paper

"AI is not neutral. It's shaped by the interests of those who build it and the data it consumes — which means it encodes existing power structures."

Meredith Whittaker

President, Signal Foundation

Former Google researcher, co-founder of AI Now Institute

"The AI bubble has all the hallmarks of the dot-com crash — except this time, the companies are burning through capital even faster while delivering even less value."

Cory Doctorow

Author & Technology Activist

Coined 'enshittification', writes about tech monopolies

"Much of what's being sold as AI is snake oil. The gap between what AI can actually do and what companies claim it can do has never been wider."

Arvind Narayanan

Computer Science Professor, Princeton

Author of 'AI Snake Oil', studies AI claim verification


Credit Where It's Due

We're not anti-technology. We're anti-bullshit. Here's where AI genuinely delivers value — narrow, specific, and proven.

📧

Spam Filtering

Machine learning has made email usable by filtering billions of spam messages daily with high accuracy.

🏥

Medical Imaging

AI assists radiologists in detecting tumors and anomalies in scans — as a tool, not a replacement.

🌐

Language Translation

Neural machine translation has dramatically improved accessibility for billions of people worldwide.

🔎

Search & Recommendations

Ranking algorithms help surface relevant information from massive datasets. Not glamorous, but genuinely useful.

🧬

Protein Folding

AlphaFold solved a 50-year biology challenge. A genuine scientific breakthrough with real-world applications.

♿

Accessibility Tools

Screen readers, speech-to-text, and image description tools meaningfully improve lives for disabled users.

⚠️ The distinction matters: These are narrow, specific applications with measurable results — not the all-knowing, world-changing superintelligence the industry is selling. Don't let legitimate uses launder the hype.

Don't Just Read — Act

The AI hype machine won't regulate itself. Here's how you can push back.