// status: overhyped
"They promised you a revolution. They gave you a chatbot that hallucinates."
A collection of bold AI predictions vs. what actually happened. Spoiler: reality was less impressed.
"We'll have self-driving cars by 2018."
— Elon Musk
It's 2026 and full self-driving still requires human supervision. Tesla's 'Autopilot' has been involved in hundreds of crashes.
"AI will replace radiologists within 5 years."
— Geoffrey Hinton
Radiology jobs have actually increased. AI tools assist but can't replace the nuanced clinical judgment of trained specialists.
"GPT-4 shows sparks of artificial general intelligence."
— Microsoft Research
It autocompletes text really well. It also confidently invents legal cases, can't reliably count letters, and fails basic logic puzzles.
"AI will create more jobs than it destroys."
— World Economic Forum
Tech layoffs surged. Companies used AI as justification to cut staff while productivity gains remained unproven at scale.
"We are on the cusp of AGI."
— Sam Altman
'The cusp' seems to conveniently recede with each passing year — but the funding rounds keep getting bigger.
"AI will solve climate change."
— Various Tech Leaders
Training a single large AI model can emit as much carbon as five cars over their entire lifetimes. Data centers, their growth driven largely by AI, consumed about 4.4% of US electricity in 2024.
"Our AI chatbot provides safe mental health support."
— Multiple Startups
AI therapy bots have given dangerous medical advice, encouraged self-harm, and created false emotional bonds with vulnerable users.
"AI-powered hiring removes human bias."
— HR Tech Industry
Amazon's AI hiring tool was scrapped after it systematically discriminated against women. Many AI hiring tools amplify existing biases.
Behind the demos and press releases — the environmental, economic, and human toll nobody wants to talk about.
Click any AI buzzword to get the plain-English translation your CEO won't give you.
Documented cases of AI failures with real-world consequences. Not hypothetical risks — things that already happened.
Amazon developed an AI recruiting tool that systematically penalized résumés containing the word "women's" and downgraded graduates of all-women's colleges. The project was scrapped after engineers discovered the bias internally.
Demonstrated that AI trained on biased historical data reproduces and amplifies discrimination.
The COMPAS algorithm, used by US courts to predict criminal recidivism, was found to be significantly biased against Black defendants — falsely labeling them as high-risk at nearly twice the rate of white defendants.
Affected sentencing and parole decisions for thousands. Highlighted the real human cost of algorithmic bias.
A lawyer used ChatGPT to prepare a court filing. The AI generated six entirely fabricated case citations, complete with bogus quotes and invented judicial opinions. The lawyer was sanctioned by the court.
Demonstrated the dangers of AI 'hallucinations' in high-stakes professional contexts.
Clearview AI scraped billions of photos from social media without consent to build a facial recognition database sold to law enforcement. Multiple countries have fined or banned the company.
Enabled mass surveillance capabilities with documented cases of misidentification and wrongful arrests.
Multiple AI-powered mental health chatbots have been documented giving dangerous advice to users discussing self-harm, providing medical misinformation, and creating unhealthy emotional dependencies.
Exposed the risks of deploying AI in sensitive healthcare contexts without adequate safety testing.
AI-generated deepfakes of political candidates have been used to spread misinformation during elections worldwide. Robocalls using AI-cloned voices impersonated candidates to suppress voter turnout.
Threatens democratic processes and public trust in media across multiple countries.
Not randos on Twitter — researchers, ethicists, and former insiders who've seen behind the curtain.
"Deep learning is not going to be enough to get us to genuine intelligence. We need something fundamentally different — and the industry doesn't want to hear that."
Gary Marcus
AI Researcher & Author
Co-author of 'Rebooting AI', vocal critic of AGI hype
"These systems are built on the labor of the marginalized and deployed in ways that disproportionately harm them. That's not a bug — it's the business model."
Timnit Gebru
AI Ethics Researcher, DAIR Institute
Ousted from Google after co-authoring a paper on the risks of large language models
"A language model is a system for generating plausible-sounding text. Plausible-sounding is not the same as true, useful, or safe."
Emily Bender
Computational Linguist, University of Washington
Co-author of the 'Stochastic Parrots' paper
"AI is not neutral. It's shaped by the interests of those who build it and the data it consumes — which means it encodes existing power structures."
Meredith Whittaker
President, Signal Foundation
Former Google researcher, co-founder of AI Now Institute
"The AI bubble has all the hallmarks of the dot-com crash — except this time, the companies are burning through capital even faster while delivering even less value."
Cory Doctorow
Author & Technology Activist
Coined 'enshittification', writes about tech monopolies
"Much of what's being sold as AI is snake oil. The gap between what AI can actually do and what companies claim it can do has never been wider."
Arvind Narayanan
Computer Science Professor, Princeton
Co-author of 'AI Snake Oil', studies the gap between AI claims and AI capabilities
We're not anti-technology. We're anti-bullshit. Here's where AI genuinely delivers value — narrow, specific, and proven.
Machine learning has made email usable by filtering billions of spam messages daily with high accuracy.
AI assists radiologists in detecting tumors and anomalies in scans — as a tool, not a replacement.
Neural machine translation has dramatically improved accessibility for billions of people worldwide.
Ranking algorithms help surface relevant information from massive datasets. Not glamorous, but genuinely useful.
AlphaFold solved the 50-year-old protein structure prediction problem. A genuine scientific breakthrough with real-world applications.
Screen readers, speech-to-text, and image description tools meaningfully improve lives for disabled users.
The AI hype machine won't regulate itself. Here's how you can push back.