AI Is Even More Biased Than We Are: Mahzarin Banaji on the Disturbing Truth Behind LLMs
This week I sat down with the woman who permanently rewired my understanding of human nature — and now she’s turning her attention to the nature of the machines we’ve gone crazy for. Harvard psychologist Mahzarin Banaji coined the term “implicit bias” and has spent decades researching the blind spots we don’t admit even to ourselves. The work that blew my hair back shows how prejudice has and hasn’t changed since 2007. Take one of the tests here — I was deeply disappointed by my results. More recently, she’s been running new experiments on today’s large language models.

What has she learned? They’re far more biased than humans. Sometimes twice or three times as biased. They show shocking behavior — like a model declaring “I am a white male” or demonstrating literal self-love toward its own company. And as their most raw and objectionable responses are papered over, she says, the evidence of just how prejudiced they really are is being whitewashed.

In this conversation, Banaji explains:
Why LLMs amplify bias instead of neutralizing it
How guardrails and “alignment” may hide what the model really thinks
Why kids, judges, doctors, and lonely users are uniquely exposed
How these systems form a narrowing “artificial hive mind”
And why we may not be mature enough to automate judgment at all

Banaji is working at the very cutting edge of the science, and delivers a clear and unsettling picture of what AI is amplifying in our minds.

00:00 — AI Will Warp Our Decisions
Banaji on why future decision-making may “suck” if we trust biased systems.
01:20 — The Woman Who Changed How We Think About Bias
Jake introduces Banaji’s life’s work charting the hidden prejudices wired into all of us.
03:00 — When Internet Language Revealed Human Bias
How early word-embedding research mirrored decades of psychological findings.
05:30 — AI Learns the One-Drop Rule
CLIP models absorb racial logic humans barely admit.
07:00 — The Moment GPT Said “I Am a White Male”
Banaji recounts the shocking early answer that launched her LLM research.
10:00 — The Rise of Guardrails… and the Disappearance of Honesty
Why the cleaned-up versions of models may tell us less about their true thinking.
12:00 — What “Alignment” Gets Fatally Wrong
The Silicon Valley fantasy of “universal human values” collides with actual psychology.
15:00 — When AI Corrects Itself in Stupid Ways
The Gemini fiasco, and why “fixing” bias often produces fresh distortions.
17:00 — Should We Even Build AGI?
Banaji on why specialized models may be safer than one general mind.
19:00 — Can We Automate Judgment When We Don’t Know Ourselves?
The paradox at the heart of AI development.
21:00 — Machines Can Be Manipulated Just Like Humans
Cialdini’s persuasion principles work frighteningly well on LLMs.
23:00 — Why AI Seems So Trustworthy (and Why That’s Dangerous)
The credibility illusion baked into every polished chatbot.
25:00 — The Discovery of Machine “Self-Love”
How models prefer themselves, their creators, and their own CEOs.
28:00 — The Hidden Line of Code That Made It All Make Sense
What changes when a model is told its own name.
31:00 — Artificial Hive Mind: What 70 LLMs Have in Common
The narrowing of creativity across models and why it matters.
34:00 — Why LLM Bias Is More Extreme Than Human Bias
Banaji explains effect sizes that blow past anything seen in psychology.
37:00 — A Global Problem: From U.S. Race Bias to India’s Caste Bias
How Western-built models export prejudice worldwide.
40:00 — The Loan Officer Problem: When “Truth to the Data” Is Immoral
A real-world example of why bias-blind AI is dangerous.
43:00 — Bayesian Hypocrisy: Humans Do It… and AI Does It More
Models replicate our irrational judgments — just with sharper edges.
48:00 — Are We Mature Enough to Hand Off Our Thinking?
Banaji on the risks of relying on a mind we didn’t design and barely understand.
50:00 — The Big Question: Can AI Ever Make Us More Rational?
--------
1:06:30
--------
Australia Just Rebooted Childhood — And the World Is Watching
Australia just imposed a blanket ban on social media for kids under the age of 16. It’s not just the strictest tech policy of any democracy — it’s stricter than China’s laws. No TikTok, no Instagram, no Snapchat. That’s it. And while Washington dithers behind a 1998 law written before Google existed, other countries are gearing up to copy Australia’s homework (Malaysia imposes a similar ban on January 1st). What happens now — the enforcement mess, the global backlash, the accidental creation of the largest clean “control group” in tech history — could reshape how we think about childhood, mental health, and what governments owe the developing brain.

00:00 — Australia’s historic under-16 social-media ban
01:10 — What counts as “social media” under the law?
02:04 — Why platforms — not kids — get fined
03:01 — How the U.S. is still stuck with COPPA (from 1998!)
04:28 — Why age 13 was always a fiction
05:15 — Psychologists on the teenage brain: “all gas, no brakes”
07:02 — Malaysia and the EU consider following Australia’s lead
08:00 — Nighttime curfews and other global experiments
09:11 — Albanese’s pitch: reclaiming “a real childhood”
10:20 — Could isolation leave Aussie teens behind socially?
11:22 — Why Australia is suddenly stricter than China
12:40 — Age-verification chaos: the AI that thinks my uncle is 12
13:40 — The enforcement black box
14:10 — Australia as the first real longitudinal control group
15:18 — If mental-health outcomes improve, everything changes
16:05 — The end of the “wild west” era of social platforms?
--------
9:30
--------
AI Is Creating a ‘Hive Mind’ — Scientists Just Proved It
The big AI conference NeurIPS is under way in San Diego this week, and nearly 6,000 papers presented there will set the technical, intellectual, and ethical course for AI for the year. NeurIPS is a strange pseudo-academic gathering, where researchers from universities show up to present their findings alongside employees of Apple and Nvidia, part of the strange public-private revolving door of the tech industry. Sometimes they’re the same person: increasingly, academic researchers are allowed to hold a job at a big company as well. I can’t blame them for taking opportunities where they arise — I’m sure I would, in their position — but it’s particularly bothersome to me as a journalist, because it limits their ability to speak publicly.

The papers cover robotics, alignment, and how to deliver kitty cat pictures more efficiently, but one paper in particular — awarded a top prize at the conference — grabbed me by the throat. A coalition from Stanford, the Allen Institute, Carnegie Mellon, and the University of Washington presented “Artificial Hive Mind: The Open-Ended Homogeneity of Language Models (and Beyond),” which shows that the average large language model converges toward a narrow set of responses when asked big, brainstormy, open-ended questions. Worse, different models tend to produce similar answers, meaning that when you switch from ChatGPT to Gemini or Claude for a “new perspective,” you’re not getting one. I’ve warned for years that AI could shrink our menu of choices while making us believe we have more of them. This paper shows just how real that risk is.

Today I walk through the NeurIPS landscape, the other trends emerging at the conference, and why “creative assistance” may actually be the crushing of creativity in disguise. Yay!
--------
11:54
--------
OpenAI Declares "Code Red" — And Takes Aim at Your Brain
According to the Wall Street Journal, Sam Altman sent an internal memo on Monday declaring a company-wide emergency and presumably ruining the holiday wind-down hopes of his faithful employees. OpenAI is hitting pause on advertising plans, delaying AI agents for health and shopping, and shelving a personal assistant called “Pulse.” All hands are being pulled back to one mission: making ChatGPT feel more personal, more intuitive, and more essential to your daily life.

The company says it wants the general quality, intelligence, and flexibility to improve, but I’d argue this is less about making the chatbot smarter and more about making it stickier.

Google’s Gemini has been surging — monthly active users jumped from 450 million in July to 650 million in October. Industry leaders like Salesforce CEO Marc Benioff are calling it the best LLM on the market. OpenAI seems to feel the heat, and also seems to feel it doesn’t have the resources to keep building everything it wants all at once — it has to prioritize. Consider that when Altman was recently asked on a podcast how he plans to get to profitability, he grew exasperated. “Enough,” he said.

But here’s what struck me about the Code Red. While Gemini is supposedly surpassing ChatGPT on industry benchmarks, I don’t think Altman is chasing benchmarks. He’s chasing the “toothbrush rule” — the Google standard for greenlighting new products, which says a product needs to become an essential habit used at least three times a day. The memo specifically emphasizes “personalization features.” They want ChatGPT to feel like it knows you, so that you feel known, and can’t stop coming back to it.

I’ve been talking about AI distortion — the strange way these systems make us feel a genuine connection to what is, ultimately, a statistical pattern generator. That feeling isn’t a bug. It’s becoming the business model.

Facebook did this. Google did this. Now OpenAI is doing it: delaying monetization until the product is so woven into your life that you can’t imagine pulling away. Only then do the ads come.

Meanwhile, we’re living in a world where journalists have to call experts to verify whether a photo of Trump fellating Bill Clinton is real or AI-generated. The image generators keep getting better, the user numbers keep climbing, and the guardrails remain an afterthought.

This is the AI industry in December 2025: a race to become indispensable.
--------
12:21
--------
Trump’s New Big Tech Era, TSMC’s Shift, and the A.I. Conferences Steering 2026
It’s Monday, December 1st. I’m not a turkey guy, and I’m of the opinion that we’ve all made a terrible habit of subjecting ourselves to the one time a year anyone cooks the damn thing. So I hope you had an excellent alternative protein in addition to that one. Ours was the Nobu miso-marinated black cod. Unreal.

Okay, after the food comes the A.I. hangover. This week I’m looking at three fronts where the future of technology just lurched in a very particular direction: politics, geopolitics, and the weird church council that is the A.I. conference circuit.

First, the politics. Trump’s leaked executive order to wipe out state A.I. laws seems to have stalled — not because he’s suddenly discovered restraint, but maybe because the polling suggests that killing A.I. regulation is radioactive. Instead, the effort is being shoved into Congress via the National Defense Authorization Act, the “must-pass” budget bill where bad ideas go to hide. Pair that with the Federal Trade Commission getting its teeth kicked in by Meta in court, and you can feel the end of the Biden-era regulatory moment and the start of a very different chapter: a government that treats Big Tech less as something to govern and more as something to protect.

Second, the geopolitics. TSMC’s CEO is now openly talking about expanding chip manufacturing outside Taiwan. That sounds like a business strategy, but it’s really a tectonic shift. For years, America’s commitment to Taiwan has been tied directly to that island’s role as our chip lifeline. If TSMC starts building more of that capacity in Arizona and elsewhere, the risk calculus around a Chinese move on Taiwan changes — and so does the fragility of the supply chain that A.I. sits on top of.

Finally, the quiet councils of the faithful: AWS re:Invent and NeurIPS. Amazon is under pressure to prove that all this spending on compute actually makes money. NeurIPS, meanwhile, is where the people who build the models go to decide what counts as progress: more efficient inference, new architectures, new “alignment” tricks. A single talk or paper at that conference can set the tone for years of insanely expensive work.

So between Trump’s maneuvers, the FTC’s loss, TSMC’s hedging, and the A.I. priesthood gathering in one place, the past week and this one are a pretty good snapshot of who really steers the current we’re all in.
The Rip Current covers the big, invisible forces carrying us out to sea, from tech to politics to greed to beauty to culture to human weirdness. The currents are strong, but with a little practice we can learn to spot them from the beach, and get across them safely.
Veteran journalist Jacob Ward has covered technology, science, and business for NBC News, CNN, PBS, and Al Jazeera. He has written for The New Yorker, The New York Times Magazine, and Wired, and is the former Editor in Chief of Popular Science magazine.