From Cat Recognition to Existential Risk
The Book That Documents AI's Six Most Insane Years
In 2012, a neural network learned to recognize cats in YouTube videos. It needed 16,000 processors and three days to do it. The result made the front page of the New York Times.
In 2025, a neural network passed the bar exam, wrote production-quality code, diagnosed rare diseases from blurry scans, and composed music that made professional musicians genuinely uncomfortable. It ran on your phone.
The distance between these two moments is thirteen years. But the real story — the one that reshapes every creative brief you’ll ever write, every brand strategy you’ll ever build, every team you’ll ever manage — happened in the last six of those years. And one book tried to capture it while it was still happening.
That book is The Scaling Era: An Oral History of AI, 2019–2025 by Dwarkesh Patel with Gavin Leech. Published by Stripe Press. 248 pages. 170+ technical definitions. Eight thematic chapters. And a cast of characters that reads like the Avengers of artificial intelligence — if the Avengers disagreed on whether they were saving the world or accidentally ending it.
Here’s why you, specifically, should care.
Who made this and why it matters
Dwarkesh Patel is a 27-year-old podcaster. That sentence should make you skeptical. It made me skeptical. But here’s the thing: his podcast guests include Dario Amodei (CEO, Anthropic — the company behind Claude), Demis Hassabis (CEO, Google DeepMind — Nobel Prize winner), Ilya Sutskever (co-founder of OpenAI, now building Safe Superintelligence), Mark Zuckerberg, and Eliezer Yudkowsky, the person who has spent two decades arguing that AI might kill us all.
Jeff Dean — Google’s chief scientist — called the book a great distillation of conversations that help people understand modern AI. Patrick McKenzie posed the only test that matters: would you learn more reading this book than spending the equivalent time using an LLM? His answer was yes.
Gavin Leech, the co-author, has a PhD in AI and co-founded Arb Research. His job was to do what Patel’s interviews alone couldn’t: add 170+ margin definitions, annotate technical claims, fact-check timelines, and thread the excerpts into a coherent narrative.
The publisher is Stripe Press — Patrick Collison’s imprint. The same Collison who built a $95 billion payments company and publishes books about progress the way other billionaires collect yachts.
This lineage matters because it tells you what the book isn’t. It isn’t another “AI Will Change Everything” manifesto written by a consultant who discovered ChatGPT in January 2023. It isn’t a doom pamphlet. It isn’t a how-to guide. It’s an oral history — in the tradition of Studs Terkel, not Kai-Fu Lee. The people who are building the most powerful technology in human history, talking on the record about what they think they’re doing, what they’re afraid of, and what they don’t understand.
The only book of its kind (at least that’s how it feels)
Between 2020 and 2025, dozens of AI books hit the market. Most suffered from the same two problems.
Problem one: they were written by outsiders looking in. Journalists, consultants, futurists — people interpreting the technology through secondhand accounts and press releases. The resulting books read like restaurant reviews written by people who’ve never cooked.
Problem two: they aged like milk. A book finished in March 2023 was outdated by the time it shipped in September. The field moves at a pace that makes traditional publishing look like sending messages by carrier pigeon.
Patel’s book dodges both problems through its format. Oral history doesn’t claim to predict — it captures. When Dario Amodei says he was surprised by how much scaling kept working, that surprise doesn’t expire. When François Chollet argues that LLMs can’t truly reason — only fetch memorized patterns — that argument doesn’t become irrelevant just because GPT-5 launched. These are the frameworks, intuitions, and disagreements of the people building the systems. The thinking behind the technology ages better than any snapshot of the technology itself.
And as of April 2026, no other book has attempted this at this scale with this level of access. There are academic papers. There are podcasts. There are blog posts and Substacks (including, yes, this one). But a curated, structured oral history of the scaling era, with the actual protagonists? This is it.
The Translator’s Chapter Guide: What Each Section Means for You
I read this book with a specific lens: what does each chapter tell someone who builds brands, manages creative teams, or makes product decisions? Here’s the field guide.
Chapter 1: Scaling — “Why your AI budget will grow exponentially and there’s nothing you can do about it”
The core thesis: making AI models bigger, with more data and more compute, produces reliably better results. This is called the “scaling hypothesis,” and it’s the central bet that every major AI lab is making. Billions of dollars riding on a power law.
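To make “power law” concrete, here is a minimal sketch of the idea. The function and constants below are invented for illustration; they are not figures from the book or from any lab:

```python
# Toy scaling curve in the spirit of published scaling laws:
# loss falls as a power law in compute, L = a * C^(-alpha).
# The constants a and alpha are made up for illustration.

def toy_loss(compute: float, a: float = 10.0, alpha: float = 0.05) -> float:
    """Model loss that improves as a power law in compute."""
    return a * compute ** -alpha

for c in [1e3, 1e6, 1e9, 1e12]:
    print(f"compute {c:.0e} -> loss {toy_loss(c):.2f}")
```

Each thousandfold jump in compute buys roughly the same relative improvement. No sudden ceiling, no visible point of diminishing returns: that is the shape of the curve the labs are betting on.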
For creative professionals, the translation is this: the AI tools you’re using today are the worst they’ll ever be. Not metaphorically. Literally. The models get better with more resources, and the resources keep doubling. When Mark Zuckerberg says it’s worth investing $100 billion-plus on the assumption that scaling continues — that’s not a bluff. That’s a budget line item.
The practical implication: stop evaluating AI tools as fixed products. They’re trajectories. The question isn’t “can Midjourney do this today?” It’s “what happens to my competitive position when it can do this in 18 months?”
Chapter 2: Evals — “Nobody agrees on how to measure intelligence, and that’s your problem too”
The most fascinating debate in the book: François Chollet (creator of the Keras deep learning library, formerly of Google) arguing with Patel about whether LLMs actually reason or just memorize. Chollet’s position: show a model a problem it hasn’t seen before, genuinely novel rather than a variation on its training data, and it fails. LLMs don’t synthesize new solutions. They fetch stored ones.
Patel pushes back: isn’t that what humans do too? We drill math for years before we can “reason” about it.
This debate should be required reading for every creative director. Because the exact same question applies to AI-generated campaigns: is the model being creative, or is it fetching the most statistically probable remix of everything it’s seen? If you’ve read my Creativity Gap research, you know the answer is nuanced. But the evaluation problem is real. When Adobe says “creative AI” and Google says “creative AI,” they may not be measuring the same thing. Chollet’s ARC benchmark, puzzles designed to resist memorization, stumped LLMs for years, and its harder successor still gives frontier models trouble. Children can do these puzzles.
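If you want a feel for what “designed to resist memorization” means, here is a toy puzzle in the spirit of ARC. The grids and the rule are invented for illustration; they are not an actual benchmark task:

```python
# Toy puzzle in the spirit of ARC: a few input -> output grid pairs,
# and the solver must infer the hidden rule from scratch.
# This example is invented; it is not a real ARC task.

train_pair = (
    [[1, 0, 0],
     [0, 2, 0]],   # input
    [[0, 0, 1],
     [0, 2, 0]],   # output
)

test_input = [[3, 0, 0],
              [0, 0, 4]]

def inferred_rule(grid):
    """The rule a human spots from one example: mirror left to right."""
    return [row[::-1] for row in grid]

print(inferred_rule(test_input))  # [[0, 0, 3], [4, 0, 0]]
```

A child infers “mirror it” from a single example. A system that can only retrieve statistical patterns from its training data has nothing to retrieve here, and that gap is exactly what the benchmark is built to measure.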
Chapter 3: Internals — “The machine is a black box, and the people who built it are the first to admit it”
Trenton Bricken from Anthropic’s interpretability team explains how they’re trying to understand what happens inside a model. The honest answer: they’re only scratching the surface. Mechanistic interpretability — figuring out which “neurons” correspond to which concepts — is the AI equivalent of neuroscience. We built the brain before we understood it.
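To make the neuroscience analogy concrete, here is a cartoon version of the probing idea. This is a toy illustration with fake data, not Anthropic’s actual tooling or method:

```python
import numpy as np

# Cartoon interpretability probe: does one hidden unit's activation
# track a human-legible concept? Toy data, not a real model.
rng = np.random.default_rng(0)

is_cat = rng.integers(0, 2, size=1000)              # concept label per input
neuron = 2.0 * is_cat + rng.normal(0, 0.5, 1000)    # unit that noisily fires on cats

corr = np.corrcoef(neuron, is_cat)[0, 1]
print(f"correlation with the 'cat' concept: {corr:.2f}")  # high -> candidate 'cat neuron'
```

Real networks are rarely this tidy: concepts tend to be smeared across many neurons at once, which is a large part of why the field is still scratching the surface.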
For anyone making decisions based on AI output — brand guidelines, campaign copy, product recommendations — this chapter is a cold shower. The systems are powerful. They are also, in a fundamental sense, unexplained. When you use an LLM for ideation, you’re collaborating with something nobody fully understands. That’s not a reason to stop. It’s a reason to keep your judgment sharp.
Chapter 4: Safety — “The alignment tax you’re already paying”
This is where the book gets heavy. Carl Shulman calmly discusses the probability of AI takeover scenarios. Yudkowsky argues that humanity is nowhere near ready and isn’t on track to get ready in time. Dario Amodei walks the line between building the most powerful models on Earth and worrying about what they might do.
The creative industry connection is less obvious here, but it’s real. Every AI model you use has been through RLHF — Reinforcement Learning from Human Feedback. That process makes models helpful, harmless, and honest. It also makes them safe. And safe, as I’ve written before, is the enemy of great. Every alignment optimization comes at a diversity cost. The NeurIPS 2025 “Artificial Hivemind” paper shows that even models from different families converge on similar outputs. That’s not a bug in the alignment process. It’s a feature — with side effects that hit the creative industry first.
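The mechanics behind that diversity cost are visible in the standard RLHF objective: the model is rewarded for outputs humans prefer, but penalized for drifting from its original “reference” distribution. A minimal sketch, using a simplified per-sample form of the objective and invented numbers:

```python
import math

# Simplified per-sample RLHF objective: human reward minus a penalty
# for straying from the reference model. All numbers are invented.

def rlhf_objective(reward: float, p_new: float, p_ref: float, beta: float = 0.1) -> float:
    """reward - beta * log(p_new / p_ref) for one sampled output."""
    return reward - beta * math.log(p_new / p_ref)

# A surprising output (rare under the reference model) pays a penalty
# even when human raters love it:
print(rlhf_objective(reward=1.0, p_new=0.20, p_ref=0.01))  # bold idea: ~0.70
print(rlhf_objective(reward=0.8, p_new=0.20, p_ref=0.15))  # safe idea: ~0.77
```

The safe idea wins despite the lower human score. That beta term is literally a dial between interesting and on-distribution, and turning up the training pressure pushes outputs toward the statistically safe center: the same convergence the Hivemind paper documents.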
Chapter 5: Inputs — “The data wall is real, and you’re part of the supply chain”
Dylan Patel from SemiAnalysis and others discuss the looming data wall: we’re running out of high-quality text to train on. The solutions — synthetic data, multimodal training, longer context windows — are all compromises.
For brand owners: your content is training data. Your campaigns, your copy, your visual assets — they’re part of the distribution these models learn from. The quality of AI output is downstream of the quality of human output. This is both a threat (your work gets absorbed without compensation) and a strategic insight (the scarcity of truly original content increases).
Chapter 6: Impact — “What happens to your job, your team, and your budget”
Sholto Douglas from Anthropic describes how even he — a reinforcement learning infrastructure lead — can imagine AI automating most of his tasks within a few years. If the people building AI think their own jobs are at risk, the creative industry shouldn’t assume immunity.
But the nuance matters. The book doesn’t predict mass replacement. It predicts transformation. Zuckerberg talks about inference compute vs. training compute. Hassabis talks about the energy constraints that slow deployment. The intelligence explosion, if it comes, will be bottlenecked by power grids and permitting processes — not just algorithms. This means the transition won’t be instant. But it also won’t be optional.
Chapters 7 & 8: Explosion and Timelines — “How much time do you have?”
The range of predictions in this book is staggering. Some interviewees think AGI arrives by 2028; others think it’s decades away. Carl Shulman puts the probability of catastrophic outcomes at 20-25% on his more pessimistic days. Anthropic’s 2028 timeline struck several guests as plausible at the time of recording.
What’s useful here isn’t the specific dates. It’s the fact that the people closest to the technology disagree wildly about when transformative AI arrives — but almost none of them say “never.” For strategic planning in any industry, that’s the signal. Not when, but that the range of credible timelines is measured in years, not generations.
The expiry date problem (and why it doesn’t matter)
The book was published in October 2025. We’re in April 2026. Claude has shipped new capabilities. GPT-5 happened. The AI landscape has shifted again.
Does that make the book outdated?
No. And this is the key insight about oral history as a format: facts expire, but thinking patterns don’t. When Ilya Sutskever says he believes scaling will keep working but that the link between next-word prediction and reasoning is complicated — that framework for thinking about AI capabilities is still the right one, regardless of which model just launched. When Chollet says generality is not specificity scaled up — that’s a philosophical position, not a product spec. It doesn’t expire with a software update.
The book captures how the people shaping AI think, not just what they’ve built. And for anyone trying to navigate AI’s impact on their work, understanding the thinking is worth more than knowing the latest benchmark scores.
The creative industry’s homework
If you’re a creative director, a brand strategist, a product manager, or anyone who makes decisions about how humans and machines should work together — this book gives you something no AI tutorial, no LinkedIn hot take, and no conference keynote can provide: the unfiltered internal logic of the people building the systems you’re increasingly dependent on.
You’ll understand why the models behave the way they do. You’ll understand why alignment makes them conservative. You’ll understand why scaling creates both opportunity and homogeneity. You’ll understand why nobody — not Amodei, not Hassabis, not Zuckerberg — knows exactly where this is going.
And that understanding is the first step toward the only AI strategy that actually works: treating AI not as a tool to deploy, but as a system to navigate. With your judgment intact.
This story started with a neural network learning to spot cats in blurry YouTube frames. It ends with people seriously debating whether superintelligence arrives this decade. Somewhere between those two points lives the future of every creative brief, every brand, and every team you’ll ever build.
Read it. Then read it again in a year. See which parts aged well. That’s the real test.