The Quiet Extinction
We gave AI the apprentice seat. We forgot to save one for the humans.
Jeff Hawkins has a bleak way of putting our moment in perspective. In A Thousand Brains, he points out that every civilization is ephemeral — on the scale of universal time, the interval between a civilization’s invention of electromagnetic communication and its extinction is like the flash of a firefly. We appear, briefly, and we vanish. The only thing that outlasts us is what we successfully transmitted.
For the first time in human history, we have two kinds of heirs to teach. Biological ones — children, apprentices, the next generation of people who will carry what we built into a future we won’t see. And now digital ones: AI systems that are, in a very real sense, inheriting our accumulated knowledge, our judgment, our institutional memory.
We’ve spent thousands of years developing sophisticated machinery for teaching biological heirs. We invented schools, apprenticeships, residencies, and mentorship. We understood, at a bone level, that knowledge transmission requires time, struggle, failure, and consequence. You cannot hand someone a textbook and call them a surgeon.
But here is the point: we are not thinking nearly carefully enough about what we’re teaching our digital heirs. And we are actively dismantling the machinery for teaching our biological ones — at the exact moment we need both.
The Real Leverage: The Twenty Years Spent Behind the Tool
Here are some personal data points. Right now, with Claude Code, I can produce what would once have taken me and ten engineers working together. I have twenty years behind me: ML teams at Google running fifty experiments a quarter against a ten-billion-dollar revenue base, CTO of ThoughtSpot scaling to a four-and-a-half-billion-dollar valuation, and now building AmpUp, a sales brain that learns from every customer interaction. With Claude Code, I am thinking harder than I ever have and producing more than I ever could. The leverage AI gives me is real, and it is extraordinary.
That is not a brag. It is the setup for the most uncomfortable question I know how to ask.
The tool is powerful in my hands because of everything that came before it. Twenty years of experiments. Some were successes: spotting the bug before it reached production, closing deals by multi-threading, building architecture that scaled. Just as many were failures: wrong decisions, models that broke in production, deals I should not have lost, products I should not have built. Twenty years of taking chances, some that paid off, many that didn’t. The AI amplifies what is already there. Every experiment I have run, every risk I have taken, every success and failure I have lived through: that is the substrate the tool is operating on.
Take away the substrate and you do not get ten-engineer productivity. You get something that moves fast and breaks things in ways you do not notice until it is expensive.
So the question is not how powerful this tool can make a senior person. We are answering that in real time. The question is: are you building any path for someone to accumulate twenty years?
The answer, right now, is almost nobody, and the numbers are starting to show it.
PwC’s internal slides show entry-level audit hiring falling 39% by 2028. Across the Big Four, graduate job postings are down 44% year over year. In Big Tech, entry-level roles have been cut in half compared to pre-pandemic levels. Unemployment among recent college graduates has risen 50% since 2022, while overall youth unemployment held flat.
The jobs did not disappear. They were never created. If one senior person with AI does what eleven used to do, you do not need ten juniors to support them. No one making these decisions is irrational — they are optimizing the metrics that exist. It is locally rational, company by company, quarter by quarter. It is collectively catastrophic, and we are deep into it.
The Vanishing Apprentice: The Junior Wasn’t Cheap Labor. They Were the Curriculum.
Here is what gets lost in the efficiency calculation. The junior was not a cost to be optimized. The junior was a learning organism in a critical developmental window and the work they did, messy and imperfect as it was, served a purpose that had nothing to do with output.
The bad calls, the lost deals, the code that does not scale, the architecture decision that haunts a codebase for three years — that is not waste. That is the curriculum. Senior salespeople got good by being terrible junior salespeople first. Senior engineers got good by writing bad code for three years and having someone senior explain exactly why it was bad. Senior physicians got good by doing residencies, by seeing ten thousand patients under supervision, by making the hard call at 3am and being wrong sometimes and understanding why.
There is no other path we have ever found. Every apprenticeship model in every field requiring deep judgment is built on the same foundation: you must be allowed to fail consequentially, under guidance, long enough to develop instinct.
Instinct is not something you can download. It is something you build, slowly, through the specific kind of suffering that comes from caring about outcomes and getting them wrong.
If you think this is a sales and engineering problem, consider medicine, the oldest apprenticeship model we have, the one we made into law because we understood it was non-negotiable. Now imagine AI handles the diagnostic work that residents currently do: the pattern matching, the differential diagnosis, the routine interpretation. It may be better than a tired resident at 3am. Malpractice exposure goes down. Efficiency goes up. Run that logic for a decade. One day you need judgment, real judgment built from ten thousand hours of supervised failure, and you look around the room, and the room is full of people who never got the reps.
But some are taking note. And what they’re doing is worth paying attention to.
The Founder Who Said No and Built His Own Rules
A founder I know banned new grads from using AI for coding for their first three months. No exceptions. My first instinct was that it sounded reactionary — the kind of rule that comes from someone who learned on hard mode and wants everyone else to suffer the same way out of nostalgia for suffering.
Then I thought about it.
He was protecting the period where you are supposed to struggle, make errors, and learn from them. Where you go to the patient’s room ten times and look at the chart with a different lens until it all clicks into one cohesive story and the diagnosis becomes clear. Where you write the bad code and a senior engineer explains exactly why it is bad, and you feel the specific embarrassment that means you will never make that mistake again.
That friction is not inefficiency. It is the actual learning, the curriculum. He felt it before he could fully articulate it, and he built a rule around the feeling.
And to be clear: AI can explain, suggest, and critique. It is a remarkable tutor in many ways. But without stakes — without a world that pushes back and lets things die — the learning stays brittle. Understanding without consequence does not produce instinct. It produces confidence without depth, which is in some ways more dangerous than ignorance.
What Else Gets Lost When the Junior Leaves the Room
There is a second problem hiding inside the first one, subtler and in some ways more dangerous.
Junior people were not just learning. They were bringing a perspective the organization hadn’t yet homogenized. The analyst who asks the question nobody else asked — not because they are trying to be clever, but because they genuinely don’t know it’s not supposed to be asked. The SDR who tries the unconventional pitch because they haven’t learned the conventions yet. The engineer who builds it the wrong way because nobody told them the right way, and occasionally the wrong way turns out to be better.
The junior’s outsider perspective — still forming, not yet shaped by institutional gravity — is something you cannot simulate by adjusting a temperature parameter. And you are eliminating it at exactly the moment you most need people who can see what the system cannot see about itself.
And here is the inversion that should keep you up at night: the way AI gets smarter is exactly the way humans get smarter. Exposure to real problems, action, consequence, correction. Which means AI is currently sitting in the apprentice seat. Doing the entry-level work. Accumulating the reps. Getting better every quarter. The human junior is watching from outside the building, updating their LinkedIn, wondering what they did wrong.
In reality, they did not do anything wrong. The seat was quietly reassigned while everyone was looking at something else.
The Common Thread: AI and the Apprentice Learn Exactly the Same Way
There is a principle underneath all of this that applies equally to humans and to AI, and it is the one I keep returning to.
Learning requires consequence. Not information. Not access. Consequence.
But consequence does not mean pain. It means feedback your mental model did not predict. The junior engineer who ships the feature and watches it break under real load learns something no amount of code review could have taught them, not because it hurt, but because the gap between their mental model and reality was suddenly, undeniably visible. That is the consequence that matters. It is epistemic, not emotional.
AlphaGo did not get good by reading about Go. It played millions of games against itself, in environments where moves produced outcomes and outcomes updated the model. Your junior SDR does not get good from sales methodology PDFs. They get good from losing deals and sitting with the question of why — because the loss surfaces the assumption they did not know they were making.
This is equally true for the AI models that are actually getting better. The models making the most interesting progress are not the ones with more data. They are the ones in real environments — terminal bench, strategy games, physics simulators — taking actions, observing consequences, updating. That is not a coincidence. The architecture of learning is the same whether the learner is biological or digital.
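To make that shared architecture concrete, here is a deliberately toy sketch in Python. Everything in it is invented for illustration: a hidden threshold stands in for the judgment a learner must acquire, and the learner revises its estimate only when an action produces an outcome it did not predict.

```python
import random

class World:
    """A toy environment: it contains a fact the learner can only discover by acting."""
    def __init__(self):
        self.threshold = random.uniform(0.2, 0.8)  # the judgment call instinct eventually encodes

    def outcome(self, action: float) -> bool:
        return action >= self.threshold  # the deal closes, or it dies

class Learner:
    """No textbook: just an estimate, revised whenever the world contradicts it."""
    def __init__(self):
        self.estimate = 0.5

    def update(self, won: bool):
        if won:
            self.estimate *= 0.9                        # probe lower: was the win cheaper than I thought?
        else:
            self.estimate += 0.1 * (1 - self.estimate)  # the loss surfaces the hidden assumption

world, learner = World(), Learner()
for _ in range(500):  # the reps: act, observe the consequence, update
    learner.update(world.outcome(learner.estimate))

print(f"hidden threshold {world.threshold:.2f}, learned estimate {learner.estimate:.2f}")
```

Strip away the toy details and this is AlphaGo’s loop, and the junior SDR’s. Notice that no step in it is reading.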
Both broken apprenticeships have the same root and the same solution.
What Founders Can Actually Do: Moving the Curriculum, Not Losing It
The solution is further along than most people realize. The curriculum does not have to disappear. It has to move. And the tools to move it already exist.
Build deliberate simulation environments. AI roleplay for sales training was the crude first version: stiff, unconvincing, easy to game. But the technology has moved fast. A real simulator generates varied, adversarial, realistic scenarios; gives the learner an actual action space where choices matter; produces consequences (the deal dies, the system breaks, the stakeholder turns hostile); and delivers a structured debrief so you understand not just what happened but what you missed and why.
You can now build that for the entire buying committee behind the closed doors you will never actually be in. The CFO who always kills it on ROI. The champion who goes dark in week three for reasons you will not understand until it is too late. The procurement officer who appears in the final hour with requirements nobody mentioned. The competitor who shows up sideways in the last conversation before the decision. You can give a junior rep five hundred consequential reps before they ever touch a real account. The same logic applies to engineering: simulating production incidents, performance constraints, feature requests with hidden tradeoffs, systems that work locally and fail at scale.
The instinct that used to take three years of real experience can start forming in months of deliberate simulation. Fidelity is everything: a simulator that’s too easy just teaches you to win easy simulations. Make it uncomfortable. Make it honest. This is what we are doing at AmpUp for sellers.
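Concretely, the contract for such a simulator is small. The sketch below is hypothetical Python, not AmpUp’s actual API; the names are invented, but they map one-to-one onto the four components above: scenario generation, an action space, consequence, and a structured debrief.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Scenario:
    brief: str           # what the learner is told up front
    hidden_state: dict   # what they must infer: the CFO's real objection, the champion's silence

@dataclass
class Debrief:
    outcome: str         # the consequence: deal died, system broke, stakeholder turned hostile
    missed: list[str]    # what the learner never surfaced
    why: str             # the causal chain from their choices to the outcome

class Simulator(ABC):
    @abstractmethod
    def generate_scenario(self, difficulty: float) -> Scenario:
        """Varied, adversarial, realistic: never the same script twice."""

    @abstractmethod
    def step(self, scenario: Scenario, action: str) -> str:
        """A real action space: the learner's choice changes what happens next."""

    @abstractmethod
    def resolve(self, scenario: Scenario, transcript: list[str]) -> Debrief:
        """Consequence plus debrief: not just what happened, but what you missed and why."""
```

The design choice that matters is that resolve is allowed to kill the deal. A simulator whose scenarios cannot die is a demo, not a curriculum.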
But simulation alone is not enough. We also need structural commitments: protected junior roles that exist specifically for development, not just output. Funded apprenticeships. Policy incentives that make it rational for companies to invest in the next generation even when the quarterly math says not to. The curriculum can move into simulation, but it cannot live there entirely. At some point, the reps have to be real, and someone has to be willing to absorb the cost of letting a junior be junior.
Capture the junior’s arc, not just the seniors’ wisdom. The workflows being automated right now, and the judgment being encoded into your prompts and processes, come almost entirely from your most senior people. The junior’s arc isn’t making it in: not into your systems, not into your processes, not into the organizational layer that will outlast any individual.
Think of it like this: a child raised only on the wisdom of elders, never allowed to fall down and figure out how to get up, does not actually inherit the full culture. It inherits a curated version, confident and capable within the known distribution, and missing something it does not know it is missing: the unknown unknowns.
That is the internal AI layer that most companies are currently building. Unlike a foundation model trained on the breadth of human output, your internal systems will only ever know what your people taught them. Right now, that represents only a thin slice of what your people actually know.
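One hedged sketch of what capturing the arc could look like in practice: log the attempt, the consequence, and the correction as one record, so the internal layer learns the path to the answer rather than only the answer. The schema below is hypothetical, not any existing system’s format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ArcEvent:
    """One step in a learner's trajectory: the part most knowledge capture throws away."""
    at: datetime
    attempt: str                   # what the junior actually tried
    consequence: str               # what the world said back
    correction: str | None = None  # the senior's explanation of why, when there was one

@dataclass
class LearningArc:
    learner: str
    task: str
    events: list[ArcEvent] = field(default_factory=list)

    def record(self, attempt: str, consequence: str, correction: str | None = None):
        self.events.append(ArcEvent(datetime.now(timezone.utc), attempt, consequence, correction))

# The failed attempt is stored next to the fix, so a model trained on this
# sees the full arc, not just the polished final judgment.
arc = LearningArc(learner="new_grad_07", task="design the rate limiter")
arc.record("per-node counters", "worked locally, failed at scale", "counters must be shared or sharded")
arc.record("sharded shared counters", "held under load test")
```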
What We’re Leaving Behind and What We Are Actually Inheriting
Hawkins worried about what AI would carry forward after we are gone. I am worried about what we are teaching right now and what we are forgetting to teach the humans alongside it.
If AI is genuinely the vessel for what we have built — carrying our accumulated knowledge past the firefly flash of our civilization — then what we put in the vessel is everything. A digital heir that learned by doing, by consequence, by failure and recovery, by the full arc of how expertise actually develops, carries something richer and more durable than one that learned only from what the last generation wrote down when they were already wise. The same is true of the human heirs working beside it.
Simulation is part of the answer. Building real consequence environments for both humans and AI, deliberately, at scale, as a design priority, is work that needs to start now and will take a decade to get right. But it is not the whole answer, and the people who need to be working on this problem are not yet working on it with anywhere near enough urgency.
We need more minds on this. Not just AI researchers. Not just HR leaders. Founders, educators, policymakers, anyone who understands that the arc from junior to senior, to the kind of senior who builds the next generation, is not optional infrastructure. It is the infrastructure. And it needs to start earlier than we think, in the formative years of education, empowering kids to learn faster in consequence-rich, engaging environments that build instinct for the real world without the real world’s costs.
The future is not just being automated. It is being inherited. And right now we are not leaving much worth inheriting.
We gave AI the apprentice seat. The human junior is still outside the building. We need to bring them back in, before there is no one left who remembers what the seat was for.