<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Amit’s Substack]]></title><description><![CDATA[Here I write as the CEO of AmpUp: thoughts, observations, new learnings.]]></description><link>https://amit.ampup.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!jPTW!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd0e656a-fce2-4254-a059-4c1a5eba5620_225x225.jpeg</url><title>Amit’s Substack</title><link>https://amit.ampup.ai</link></image><generator>Substack</generator><lastBuildDate>Thu, 14 May 2026 21:30:04 GMT</lastBuildDate><atom:link href="https://amit.ampup.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Amit Prakash]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[ampuphq@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[ampuphq@substack.com]]></itunes:email><itunes:name><![CDATA[Amit Prakash]]></itunes:name></itunes:owner><itunes:author><![CDATA[Amit Prakash]]></itunes:author><googleplay:owner><![CDATA[ampuphq@substack.com]]></googleplay:owner><googleplay:email><![CDATA[ampuphq@substack.com]]></googleplay:email><googleplay:author><![CDATA[Amit Prakash]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[How LLMs Went From Chatbot to Coworker: Tools, Tests, and Robots]]></title><description><![CDATA[Turns out, even for LLMs, the best way to learn a job is to actually do it]]></description><link>https://amit.ampup.ai/p/how-llms-went-from-chatbot-to-coworker</link><guid 
isPermaLink="false">https://amit.ampup.ai/p/how-llms-went-from-chatbot-to-coworker</guid><dc:creator><![CDATA[Amit Prakash]]></dc:creator><pubDate>Sat, 28 Mar 2026 17:13:59 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f19795d4-d1f8-4a2f-8301-f4a619d0d443_1536x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>In <a href="https://amit.ampup.ai/p/how-llms-keep-getting-smarter-the">Part 1</a>, I tried to explain why AI models keep getting smarter even though the internet hasn&#8217;t suddenly gotten better. The short answer was feedback: human preferences, expert judgment, and carefully designed rubrics have become more important than the raw internet text.</p><p>But all of that was still about shaping conversation. The model reads a prompt, generates an answer, and someone (human or system) says good or bad.</p><p>Now I want to describe the next leap, which is what this post is about. It is about what happens when you take the model out of that box and put it somewhere it can actually do things. Run commands. Click buttons. Fail at tasks and try again. Because here&#8217;s the thing nobody says clearly enough:</p><blockquote><p><em><strong>The moment a model can take actions, you can train it on experience.</strong></em></p></blockquote><p>That&#8217;s a different kind of signal entirely. And it&#8217;s where the next wave of improvement is coming from. The big picture is actually pretty intuitive once you see it. So let me try to lay it out.</p><div><hr></div><h2>Why environments matter more than prompts</h2><p>A text-only chatbot is trained on what good answers look like. You show it examples of good writing, good explanations, good code, and it learns to produce more of the same. The feedback is essentially aesthetic &#8212; does this look right? Does it sound right?</p><p>An agent in an environment is trained on what success looks like. It can run a command and see if the command worked. 
It can edit a file, rerun the tests, and find out whether the bug is actually fixed. The feedback isn&#8217;t aesthetic anymore; it&#8217;s functional. Did the thing actually work?</p><p>That distinction is everything. It&#8217;s precisely the reason the AI industry is suddenly obsessed with containerized benchmarks and tool-use evaluations. Not because benchmarks are fun, but because testable environments are training substrates. They produce exactly the &#8220;how to do things&#8221; data that the internet never had a reason to create.</p><p>Let me walk you through a few concrete examples, because this is one of those things that&#8217;s much easier to understand with specifics.</p><h2>Terminal tasks: real work, packaged and testable</h2><p>If you want to see what the cutting edge of agent evaluation looks like, Terminal-Bench 2.0 is a good place to start. It&#8217;s a benchmark of <a href="https://arxiv.org/abs/2601.11868">89 tasks set in real terminal environments</a>, inspired by the kind of work actual engineers and sysadmins do. Set up a server, configure a database, debug a networking issue, that sort of thing.</p><p>What makes it interesting for our story is that each task comes with <a href="https://arxiv.org/abs/2601.11868">comprehensive tests to verify the final system state</a>. The grading is automated. Either the server is configured correctly or it isn&#8217;t. Either DNS resolves or it doesn&#8217;t.</p><p><a href="https://snorkel.ai/research-paper/terminal-bench-benchmarking-agents-on-hard-realistic-tasks-in-command-line-interfaces/">Frontier models and agents scored under 65% overall</a> on these tasks, which tells you we&#8217;re not in &#8220;solved problem&#8221; territory. 
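</p><p><em>To make &#8220;tests as reward&#8221; concrete, here&#8217;s a toy sketch in Python. The task, agent, and check names are invented for illustration; this is not Terminal-Bench&#8217;s actual harness.</em></p>

```python
# Toy sketch: a testable environment doubles as a training substrate.
# All names here are hypothetical stand-ins, not a real benchmark harness.

class ConfigTask:
    """Stand-in for a terminal task: reach a target system state."""
    max_steps = 3

    def reset(self):
        self.state = {"dns_resolves": False}
        return dict(self.state)

    def step(self, action):
        if action == "fix_dns_config":
            self.state["dns_resolves"] = True
        return dict(self.state)

    def verify(self, state):
        # Automated grading: either DNS resolves or it doesn't.
        return state["dns_resolves"]

class ScriptedAgent:
    """Stand-in for a model choosing shell commands."""
    def act(self, state):
        return "fix_dns_config"

def run_episode(agent, task):
    state = task.reset()
    trajectory = []
    for _ in range(task.max_steps):
        action = agent.act(state)
        state = task.step(action)
        trajectory.append((action, dict(state)))
    # The verification step IS the reward function: pass/fail, no human grader.
    reward = 1.0 if task.verify(state) else 0.0
    return trajectory, reward

trajectory, reward = run_episode(ScriptedAgent(), ConfigTask())
```

<p>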
And here&#8217;s a number that still makes me do a double-take: in extreme cases, <a href="https://arxiv.org/html/2601.11868v1">agents ran for up to two hours, made hundreds of model calls, and burned through nearly 100 million tokens</a> on a single task.</p><p>A hundred million tokens on one task. That&#8217;s not &#8220;write a nice answer.&#8221; That&#8217;s &#8220;keep going until the job is actually done, or give up trying.&#8221; The long-horizon, keep-banging-on-it quality of the work is what makes it both hard and useful as a training signal.</p><h2>Code tasks: unit tests as the reward function</h2><p>If terminal tasks are &#8220;ops work,&#8221; SWE-bench is &#8220;engineering.&#8221;</p><p>The idea behind <a href="https://openai.com/index/introducing-swe-bench-verified/">SWE-bench Verified</a> is beautifully simple: take real GitHub issues from real open-source projects, with real test suites, and ask the model to fix them. <a href="https://www.swebench.com/SWE-bench/faq/">The benchmark includes 2,294 instances in the full set and 500 in the human-validated &#8220;Verified&#8221; subset.</a></p><p>Why does this matter for the &#8220;getting smarter&#8221; story? Because unit tests are an automatic reward function. The model generates a patch. The tests run. If they pass, that trajectory gets reinforced. If they fail, it gets discouraged. No human needed in the loop.</p><p>This is how AI coding starts to look less like generating text and more like training a policy. The model isn&#8217;t just producing code that looks plausible. It&#8217;s producing code that works, and learning from the differences.</p><h2>Web tasks: harder than you&#8217;d think</h2><p>At this point you might assume that if a model can use a terminal and write code, surely it can navigate a website. Click some buttons, fill out some forms, and find some information. 
Not so fast.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!X4hF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25f591b1-ab38-4ad1-8dbc-a2e93987ec21_764x240.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!X4hF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25f591b1-ab38-4ad1-8dbc-a2e93987ec21_764x240.png 424w, https://substackcdn.com/image/fetch/$s_!X4hF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25f591b1-ab38-4ad1-8dbc-a2e93987ec21_764x240.png 848w, https://substackcdn.com/image/fetch/$s_!X4hF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25f591b1-ab38-4ad1-8dbc-a2e93987ec21_764x240.png 1272w, https://substackcdn.com/image/fetch/$s_!X4hF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25f591b1-ab38-4ad1-8dbc-a2e93987ec21_764x240.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!X4hF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25f591b1-ab38-4ad1-8dbc-a2e93987ec21_764x240.png" width="764" height="240" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/25f591b1-ab38-4ad1-8dbc-a2e93987ec21_764x240.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:240,&quot;width&quot;:764,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:34737,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://amit.ampup.ai/i/192394365?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25f591b1-ab38-4ad1-8dbc-a2e93987ec21_764x240.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!X4hF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25f591b1-ab38-4ad1-8dbc-a2e93987ec21_764x240.png 424w, https://substackcdn.com/image/fetch/$s_!X4hF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25f591b1-ab38-4ad1-8dbc-a2e93987ec21_764x240.png 848w, https://substackcdn.com/image/fetch/$s_!X4hF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25f591b1-ab38-4ad1-8dbc-a2e93987ec21_764x240.png 1272w, https://substackcdn.com/image/fetch/$s_!X4hF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25f591b1-ab38-4ad1-8dbc-a2e93987ec21_764x240.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>That five-to-one gap on web navigation is a useful gut-check. Agent doesn&#8217;t mean solved.  
Navigating a real website requires spatial reasoning, state tracking, and knowing what a page means even when the layout changes. Current models still struggle with all of that.</p><p>But that gap is also an opportunity. Every failed web task is a training signal. Every successful one is a demonstration of what good behavior looks like. The gap between 14% and 78% is where the next round of improvement will come from.</p><h2>RL for tool use: learning to orchestrate</h2><p>Once a model can take actions in an environment, the reinforcement learning framing becomes natural. The state is what the agent sees: terminal output, web page content, tool responses. The actions are what it does: run a command, click a button, edit a file. The reward is whether the task succeeded.</p><p><a href="https://www.bespokelabs.ai/blog/improving-multi-turn-tool-use-with-reinforcement-learning">Bespoke Labs published a nice write-up</a> on using RL to improve multi-turn tool use, showing meaningful gains on a tool-use benchmark with a relatively small amount of data.</p><p>The specific numbers matter less than the pattern: put the model in an environment where success can be checked, then let it learn by trial and error.</p><p>There&#8217;s a catch, though, and it&#8217;s important to address it. Without carefully designed rewards and constraints, RL can go sideways fast. The model finds weird shortcuts, games the metric, or explores aimlessly. The trick is not &#8220;do RL.&#8221; The trick is &#8220;do RL in environments where success is well-defined and the rewards don&#8217;t have loopholes.&#8221; That&#8217;s hard to get right, and it&#8217;s a big part of why this stuff is progressing steadily rather than overnight.</p><h2>Test-time compute: the &#8220;think harder&#8221; knob</h2><p>Here&#8217;s one more piece that ties together everything from Part 1 and Part 2.</p><p>You can make a model perform better by letting it spend more compute when answering a hard question. 
Generate several candidate answers. Run checks against each one. Backtrack if something looks off. Pick the best result.</p><p><a href="https://openai.com/index/learning-to-reason-with-llms/">OpenAI has been explicit</a> that their reasoning models improve with more time spent thinking at inference. It&#8217;s like the difference between blurting out the first thing that comes to mind versus pausing, working through the problem, and double-checking before you commit.</p><p>In product terms, this means: if the question is easy, respond immediately. If it&#8217;s hard, slow down and be careful. That single design decision is responsible for a lot of the &#8220;it suddenly got way smarter&#8221; feeling people have been reporting.</p><h2>Memory, Part 2: the missing piece for real work</h2><p>In Part 1, I described memory as personalization. The system remembers your preferences and facts about your life. That&#8217;s useful for casual conversations and everyday questions. But for agents doing real work, memory becomes something different. It becomes &#8220;state&#8221;.</p><p>If you want an agent that can help you with a project over days or weeks, it needs to remember: what files it touched last time, what constraints you care about, what the plan was, what already failed. Without that continuity, every conversation starts from zero, and the agent is useless for anything that takes longer than a single chat session.</p><p>This is why memory is shipping across every major platform right now: not as a nice-to-have, but as the thing that makes agents viable for actual work.</p><p><strong><a href="https://help.openai.com/en/articles/8590148-memory-faq">OpenAI - </a></strong><a href="https://help.openai.com/en/articles/8590148-memory-faq">ChatGPT</a></p><p>Saved memories + chat history reference. 
Temporary Chat mode available for sessions you don&#8217;t want stored.</p><p><strong><a href="https://blog.google/products-and-platforms/products/gemini/temporary-chats-privacy-controls/">Google - </a></strong><a href="https://blog.google/products-and-platforms/products/gemini/temporary-chats-privacy-controls/">Gemini</a></p><p>Temporary Chats with their own retention policies. Google&#8217;s take on the same tradeoffs.</p><p><strong><a href="https://www.theverge.com/news/776827/anthropic-claude-ai-memory-upgrade-team-enterprise">Anthropic -</a></strong><a href="https://www.theverge.com/news/776827/anthropic-claude-ai-memory-upgrade-team-enterprise"> Claude</a></p><p>Optional, editable memory with incognito mode. For developers: an explicit memory tool agents can read and write across sessions.</p><p>Under the hood, all of these systems work roughly the same way: extract interesting things from the conversation, store them, retrieve relevant ones later, and inject them into context. It&#8217;s not fundamentally different from a developer keeping notes in a project README. But it&#8217;s the thing that transforms a stateless function call into something that feels like a collaborator.</p><h2>Robotics: same loop, harder data</h2><p>Everything so far has been digital: terminals, code repos, web pages. The environments are virtual, and running another thousand trials costs electricity but not much else.</p><p>Robotics is the same intellectual story, but with a crucial practical difference: the environment is the physical world, and data is expensive to collect.</p><p>You can&#8217;t just spin up another container. You need a physical robot, a physical workspace, and real time. This is why robotics research is obsessed with datasets and demonstration pipelines. 
The bottleneck isn&#8217;t the algorithm, it&#8217;s the data.</p><p>A few examples of how the field is tackling this:</p><p><strong>Open X-Embodiment</strong> (the RT-X project) assembled a dataset of <a href="https://robotics-transformer-x.github.io/">over one million real robot trajectories spanning 22 different robot embodiments</a>. This is the robotics equivalent of &#8220;scale the dataset and train a generalist&#8221; &#8212; same instinct as large language models, just applied to physical manipulation.</p><p><strong>RoboCat</strong>, from DeepMind, made the self-improvement loop explicit. <a href="https://deepmind.google/blog/robocat-a-self-improving-robotic-agent/">Their blog describes a five-step process</a>: collect 100 to 1,000 demonstrations of a new task, fine-tune the model, let it practice about 10,000 times to generate more data, add that data back into training, and train the next version. That is reinforcement learning in the most literal, tangible sense: practice, measure, improve, repeat.</p><p><strong>RT-2</strong>, also from DeepMind, took a different angle. <a href="https://deepmind.google/blog/rt-2-new-model-translates-vision-and-language-into-action/">They trained a model that learns from both web data and robotics data</a>, essentially asking: can the language understanding you get from reading the internet transfer to controlling a robot arm? <a href="https://robotics-transformer2.github.io/assets/rt2.pdf">The robot demonstration data was collected with 13 robots over 17 months</a> in an office kitchen environment. Seventeen months of 13 robots making lunch. 
The patience required is staggering, and it&#8217;s also why progress in robotics feels slower than progress in software &#8212; the feedback loop just takes longer to close.</p><div><hr></div><h2>The loop that makes people say &#8220;exponential&#8221;</h2><p>Now step back and look at the whole picture.</p><p>When people say AI progress is accelerating, what they&#8217;re reacting to is the closing of a feedback loop:</p><ol><li><p>The model can take actions - use a terminal, write and run code, navigate a web page, and move a robot arm.</p></li><li><p>Success can be measured automatically - tests pass, tasks complete, and rubrics are satisfied. No human needed in the loop.</p></li><li><p>Successful trajectories become training data. The model learns from what actually worked.</p></li><li><p>Better training data produces a better model, which generates even better trajectories on the next round.</p></li><li><p>Go back to step 1.</p></li></ol><p>If that loop gets cheap, fast, and mostly automated, progress feels non-linear, because it is. Each generation bootstraps the next one.</p><p>There are real brakes on this, and they&#8217;re worth being honest about. Evaluation is hard to get right; bad metrics produce bad models. Reward hacking is real. Models find creative ways to game benchmarks without actually being useful. Safety constraints add friction, intentionally. And in the physical world, everything moves at the speed of atoms, not bits.</p><p>But the direction is clear. We&#8217;re going from:</p><blockquote><p><strong>Models that talk</strong> &#8594; <strong>Systems that act</strong> &#8594; <strong>Systems that learn from acting</strong></p></blockquote><p>And that&#8217;s the deepest reason they keep getting smarter.</p><div><hr></div><h2>What this looks like in practice: sales as an RL environment</h2><p>Everything I&#8217;ve described so far shares a structure. There&#8217;s an environment. The agent takes actions. 
Outcomes are measurable. The loop closes.</p><p>The pattern holds outside software too. Take sales. A sales call is an environment. The rep asks discovery questions, handles objections, creates urgency, tries to close. The outcome is measurable: did the deal advance, did the stakeholder engage? And you don&#8217;t need a human to label it; the CRM and the next call tell you.</p><p><strong>This is exactly the loop we&#8217;re building at <a href="http://ampup.ai">AmpUp</a>. Here&#8217;s how it maps:</strong></p><p><strong>Observe</strong>. Every call gets processed through LLM extraction. We score behaviors from a dynamic ontology of hundreds of possible patterns, customized per customer and industry. We extract deal intelligence, stakeholder signals, coaching moments, and reusable techniques. Every interaction produces structured data, not just text.</p><p><strong>Recommend</strong>. Before the next call, the system pulls from four memory systems. What happened on prior calls. What the org has learned across hundreds of deals. What playbook moves work at this stage. What this specific rep needs to improve. After the call, it produces an updated deal state, coaching on specific moments, and the single highest-impact next action.</p><p><strong>Measure</strong>. Did the rep follow the advice? Did the deal move? Did the behavior improve? Every outcome gets linked back to the recommendation that preceded it. Revenue lift analysis is the reward signal. Which behaviors actually correlate with deals moving forward?</p><p><strong>Learn</strong>. Behaviors that predict wins get promoted in coaching. Strategies that don&#8217;t move deals get deprioritized. The playbook grows with every call. Top performers&#8217; best questions, objection handles, and closing techniques get extracted, cleaned up, and made available to every rep on the team. When a senior seller leaves, their expertise stays.</p><p>The insight is the same one that makes SWE-bench work for code. 
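</p><p><em>The Measure and Learn steps can be sketched in a few lines of Python. This is a toy illustration with invented behavior names and toy data, not our actual pipeline:</em></p>

```python
# Toy sketch of "measure -> learn": rank behaviors by how often calls
# that included them advanced the deal. All names are illustrative.

from collections import defaultdict

def rank_behaviors(calls):
    """calls: list of (behaviors_on_call, deal_advanced) pairs."""
    stats = defaultdict(lambda: [0, 0])  # behavior -> [wins, total]
    for behaviors, advanced in calls:
        for b in behaviors:
            stats[b][1] += 1
            if advanced:
                stats[b][0] += 1
    # Promote behaviors that correlate with deals moving forward.
    return sorted(stats, key=lambda b: stats[b][0] / stats[b][1], reverse=True)

calls = [
    (["discovery_questions", "next_step_close"], True),
    (["feature_dump"], False),
    (["discovery_questions"], True),
]
ranking = rank_behaviors(calls)
# Deal advancement is the only label used; no human annotation required.
```

<p>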
You don&#8217;t need humans in the loop to generate the reward signal. Deal advancement is the unit test. The CRM is the test runner.</p><p>The models aren&#8217;t getting smarter by reading more sales books. They&#8217;re getting smarter by influencing the sales behavior of real sellers. Observing real calls, measuring what moves deals, and building organizational memory that compounds with every interaction.</p><div><hr></div><h2 style="text-align: center;"><em>The 30-second version, if you read both parts</em></h2><p style="text-align: center;"><em>Part 1: The internet is the base layer. The real training signal is now human feedback, expert judgment, and structured preferences - a multi-billion-dollar industry. Memory adds a personal layer on top.</em></p><p style="text-align: center;"><em>Part 2: When models can use tools and operate in environments with testable outcomes, their successes and failures become training data. That creates a self-reinforcing loop. The same story is playing out in code, web tasks, and robotics &#8212; just at different speeds.</em></p><p style="text-align: center;"><em>The models aren&#8217;t getting smarter by reading more Reddit.</em></p><p style="text-align: center;"><em>They&#8217;re getting smarter by doing things, failing, and trying again with increasingly good taste about what &#8220;good&#8221; looks like.</em></p><p style="text-align: center;"><em>Not so different from the rest of us, really.</em></p><div><hr></div><p><em>This is Part 2 of a two-part series. 
If you missed it, <a href="https://amit.ampup.ai/p/how-llms-keep-getting-smarter-the">Part 1: How LLMs Keep Getting Smarter</a> covers RLHF, the alignment supply chain, and why expert data is worth billions.</em></p>]]></content:encoded></item><item><title><![CDATA[How LLMs Keep Getting Smarter: The Feedback Economy]]></title><description><![CDATA[Why the most valuable resource in AI isn't compute or data &#8212; it's people who can tell the LLM when it's wrong]]></description><link>https://amit.ampup.ai/p/how-llms-keep-getting-smarter-the</link><guid isPermaLink="false">https://amit.ampup.ai/p/how-llms-keep-getting-smarter-the</guid><dc:creator><![CDATA[Amit Prakash]]></dc:creator><pubDate>Sat, 28 Mar 2026 17:06:45 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d14e8532-44cb-4257-9b8e-428dce5fae45_1536x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>A few days ago I was at lunch with a friend. He&#8217;s an engineer (not an AI person) and he asked a question I&#8217;ve been hearing a lot lately:</p><p><em>&#8220;How does this thing keep getting smarter when the internet is full of nonsense?&#8221;</em></p><p>It&#8217;s a great question. 
And if you go looking for the answer, you&#8217;ll find yourself neck-deep in research papers about reinforcement learning from human feedback, constitutional AI, reward modeling, and a dozen other terms, each of which demands its own deep dive. I end up spending a lot of time in these rabbit holes in order to keep an up-to-date understanding of where the tech is and where it&#8217;s going.</p><p>This post is my attempt to climb back out and explain what I found.</p><div><hr></div><h3>A quick recap</h3><p>In 2023, I wrote a <a href="https://www.thoughtspot.com/data-trends/ai/what-is-transformer-architecture-chatgpt">four-part series</a> breaking down how ChatGPT works: Transformers, embeddings, attention, the whole engine. If you haven&#8217;t read it, the short version is: these models learn to predict the next word by reading enormous amounts of text, and the Transformer architecture is what makes that work at scale.</p><p>That story hasn&#8217;t changed. Transformers are still the engine. But the engine isn&#8217;t the interesting part anymore. What&#8217;s changed dramatically is everything around the engine. How the models get shaped after training, who&#8217;s doing the shaping, and why the whole operation now involves PhD-level experts getting paid $125 an hour to tell a model it&#8217;s wrong.</p><p>My friend didn&#8217;t need a lecture about attention heads. He needed someone to explain the new stuff. So that&#8217;s what this blog tries to do.</p><div><hr></div><h3>From pre-training to post-training</h3><p>If you read my 2023 series, you can skip this. If you didn&#8217;t, here&#8217;s the gist: a base language model is trained to predict the next word. You then feed it the entire internet, let it read billions of sentences, and eventually it builds a surprisingly rich internal model of how language works. Facts, style, reasoning patterns: it absorbs all of it, along with plenty of garbage, because the internet is full of it.</p><p>That&#8217;s pretraining. 
It gives the model raw capability.</p><p>Then comes a second phase, which is post-training. It&#8217;s where you take that capable-but-chaotic base model and shape it into something that behaves like an assistant, something that follows instructions, is helpful, avoids saying terrible things, and generally acts like a product instead of a weird autocomplete engine.</p><p>In 2023, I treated post-training as a footnote. But it&#8217;s 2026, and post-training is arguably the story.</p><h2>Three dials, and only one of them is obvious</h2><p>When my friend says that LLMs are getting smarter, he&#8217;s reacting to improvements that come from turning three dials. Most people only know about the first one.</p><p><strong>Dial 1: Throw more compute at training.</strong> This is the obvious one: bigger models, more GPUs, and longer training runs. But the nuance people miss is that the compute isn&#8217;t just going into pretraining anymore. A serious chunk now goes into reinforcement learning after pretraining. <a href="https://openai.com/index/learning-to-reason-with-llms/">OpenAI has been unusually direct</a> about the fact that their reasoning models improve with more &#8220;train-time compute&#8221;, and they&#8217;re talking about the RL phase, not just reading more on the internet.</p><p><strong>Dial 2: Better data, not more internet.</strong> This is the part most people miss entirely, and it&#8217;s where the money is flowing. I&#8217;ll spend a lot of time talking about this in this blog.</p><p><strong>Dial 3: Think harder at inference time.</strong> Even without changing the model, you can get better answers by letting the system spend more time on harder problems such as generating multiple candidates, checking its own work, backtracking when something doesn&#8217;t look right. 
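</p><p><em>The simplest version of that idea is best-of-n sampling: draw several candidates and keep the one that scores best under a checker. Here&#8217;s a toy Python sketch; the generator and scorer are invented stand-ins for a model and a verifier, not any lab&#8217;s actual implementation.</em></p>

```python
# Toy best-of-n sketch of "think harder at inference time".
# generate_candidate and score are hypothetical stand-ins for a
# model's sampler and a verifier/checker.

import random

def generate_candidate(question, rng):
    # Stand-in for sampling one answer from a model.
    return question * rng.choice([1, 2, 3])

def score(question, answer):
    # Stand-in for a checker: closer to the "true" answer (2x) is better.
    return -abs(answer - question * 2)

def best_of_n(question, n=8, seed=0):
    # More samples = more inference compute = better odds that at least
    # one candidate passes the check. That's the "think harder" dial.
    rng = random.Random(seed)
    candidates = [generate_candidate(question, rng) for _ in range(n)]
    return max(candidates, key=lambda a: score(question, a))
```

<p>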
<a href="https://openai.com/index/learning-to-reason-with-llms/">OpenAI calls this &#8220;test-time compute&#8221;</a> and describes performance improving as the model is given more time to think. If the question is easy, answer fast. If it&#8217;s hard, slow down and verify. That single product decision changes how smart the system feels to a user.</p><p>Keep these three dials in your head. Everything else described in this blog is basically an implementation detail for Dial 2.</p><h2>RLHF: turning human taste into a training signal</h2><p>Most people have a vague sense that RLHF (Reinforcement Learning from Human Feedback) exists. Few have internalized what it does and why it matters so much.</p><p>Here&#8217;s the setup. You have a base model that can generate text. You show it a prompt, and it produces several possible answers. Then you ask a human: Which of these is better? The human picks one. Do this thousands of times, and you have a dataset of human preferences, not correct answers in a textbook sense, but a record of what humans actually prefer when they read these outputs.</p><p>Now here&#8217;s the clever part: once you have enough of these preference rankings, you can train a separate model that predicts what humans would prefer. And once you have that preference model, you can use it to train the actual language model at scale, without a human in the loop for every single example.</p><p>The canonical description of this pipeline comes from OpenAI&#8217;s InstructGPT work. The paper lays out three steps: supervised fine-tuning (humans write good answers), preference data collection (humans rank model outputs), and optimization (train the model to produce the kind of outputs humans prefer). <a href="https://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf">They started with 40 contractors</a> doing the labeling.</p><blockquote><p><strong>Forty people. That&#8217;s it. 
And from that modest beginning, they got a result that should permanently change your intuition about how these systems work:</strong></p></blockquote><p><a href="https://proceedings.neurips.cc/paper_files/paper/2022/file/b1efde53be364a73914f58805a001731-Paper-Conference.pdf">A 1.3-billion-parameter model, after alignment, was preferred by humans over the 175-billion-parameter base GPT-3.</a> A model roughly 130 times smaller, trained with feedback, beat the giant model trained without it, simply because it had better post-training data.</p><p>This is the single most important intuition in modern AI: pretraining gives a model capability, but post-training gives it behavior. And behavior is what people actually experience.</p><h2>This became an industry overnight</h2><p>Let me walk you through a few examples, because I think economics tells the story better than any technical paper.</p><p><strong>Surge AI</strong> went from being a relatively unknown data labeling company to, according to <a href="https://www.reuters.com/business/scale-ais-bigger-rival-surge-ai-seeks-up-1-billion-capital-raise-sources-say-2025-07-01/">Reuters reporting</a>, generating over $1 billion in revenue and seeking to raise up to $1 billion at a valuation above $15 billion. That&#8217;s the market telling you exactly how valuable high-quality training signals have become.</p><p><strong>Scale AI</strong> had a cautionary moment. When Meta took a large stake in Scale, <a href="https://www.reuters.com/business/google-scale-ais-largest-customer-plans-split-after-meta-deal-sources-say-2025-06-13/">Google reportedly planned to cut ties</a>, largely due to concerns that the Meta involvement could expose proprietary AI plans. <a href="https://techcrunch.com/2025/06/18/openai-drops-scale-ai-as-a-data-provider-following-meta-deal/">OpenAI followed suit</a>, which means: the prompts, rubrics, and failure cases used to train a model are now treated as strategic intellectual property. 
Your training data pipeline is your secret sauce, and you don&#8217;t want it anywhere near a competitor.</p><p><strong>Handshake</strong>, a college recruiting platform, pivoted into data labeling and, according to <a href="https://www.lennysnewsletter.com/p/inside-handshake-garrett-lord">Lenny&#8217;s Newsletter</a>, was on track to blow past $100 million in annual recurring revenue in 12 months. Why? Because they had a network of domain experts: something the AI labs desperately needed. <a href="https://www.businessinsider.com/handshake-ceo-ai-training-evolving-generalists-to-stem-experts-pay-2025-7">Handshake&#8217;s CEO told Business Insider</a> that the industry is shifting from generalist labelers to specialized STEM experts, with people earning $100 to $125+ per hour on the platform.</p><p><strong>Mercor</strong> raised <a href="https://www.mercor.com/blog/series-c">$350 million at a $10 billion valuation</a> doing something similar: connecting AI labs with domain experts for training data.</p><p><a href="https://www.theverge.com/cs/features/831818/ai-mercor-handshake-scale-surge-staffing-companies">The Verge wrote a good overview</a> tying all of these companies together, and the picture it paints is striking: the most valuable data in AI is no longer text scraped from the web. It&#8217;s structured expert judgment. If this feels like a weird twist in capitalism, that&#8217;s because it is. The scarce resource in AI is no longer compute or data in the traditional sense. It&#8217;s people who can tell the model when it&#8217;s wrong and explain why.</p><h2>So why doesn&#8217;t it become &#8220;garbage in, garbage out&#8221;?</h2><p>This is the question my friend was really asking, and by this point, you can probably see the answer.</p><p>The internet is the base layer. It gives the model vocabulary, world knowledge, and a general sense of how language works. 
But the internet is not what shapes the product you interact with. That layer comes later.</p><p>The product you actually experience is shaped by feedback: human preferences, expert rubrics, test results, and carefully designed reward signals. That&#8217;s why a model trained on the whole internet can produce something that feels thoughtful and careful instead of sounding like a random Reddit thread.</p><p>A simple way to think about it is: <em><strong>Pretraining is the clay. Post-training is the sculpting.</strong></em></p><h2>The internet taught LLMs what things are, not how to do things</h2><p>There&#8217;s a deeper point here that I think most people haven&#8217;t considered. Starting in the late &#8216;90s, we created massive incentives for people to put content online through ads, social capital, and commercial intent. If you had something to say, or something to sell, or something to promote, the internet gave you a reason to publish it. And all of that content became the training data for the first generation of LLMs.</p><p>But think about what kind of knowledge the incentive structure produced. It was overwhelmingly descriptive. What is the capital of France? What year did World War II end? What are the symptoms of diabetes? How does photosynthesis work? The internet is spectacularly good at declarative knowledge &#8212; facts, explanations, opinions, descriptions of how the world is.</p><p>What&#8217;s mostly missing is procedural knowledge: how you actually do things. How do you do your taxes when you&#8217;re a freelancer with income in two states? How do you build a marketing campaign for a B2B SaaS product with a $50K budget? How does a doctor actually work through a differential diagnosis when a patient presents with vague symptoms? Even when people write &#8220;how-to&#8221; guides, they tend to stay abstract. 
The messy, step-by-step, judgment-heavy process of doing real work rarely makes it onto a blog post, because there&#8217;s no incentive to publish it, and it&#8217;s hard to articulate anyway.</p><p>This is why LLMs are remarkably good at expository writing (essays, summaries, explanations) but have been much less impressive at actually doing things the way humans do them. The training data was lopsided.</p><h3>What&#8217;s starting to change</h3><p>What&#8217;s changing is that we&#8217;re bringing LLMs into the flow of real work: coding, data analysis, customer support, research. They&#8217;re starting to observe and participate in procedural tasks. Every time an agent writes code and runs it, debugs an error, navigates a workflow, or helps someone through a multi-step process, that interaction is generating exactly the kind of procedural data that was absent from the original internet. Slowly, that data is building up. And it&#8217;s going to make these systems dramatically more useful.</p><p>It&#8217;s also, frankly, a little scary from an employment perspective. </p><blockquote><p><strong>If the first generation of LLMs could write like us, the next generation is learning to work like us. My default view is that it&#8217;ll work out better, as these tools tend to make individuals more capable rather than replacing them entirely, but it&#8217;s a transition worth thinking about honestly.</strong></p></blockquote><p>This also sets up Part 2 of this series, where we&#8217;ll look at how agents operating in real environments are generating this kind of procedural data at scale and why that changes the pace of improvement.</p><h2>Memory: the reason it feels like it&#8217;s learning <em>you</em></h2><p>Now let&#8217;s talk about the part that makes people say &#8220;it&#8217;s getting to know me.&#8221;</p><p>If you use ChatGPT, Claude, or Gemini regularly, you&#8217;ve probably noticed they remember things across conversations. 
Your preferences, facts about your life, things you&#8217;ve told them before. It feels like the model is learning.</p><p>What&#8217;s actually happening is less magical than it sounds and more practical than most people think.</p><p>Memory is almost never the model changing its weights. The underlying neural network itself is the same for everyone. What&#8217;s different is a per-user layer on top. A useful way to think about it: the model is an application, and memory is your personal config file.</p><p>In practice, memory systems tend to remember two categories of things:</p><p><strong>Preferences.</strong> Things like &#8220;use simple language,&#8221; &#8220;stop using em dashes,&#8221; or &#8220;when you link my LinkedIn in a post, make it sound natural, not like a press release.&#8221; These seem trivial, but they&#8217;re incredibly high-leverage because they remove friction from every future conversation.</p><p><strong>Personal context. </strong>Information that changes how responses should be tailored. Things like &#8220;I have a 13-year-old daughter who&#8217;s interested in coding&#8221; or &#8220;I asked about how to prep her for USACO.&#8221; A single fact like that changes how the assistant answers dozens of future questions, from programming language recommendations to summer camp suggestions.</p><p>Under the hood, most memory systems work roughly the same way: extract candidate memories from conversation, store them as structured text (sometimes with embeddings for retrieval), fetch relevant ones when you start a new chat, and inject them into the context. That&#8217;s it. Not consciousness, not a learning brain. Just retrieval plus context injection.</p><p>That said, memory is legitimately controversial. There&#8217;s a real tension between &#8220;this is useful&#8221; and &#8220;I didn&#8217;t realize it was keeping a file on me.&#8221; That tension is worth taking seriously. 
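</p><p>Mechanically, the loop described above is small. Here is a minimal sketch of the extract-store-retrieve-inject pattern; the word-overlap ranking is a crude stand-in for the embedding similarity real systems use, and the memory strings and prompt format are made up for illustration.</p>

```python
# Minimal memory layer: store facts about a user, retrieve the
# relevant ones, and inject them into the next prompt.

class MemoryStore:
    def __init__(self):
        self.memories = []  # plain strings extracted from past chats

    def add(self, fact):
        self.memories.append(fact)

    def retrieve(self, query, k=2):
        # Real systems rank by embedding similarity; word overlap is
        # the crudest possible proxy, used here to keep the sketch short.
        q = set(query.lower().split())
        ranked = sorted(self.memories,
                        key=lambda m: len(q & set(m.lower().split())),
                        reverse=True)
        return ranked[:k]

def build_prompt(store, user_message):
    # Inject the most relevant memories ahead of the user's message.
    context = "\n".join("- " + m for m in store.retrieve(user_message))
    return "Known about this user:\n" + context + "\n\nUser: " + user_message

store = MemoryStore()
store.add("has a 13-year-old daughter interested in coding")
store.add("prefers simple language and no jargon")
store.add("asked about USACO prep for daughter")

prompt = build_prompt(store, "What coding language should my daughter learn?")
assert "13-year-old daughter" in prompt
```

<p>Note that no weights change anywhere in that sketch. The per-user store is the whole trick: swap the store and the same base model becomes a different &#8220;personalized&#8221; assistant.</p><p>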
But if the goal is to understand why these systems feel like they&#8217;re improving over time, the framework is pretty clean:</p><p>Pretraining gives the model language.</p><p>Post-training gives it behavior.</p><p>Memory gives it you.</p><h2>Where this is headed</h2><p>Everything I&#8217;ve described so far is about shaping the model&#8217;s responses through feedback, expert data, and personal memory.</p><p>In Part 2, the story takes a turn. The model stops being just a chatbot. It becomes something that can act, run code, browse the web, use tools, operate in real environments. And when it can act, something interesting happens: its successes and failures become a new kind of training data.</p><p>That&#8217;s the feedback loop that has people using words like &#8220;exponential.&#8221;</p><p>We&#8217;ll get into it next.</p><div><hr></div><p><em>This is Part 1 of a two-part series. <a href="https://amit.ampup.ai/p/how-llms-went-from-chatbot-to-coworker">Part 2: How LLMs Went From Chatbot to Coworker </a>covers tool use, autonomous coding, web agents, robotics, and why the learning loop is about to get a lot faster.</em></p><div><hr></div><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://amit.ampup.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Amit&#8217;s Substack! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Quiet Extinction]]></title><description><![CDATA[We gave AI the apprentice seat. We forgot to save one for the humans.]]></description><link>https://amit.ampup.ai/p/the-quiet-extinction</link><guid isPermaLink="false">https://amit.ampup.ai/p/the-quiet-extinction</guid><dc:creator><![CDATA[Amit Prakash]]></dc:creator><pubDate>Fri, 06 Mar 2026 15:39:11 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/97238f8c-f901-4408-a23a-5c00558ffdce_1544x854.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Jeff Hawkins has a bleak way of putting our moment in perspective. In <em>A Thousand Brains</em>, he points out that every civilization is ephemeral &#8212; on the scale of universal time, the interval between a civilization&#8217;s invention of electromagnetic communication and its extinction is like the flash of a firefly. We appear, briefly, and we vanish. The only thing that outlasts us is what we successfully transmitted.</p><p>For the first time in human history, we have two kinds of heirs to teach. Biological ones &#8212; children, apprentices, the next generation of people who will carry what we built into a future we won&#8217;t see. And now digital ones: AI systems that are, in a very real sense, inheriting our accumulated knowledge, our judgment, our institutional memory.</p><p>We&#8217;ve spent thousands of years developing sophisticated machinery for teaching biological heirs. We invented schools, apprenticeships, residencies, and mentorship. 
We understood, at a bone level, that knowledge transmission requires time, struggle, failure, and consequence. You cannot hand someone a textbook and call them a surgeon.</p><p>But here&#8217;s the point: we are not thinking nearly carefully enough about what we&#8217;re teaching our digital heirs. And we are actively dismantling the machinery for teaching our biological ones &#8212; at the exact moment we need both.</p><div><hr></div><h3><strong>The Real Leverage: The Twenty Years Spent Behind the Tool</strong></h3><p>Here are some personal data points. Right now, with Claude Code, I can produce what would have taken me and ten engineers working together. I have twenty years behind me &#8212; ML teams at Google running fifty experiments a quarter against a ten-billion-dollar revenue base, CTO of ThoughtSpot scaling to a four-and-a-half-billion-dollar valuation, and now building <a href="http://ampup.ai">AmpUp</a>, a sales brain that learns from every customer interaction. With Claude Code, I am thinking harder than I ever have and producing more than I ever could. The leverage AI gives me is real, and it is extraordinary.</p><p>That is not a brag. It is the setup for the most uncomfortable question I know how to ask.</p><p>The tool is powerful in my hands because of everything that came before it. Twenty years of experiments, with successes like spotting the bug before it reached production, closing deals by multi-threading, and building architecture that scaled, and just as many failures: wrong decisions, models that broke in production, deals I should not have lost, products I should not have built. Twenty years of taking chances, some that paid off, many that didn&#8217;t. The AI amplifies what is already there. 
Every experiment I have run, every risk I have taken, every success and failure I have lived through, that is the substrate the tool is operating on.</p><blockquote><div class="pullquote"><p><em><strong>Take away the substrate and you do not get ten-engineer productivity. You get something that moves fast and breaks things in ways you do not notice until it is expensive.</strong></em></p></div></blockquote><p>So the question is not how powerful this tool can make a senior person. We are answering that in real time. The question is: are you building any path for someone to accumulate twenty years?</p><p>The answer, right now, is almost nobody, and the numbers are starting to show it.</p><p>PwC&#8217;s internal slides show entry-level audit hiring falling 39% by 2028. Across the Big Four, graduate job postings are down 44% year over year. In Big Tech, entry-level roles have been cut in half compared to pre-pandemic levels. Unemployment among recent college graduates has risen 50% since 2022 &#8212; while overall youth unemployment held flat.</p><p>The jobs did not disappear. They were never created. If one senior person with AI does what eleven used to do, you do not need ten juniors to support them. No one making these decisions is irrational &#8212; they are optimizing the metrics that exist. It is locally rational, company by company, quarter by quarter. It is collectively catastrophic, and we are deep into it.</p><div><hr></div><h3><strong>The Vanishing Apprentice: The Junior Wasn&#8217;t Cheap Labor. They Were the Curriculum.</strong></h3><p>Here is what gets lost in the efficiency calculation. The junior was not a cost to be optimized. 
The junior was a learning organism in a critical developmental window and the work they did, messy and imperfect as it was, served a purpose that had nothing to do with output.</p><p>The bad calls, the lost deals, the code that does not scale, the architecture decision that haunts a codebase for three years &#8212; that is not waste. That is the curriculum. Senior salespeople got good by being terrible junior salespeople first. Senior engineers got good by writing bad code for three years and having someone senior explain exactly why it was bad. Senior physicians got good by doing residencies, by seeing ten thousand patients under supervision, by making the hard call at 3am and being wrong sometimes and understanding why.</p><p>There is no other path we have ever found. Every apprenticeship model in every field requiring deep judgment is built on the same foundation: you must be allowed to fail consequentially, under guidance, long enough to develop instinct. </p><blockquote><p><em><strong>Instinct is not something you can download. It is something you build, slowly, through the specific kind of suffering that comes from caring about outcomes and getting them wrong.</strong></em></p></blockquote><p>If you think this is a sales and engineering problem, consider medicine. The oldest apprenticeship model we have &#8212; the one we made into law because we understood it was non-negotiable. Now imagine AI handles the diagnostic work that residents currently do. The pattern matching, the differential diagnosis, the routine interpretation. It may be better than a tired resident at 3am. Malpractice exposure goes down. Efficiency goes up. Run that logic for a decade. One day you need judgment &#8212; real judgment, built from ten thousand hours of supervised failure &#8212; and you look around the room, and the room is full of people who never got the reps.</p><p>But some are taking note. 
And what they&#8217;re doing is worth paying attention to.</p><div><hr></div><h3><strong>The Founder Who Said No and Built His Own Rules</strong></h3><p>A founder I know banned new grads from using AI for coding for their first three months. No exceptions. My first instinct was that it sounded reactionary &#8212; the kind of rule that comes from someone who learned on hard mode and wants everyone else to suffer the same way out of nostalgia for suffering.</p><p>Then I thought about it.</p><p>He was protecting the period where you are supposed to struggle, make errors, and learn from them. Where you go to the patient&#8217;s room ten times and look at the chart with a different lens until it all clicks into one cohesive story and the diagnosis becomes clear. Where you write the bad code and a senior engineer explains exactly why it is bad and you feel the specific embarrassment that means you will never make that mistake again.</p><p>That friction is not inefficiency. It is the actual learning, the curriculum. He felt it before he could fully articulate it, and he built a rule around the feeling.</p><p>And to be clear: AI can explain, suggest, and critique. It is a remarkable tutor in many ways. But without stakes &#8212; without a world that pushes back and lets things die &#8212; the learning stays brittle. Understanding without consequence does not produce instinct. It produces confidence without depth, which is in some ways more dangerous than ignorance.</p><div><hr></div><h3><strong>What Else Gets Lost When the Junior Leaves the Room</strong></h3><p>There is a second problem hiding inside the first one, subtler and in some ways more dangerous.</p><p>Junior people were not just learning. They were bringing a perspective the organization hadn&#8217;t yet homogenized. The analyst who asks the question nobody else asked &#8212; not because they are trying to be clever, but because they genuinely don&#8217;t know it&#8217;s not supposed to be asked. 
The SDR who tries the unconventional pitch because they haven&#8217;t learned the conventions yet. The engineer who builds it the wrong way because nobody told them the right way, and occasionally the wrong way turns out to be better.</p><p>The junior&#8217;s outsider perspective &#8212; still forming, not yet shaped by institutional gravity &#8212; is something you cannot simulate by adjusting a temperature parameter. And you are eliminating it at exactly the moment you most need people who can see what the system cannot see about itself.</p><p>And here is the inversion that should keep you up at night: the way AI gets smarter is exactly the way humans get smarter. Exposure to real problems, action, consequence, correction. Which means AI is currently sitting in the apprentice seat. Doing the entry-level work. Accumulating the reps. Getting better every quarter. The human junior is watching from outside the building, updating their LinkedIn, wondering what they did wrong.</p><p>In reality, they did not do anything wrong. The seat was quietly reassigned while everyone was looking at something else.</p><div><hr></div><h3><strong>The Common Thread: AI and the Apprentice Learn Exactly the Same Way</strong></h3><p>There is a principle underneath all of this that applies equally to humans and to AI, and it is the one I keep returning to.</p><p>Learning requires consequence. Not information. Not access. Consequence.</p><p>But consequence does not mean pain. It means feedback your mental model did not predict. The junior engineer who ships the feature and watches it break under real load learns something no amount of code review could have taught them &#8212; not because it hurt, but because the gap between their mental model and reality was suddenly, undeniably visible. That is the consequence that matters. It is epistemic, not emotional.</p><p>AlphaGo did not get good by reading about Go. 
It played millions of games against itself, in environments where moves produced outcomes and outcomes updated the model. Your junior SDR does not get good from sales methodology PDFs. They get good from losing deals and sitting with the question of why &#8212; because the loss surfaces the assumption they did not know they were making.</p><p>This is equally true for the AI models that are actually getting better. The models making the most interesting progress are not the ones with more data. They are the ones in real environments &#8212; terminal bench, strategy games, physics simulators &#8212; taking actions, observing consequences, updating. That is not a coincidence. The architecture of learning is the same whether the learner is biological or digital.</p><p>Both broken apprenticeships have the same root and the same solution.</p><div><hr></div><h3><strong>What Founders Can Actually Do: Moving the Curriculum, Not Losing It</strong></h3><p>The solution is further along than most people realize: the curriculum does not have to disappear. It has to move. And the tools to move it already exist.</p><p><em><strong>Build deliberate simulation environments.</strong></em> AI roleplay for sales training was the crude first version &#8212; stiff, unconvincing, easy to game. But the technology has moved fast. A real simulator generates varied, adversarial, realistic scenarios; gives the learner an actual action space where choices matter; produces consequences (the deal dies, the system breaks, the stakeholder turns hostile); and delivers a structured debrief so you understand not just what happened but what you missed and why.</p><p>You can now build that for the entire buying committee behind the closed doors you will never actually be in. The CFO who always kills it on ROI. The champion who goes dark in week three for reasons you will not understand until it is too late. The procurement officer who appears in the final hour with requirements nobody mentioned. 
The competitor who shows up sideways in the last conversation before the decision. You can give a junior rep five hundred consequential reps before they ever touch a real account. The same logic applies to engineering: simulating production incidents, performance constraints, feature requests with hidden tradeoffs, systems that work locally and fail at scale.</p><p>The instinct that used to take three years of real experience can start forming in months of deliberate simulation. Fidelity is everything: A simulator that&#8217;s too easy just teaches you to win easy simulations. Make it uncomfortable. Make it honest. This is what we are doing at AmpUp for sellers.</p><p>But simulation alone is not enough. We also need structural commitments: protected junior roles that exist specifically for development, not just output. Funded apprenticeships. Policy incentives that make it rational for companies to invest in the next generation even when the quarterly math says not to. The curriculum can move into simulation, but it cannot live there entirely. At some point, the reps have to be real, and someone has to be willing to absorb the cost of letting a junior be junior.</p><p><em><strong>Capture the junior&#8217;s arc, not just the seniors&#8217; wisdom.</strong></em><strong> </strong>The workflows being automated right now&#8212;and the judgment being encoded into your prompts and processes&#8212;come almost entirely from your most senior people. The junior&#8217;s arc isn&#8217;t making it in&#8212;not into your systems, not into your processes, not into the organizational layer that will outlast any individual.</p><p>Think of it like this: a child raised only on the wisdom of elders, never allowed to fall down and figure out how to get up, does not actually inherit the full culture. 
It inherits a curated version of it&#8212;confident, capable within the known distribution, and missing something it does not know it is missing: the unknown unknowns.</p><p>That is the internal AI layer that most companies are currently building. Unlike a foundation model trained on the breadth of human output, your internal systems will only ever know what your people taught them. Right now, that represents only a thin slice of what your people actually know.</p><div><hr></div><h3><strong>What We&#8217;re Leaving Behind and What We Are Actually Inheriting</strong></h3><p>Hawkins worried about what AI would carry forward after we are gone. I am worried about what we are teaching right now and what we are forgetting to teach the humans alongside it.</p><p>If AI is genuinely the vessel for what we have built &#8212; carrying our accumulated knowledge past the firefly flash of our civilization &#8212; then what we put in the vessel is everything. A digital heir that learned by doing, by consequence, by failure and recovery, by the full arc of how expertise actually develops, carries something richer and more durable than one that learned only from what the last generation wrote down when they were already wise. The same is true of the human heirs working beside it.</p><p>Simulation is part of the answer. Building real consequence environments for both humans and AI &#8212; deliberately, at scale, as a design priority &#8212; is work that needs to start now and will take a decade to get right. But it is not the whole answer, and the people who need to be working on this problem are not yet working on it with anywhere near enough urgency.</p><p>We need more minds on this. Not just AI researchers. Not just HR leaders. Founders, educators, policymakers, anyone who understands that the arc from junior to senior to the kind of senior who builds the next generation &#8212; that arc is not optional infrastructure. It is the infrastructure. 
And it needs to start earlier than we think, in the formative years of education, empowering kids to learn faster in consequence-rich, engaging environments that build instinct for the real world without the real world&#8217;s costs.</p><p>The future is not just being automated. It is being inherited. And right now we are not leaving much worth inheriting.</p><p>We gave AI the apprentice seat. The human junior is still outside the building. We need to bring them back in, before there is no one left who remembers what the seat was for.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://amit.ampup.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Amit&#8217;s Substack! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Yes, You’re Royally Screwed. Now Here’s What to Do About It. 
]]></title><description><![CDATA[A SaaS Founder&#8217;s Survival Guide to the SaaSpocalypse | By Amit Prakash | Founder & CEO of AmpUp | Co-founder & Former CTO of ThoughtSpot]]></description><link>https://amit.ampup.ai/p/yes-youre-royally-screwed-now-heres</link><guid isPermaLink="false">https://amit.ampup.ai/p/yes-youre-royally-screwed-now-heres</guid><dc:creator><![CDATA[Amit Prakash]]></dc:creator><pubDate>Sat, 21 Feb 2026 18:57:52 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c7424f2d-2210-4fc9-9337-9feeb8b2600f_1376x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It&#8217;s 7am on a Monday. You&#8217;re a SaaS CEO staring at your stock ticker. Your company is down 34% in three weeks. Your Slack is lighting up, your board chair wants an &#8220;AI strategy deck by Friday.&#8221; Your VP of Engineering just texted you that he&#8217;s leaving for an AI startup. And your biggest customer just asked on their QBR: &#8220;Why am I paying $80 per seat when an AI agent can do this for $0.02 per task?&#8221;</p><p><strong>Welcome to the SaaSpocalypse.</strong></p><p>I&#8217;m not going to sugarcoat this. If you&#8217;re running a traditional SaaS company &#8212; one that sells access to software on a per-seat, per-month basis &#8212; you are in serious trouble. In early February 2026, nearly $2 trillion in software market capitalization evaporated. Atlassian dropped 35% in a single week. Salesforce, ServiceNow, Intuit &#8212; all cratered. Traders at Jefferies coined the term &#8220;SaaSpocalypse&#8221; and described the selling as &#8220;get me out&#8221; style.</p><p>Satya Nadella said it plainly: SaaS applications are &#8220;essentially CRUD databases with a bunch of business logic,&#8221; and that business logic is migrating to the AI tier. 
When the CEO of the company that <em>defined</em> the modern software industry tells you the model is dying, you should listen.</p><p>But here&#8217;s what I want to tell you that nobody else is saying: <strong>the SaaSpocalypse isn&#8217;t about AI replacing your software. It&#8217;s about you running a software company when you need to be running an AI company.</strong> Those are fundamentally different things. And the journey from one to the other is the hardest transformation you&#8217;ll ever make &#8212; harder than going from on-prem to cloud, harder than going from perpetual licenses to subscriptions.</p><p>I know because I&#8217;ve made that journey. Four times &#8212; at Microsoft, Google, ThoughtSpot, and now AmpUp. More on that in a moment.</p><div><hr></div><h3>The Five Stages of SaaSpocalypse Grief</h3><p>Before I tell you what to do, let me tell you where you probably are. Every SaaS CEO I talk to is somewhere on this spectrum:</p><p><strong>Denial. </strong>&#8220;We already have AI features. We added a chatbot last quarter.&#8221; Bolting AI onto your SaaS product is like putting a spoiler on a minivan and calling it a race car.</p><p><strong>Anger. </strong>&#8220;This is just hype. Our customers aren&#8217;t going to switch.&#8221; You might be right &#8212; for the next 12 months. But the companies building AI-native alternatives are getting smarter every day, and your product is staying exactly the same.</p><p><strong>Bargaining. </strong>&#8220;Maybe we can acquire an AI company. Or hire a Head of AI.&#8221; This is the most dangerous stage because it feels like action but it&#8217;s actually avoidance. You can&#8217;t outsource your way to becoming an AI company any more than Blockbuster could have outsourced its way to becoming Netflix.</p><p><strong>Depression. </strong>&#8220;Our moat is gone. A teenager with Claude can replicate our core product in a weekend.&#8221; Partially true &#8212; but code was never your real moat. 
Your understanding of your customer&#8217;s problem was.</p><p><strong>Acceptance. </strong>&#8220;We need to fundamentally transform.&#8221; If you&#8217;re here, keep reading. This is where the real work starts.</p><p>I&#8217;ve lived this transformation four times &#8212; at Microsoft, Google, ThoughtSpot, and now AmpUp &#8212; and I&#8217;ve been on both sides of it. I&#8217;ve seen what doesn&#8217;t work (4&#8211;6 month AI cycles at Microsoft, like running a marathon in ski boots), what does work (shipping 50 experiments a quarter at Google against a $10B revenue base), and what happens when you&#8217;re doing AI before anyone believes in it (building a $4.5B company at ThoughtSpot starting in 2012). What I learned across all four: <strong>the transformation from SaaS company to AI company requires you to rethink nearly everything about how your company operates.</strong> Not &#8220;add an AI feature.&#8221; Not &#8220;hire a machine learning team.&#8221; <em>Rethink everything.</em></p><div><hr></div><h3>SaaS Company vs. AI Company: It&#8217;s Not What You Think</h3><p>Everyone is talking about &#8220;adding AI&#8221; to their product. That&#8217;s the wrong frame entirely. The difference between a SaaS company and an AI company isn&#8217;t whether you use machine learning somewhere in your stack. It&#8217;s a difference in DNA.</p><p>Here&#8217;s the simplest test I know: <strong>if you turned off every AI component in your product tomorrow, would your customers still pay?</strong> If yes, you&#8217;re a SaaS company with AI features. If no, you might actually be an AI company.</p><p>Gong without AI is still a call recording and CRM platform. Salesforce without AI is still a database with forms. Atlassian without AI is still a ticketing system. Turn off the AI and the product still works. The AI is decoration. Valuable decoration, maybe &#8212; but decoration.</p><p>AmpUp without AI is nothing. 
AmpUp (ampup.ai) is building a sales brain &#8212; a continuously learning engine that gets smarter from every customer interaction, every call, every deal. It prepares reps before every conversation, coaches them after, and compounds those learnings across the entire organization. There is no product without the AI. The AI doesn&#8217;t enhance the experience; the AI is the experience.</p><blockquote><p><em><strong>Apply the &#8220;turn off the AI&#8221; test to your own company. Be honest about what you find.</strong></em></p></blockquote><p>But the difference goes deeper than product architecture. It&#8217;s a different way of building, a different way of thinking, and a different way of making promises. And that last word &#8212; <em>promises</em> &#8212; is where the transformation has to start.</p><div><hr></div><h3>Step 1: Stop Selling Access. Start Making Promises.</h3><p>The single most important shift in becoming an AI company isn&#8217;t technical. It&#8217;s about fundamentally changing what you offer your customers.</p><p>A SaaS company sells <strong>access.</strong> &#8220;Here&#8217;s a login. Here&#8217;s what you can do with our tool. Good luck.&#8221;</p><p>A better SaaS company sells <strong>capability.</strong> &#8220;Our platform enables you to manage your sales pipeline, track your metrics, and coach your team.&#8221;</p><p>An AI company makes a <strong>promise</strong> &#8212; a specific, measurable outcome it commits to delivering. &#8220;We will deliver X outcome in Y timeframe &#8212; and we&#8217;ll prove it.&#8221;</p><blockquote><p><em><strong>The progression from access to capability to promise is the entire SaaS-to-AI journey in three words.</strong></em></p></blockquote><p>And until you know what promise you&#8217;re making, nothing else matters &#8212; not your architecture, not your hiring plan, not your AI roadmap. 
The promise is the North Star that everything else serves.</p><p>At <a href="https://www.ampup.ai/">AmpUp</a>, our promise is specific: we will bend the sales growth curve for your company so that you close 30&#8211;100% more incremental ACV within the next 6 months. That&#8217;s AmpUp&#8217;s promise &#8212; yours will be different. But we can make that promise because every interaction generates data about what works. Every pre-call briefing, every post-call debrief, every coaching moment feeds back into the system. We measure whether the rep&#8217;s behavior actually changed. We measure whether that change improved outcomes. And we compound those learnings across the entire organization.</p><p>Gong can&#8217;t make that promise. Gong sells access to a platform that records and analyzes calls. It&#8217;s a powerful tool. But it&#8217;s a tool. The customer is responsible for translating Gong&#8217;s insights into changed behavior and improved outcomes. That&#8217;s the gap between a SaaS company and an AI company.</p><p><strong>So here&#8217;s your first step.</strong> Figure out what promise you can make to your customers. Not what features you can build. Not what access you can sell. What <em>promise</em> can you make? What outcome will you deliver?</p><h3>Step 2: Rethink Your Engineering Culture</h3><p>This is the part nobody wants to hear. But once you&#8217;ve committed to a promise, you&#8217;ll immediately realize your current engineering culture can&#8217;t deliver it.</p><p>Your SaaS engineering culture &#8212; the one that made you successful &#8212; is optimized for the wrong things. Your best SaaS engineers gather requirements, do principled design, anticipate worst cases, and ship polished, well-tested features. That discipline built your company. 
It&#8217;s also what will kill it.</p><p>Compare the mindsets:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dVx8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e749082-2936-4a89-960a-0d6a9d49b320_760x312.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dVx8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e749082-2936-4a89-960a-0d6a9d49b320_760x312.png 424w, https://substackcdn.com/image/fetch/$s_!dVx8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e749082-2936-4a89-960a-0d6a9d49b320_760x312.png 848w, https://substackcdn.com/image/fetch/$s_!dVx8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e749082-2936-4a89-960a-0d6a9d49b320_760x312.png 1272w, https://substackcdn.com/image/fetch/$s_!dVx8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e749082-2936-4a89-960a-0d6a9d49b320_760x312.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dVx8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e749082-2936-4a89-960a-0d6a9d49b320_760x312.png" width="760" height="312" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2e749082-2936-4a89-960a-0d6a9d49b320_760x312.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:312,&quot;width&quot;:760,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:61722,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://amit.ampup.ai/i/188732880?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e749082-2936-4a89-960a-0d6a9d49b320_760x312.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dVx8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e749082-2936-4a89-960a-0d6a9d49b320_760x312.png 424w, https://substackcdn.com/image/fetch/$s_!dVx8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e749082-2936-4a89-960a-0d6a9d49b320_760x312.png 848w, https://substackcdn.com/image/fetch/$s_!dVx8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e749082-2936-4a89-960a-0d6a9d49b320_760x312.png 1272w, https://substackcdn.com/image/fetch/$s_!dVx8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e749082-2936-4a89-960a-0d6a9d49b320_760x312.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>At Google, my team didn&#8217;t build features. We built the infrastructure to run 50 experiments a quarter, measure them with statistical rigor, and ship the ones that moved the needle. That scaffolding was our real competitive advantage, not any single algorithm.</p><blockquote><p><em><strong>Your best SaaS engineers might be your worst AI engineers. The habits that made them great can actively work against them.</strong></em></p></blockquote><p>This doesn&#8217;t mean they can&#8217;t learn. But it means the transformation isn&#8217;t just strategic, it&#8217;s cultural and personal. You&#8217;re asking people to unlearn the instincts that made them successful.</p><p><strong>One caveat: </strong>I&#8217;m not saying systems engineering discipline goes away. You still need reliable data pipelines, solid APIs, and infrastructure that scales. 
The AI engineering mindset lives <em>on top of</em> solid systems engineering, not instead of it. The mistake is letting the systems mindset be the <em>only</em> mindset.</p><div><hr></div><h3>Step 3: Stop Building Features. Start Building Scaffolding.</h3><p>You&#8217;ve made your promise. You&#8217;ve started shifting your engineering culture. Now here&#8217;s the tactical question: what should your engineers actually be building?</p><p>If I could give every SaaS CEO one piece of advice, it would be this: <strong>the most important code at an AI company isn&#8217;t the product. It&#8217;s the scaffolding around the product.</strong></p><p>What do I mean by scaffolding? The measurement system that tells you whether your AI actually changed a customer&#8217;s outcome. The experimentation framework that lets you test 50 ideas a quarter instead of shipping 4 releases. The data collection pipeline that turns every user interaction into training signal. The evaluation infrastructure that tells you if your model is getting better or worse.</p><p>Think about what sits underneath Claude or ChatGPT. You see a chat box. A simple text input and a response. But underneath that tiny interface is one of the most sophisticated scaffolding systems ever built &#8212; RLHF pipelines, evaluation frameworks, measurement systems, data collection infrastructure, safety testing, red-teaming processes. The iceberg underneath the waterline is massive.</p><blockquote><p><em><strong>The visible product is 10% of the value. The scaffolding is the other 90%. Most SaaS companies have the iceberg upside down.</strong></em></p></blockquote><p><strong>Fire your product roadmap.</strong> I mean it. A SaaS company has a 12-month roadmap with committed features. An AI company has a 12-month <em>learning</em> roadmap with hypotheses to test. 
Replace &#8220;Build Advanced Analytics Dashboard &#8212; Q3&#8221; with &#8220;Achieve 15% improvement in customer outcome X &#8212; Q3, via 20 experiments.&#8221;</p><div><hr></div><h3>Step 4: When Code Is Free, Taste Is the Edge</h3><p>Here&#8217;s the most counterintuitive thing I&#8217;ll say in this entire post: <strong>in the age of AI, the most valuable skill isn&#8217;t engineering. It&#8217;s taste.</strong></p><p>Generating code is now essentially free. A capable developer with an AI coding assistant can produce in hours what used to take weeks. The barrier to building software has collapsed. Which means your competitive advantage can no longer be &#8220;we have good engineers who can build complex software.&#8221; Everyone can build complex software now.</p><p>So what&#8217;s actually scarce?</p><p><strong>Product taste.</strong> The judgment to know what to build and, more importantly, what not to build.</p><p><strong>Customer understanding.</strong> The deep, almost intuitive understanding of what your customer is actually trying to accomplish &#8212; which is often different from what they say they want.</p><p><strong>Positioning.</strong> The ability to incept a concept in a customer&#8217;s mind. To create a category, name it, and own it before anyone else even realizes it exists.</p><p><strong>Taste in AI product design.</strong> AI is brilliant 80% of the time and baffling 20% of the time, and the UX is what bridges that gap. At ThoughtSpot, we were radically data-poor, and we learned that smoothing the jagged edge isn&#8217;t a technical problem. It&#8217;s a product taste problem. Thoughtful UX that sets the right expectations, fallback mechanisms that maintain trust, feedback loops that close the gap over time. That playbook works whether you have a billion data points or a thousand.</p><p>Look at why Claude is beating ChatGPT for a significant and growing segment of users. It&#8217;s not because Anthropic has a fundamentally better model. 
It&#8217;s because Anthropic has better taste. They understood a specific user deeply &#8212; the thoughtful knowledge worker who wants a thinking partner, not a parlor trick. That&#8217;s product taste plus customer understanding plus positioning. And it&#8217;s winning against a company with 10x the resources and a massive head start.</p><p>And this connects directly back to the promise. When you&#8217;re selling access, taste is a nice-to-have &#8212; users will tolerate clunky UX if the tool is useful. When you&#8217;re making a promise about outcomes, taste is existential. Because the AI <em>will</em> be wrong sometimes, and taste is what keeps the customer trusting your promise while the system learns.</p><blockquote><p><em><strong>When code is free and models are commoditized, the company with the best taste wins.</strong></em></p></blockquote><div><hr></div><h3>Step 5: Your Moat Is Your Rate of Learning</h3><p>If I had to distill everything I&#8217;ve learned across Microsoft, Google, ThoughtSpot, and AmpUp into a single principle, it&#8217;s this: <strong>your moat is not your code, your data, or your brand. Your moat is your rate of learning.</strong></p><p>The company that runs 50 experiments a quarter will crush the company that ships 4 releases a quarter, even if the second company has 10x more engineers and 100x more data. Because each experiment teaches you something. Each learning compounds. And over time, the gap becomes uncrossable.</p><p>This is what I lived at Google. We weren&#8217;t smarter than the Bing team at Microsoft. We weren&#8217;t working harder. We were learning faster. Our cycle time from hypothesis to result was days, not months. That compounding advantage is what made the difference.</p><p>Here&#8217;s how to think about it mathematically. 
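</p><p><em>As a quick sanity check, the compounding can be computed directly. This is an illustrative sketch only; the 1%-per-week and 1%-per-quarter rates are the hypothetical figures used in this example, and two years of weekly 1% gains compound to just over 181%, which I round to 180% below.</em></p>

```python
# Two hypothetical improvement cadences: 1% gain per weekly experiment
# cycle vs. 1% gain per quarterly release cycle.

def compounded_improvement(rate_per_cycle: float, cycles: int) -> float:
    """Total fractional improvement after compounding over `cycles` cycles."""
    return (1 + rate_per_cycle) ** cycles - 1

for years in (1, 2):
    weekly = compounded_improvement(0.01, 52 * years)      # 52 cycles/year
    quarterly = compounded_improvement(0.01, 4 * years)    # 4 cycles/year
    print(f"After {years} year(s): {weekly:.0%} vs {quarterly:.0%}")
# After 1 year(s): 68% vs 4%
# After 2 year(s): 181% vs 8%
```

<p>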
If your company learns and improves 1% per week through rapid experimentation, and your competitor improves 1% per quarter through traditional releases, after one year you&#8217;ve compounded 68% improvement while they&#8217;ve compounded 4%. After two years it&#8217;s 180% vs. 8%. The gap is exponential, and it never closes.</p><p>I know this from the hard side too. At ThoughtSpot &#8212; a company that was AI-native from day one &#8212; we nearly got killed because our rate of learning about the <em>market</em> wasn&#8217;t fast enough. We missed the cloud migration wave by four years. We were selling a bundled stack (database + BI + AI) when customers were moving to unbundled best-of-breed. Both times, the signals were there. Customers were telling us. The data was visible. But we weren&#8217;t set up to learn from those signals fast enough and act on them. Rate of learning isn&#8217;t just about your models. It&#8217;s about everything.</p><p>That lesson is literally why AmpUp exists. Our mission is to increase your organizational velocity by two orders of magnitude &#8212; because I learned the hard way that being AI-native isn&#8217;t enough if you can&#8217;t learn fast enough to keep up with where your market is going.</p><blockquote><p><em><strong>Most SaaS companies don&#8217;t even have a &#8220;rate of learning&#8221; metric. They measure output, not learning.</strong></em></p></blockquote><div><hr></div><h3>The Hard Truth About the Journey Ahead</h3><p>I want to be straight with you. 
This transformation is going to be brutal.</p><p><strong>You will lose engineers who don&#8217;t want to work this way,</strong> who find comfort in careful design and polished releases.</p><p><strong>You will have to rebuild your measurement infrastructure from scratch</strong>, because NPS scores don&#8217;t measure customer outcomes.</p><p><strong>You will ship things that feel half-baked</strong>, because you&#8217;re optimizing for learning speed.</p><p><strong>Your board will push back</strong> when you tell them you&#8217;re replacing the feature roadmap with a learning roadmap.</p><p><strong>Your pricing model will have to change</strong>, because you can&#8217;t charge per seat when the AI is doing the work.</p><p><strong>Your sales team will need to learn</strong> to sell promises instead of feature lists. That&#8217;s a completely different skill.</p><p>All of this is going to be uncomfortable. Most of it will feel like going backward before you go forward.</p><p>But the alternative is worse. Because the companies that figure this out &#8212; the ones that build the scaffolding, obsess over their rate of learning, develop the product taste to smooth the jagged edge, and have the courage to make real promises &#8212; those companies will eat your lunch. Not because they have better code, but because they have better feedback loops and they&#8217;re getting smarter every single day while your product stays exactly the same.</p><div><hr></div><h3>Five Principles, If You Remember Nothing Else</h3><p>If this piece leaves you with anything, let it be these:</p><p><strong>Your product is not your moat. Your rate of learning is.</strong> The company that compounds learning fastest wins &#8212; not the one with the most features, the most data, or the most engineers.</p><p><strong>If you can&#8217;t articulate your promise, you don&#8217;t have one.</strong> Not what your product does. What outcome you deliver. 
If the answer is &#8220;we give customers access to a platform,&#8221; you&#8217;re still a SaaS company.</p><p><strong>Build the scaffolding before you build the product.</strong> Measurement systems, experimentation infrastructure, feedback loops &#8212; these are the 90% of the iceberg that makes the 10% your customers see actually work.</p><p><strong>The culture shift is harder than the technology shift.</strong> You&#8217;re asking people to unlearn the instincts that made them successful. That&#8217;s personal, not just strategic.</p><p><strong>Learning velocity applies to everything &#8212; not just your models.</strong> Go-to-market, positioning, pricing, hiring. The companies that treat every function as an experiment will outlearn the ones that only experiment in the lab.</p><div><hr></div><blockquote><p><strong>The SaaSpocalypse is real. The panic is justified. But panic without direction is just noise.</strong></p></blockquote><p>The path out isn&#8217;t &#8220;add AI features.&#8221; The path out is to become a fundamentally different kind of company &#8212; one that makes promises instead of selling access, one that builds scaffolding instead of features, one that measures outcomes instead of engagement, and one that treats its rate of learning as its most precious competitive asset.</p><p>In five years, the SaaS companies that survive won&#8217;t look like SaaS companies anymore. They&#8217;ll be learning machines that happen to sell software. 
The ones that don&#8217;t make that shift will still exist on paper &#8212; but they&#8217;ll be the next generation of legacy systems, waiting to be acquired for their customer lists.</p><p>You already know which stage you&#8217;re in.</p><p><strong>The only question is whether you move before your customers do &#8212; or after.</strong></p><div class="pullquote"><p><em>Amit Prakash is the founder and CEO of AmpUp, an AI company building a sales brain that continuously learns from every customer interaction, and the co-founder and former CTO of ThoughtSpot, a $4.5B analytics company. Previously, he led machine learning teams at Google and worked on Bing at Microsoft. He has been building AI products since 2012.</em></p></div>]]></content:encoded></item><item><title><![CDATA[You Can&#8217;t Feed an Ocean Through a Straw: The Case for Predictive Enablement]]></title><description><![CDATA[Why Broadcast Enablement Is Broken &#8212; and How Predictive Enablement Fixes It]]></description><link>https://amit.ampup.ai/p/you-cant-feed-an-ocean-through-a</link><guid isPermaLink="false">https://amit.ampup.ai/p/you-cant-feed-an-ocean-through-a</guid><dc:creator><![CDATA[Amit Prakash]]></dc:creator><pubDate>Mon, 09 Feb 2026 16:42:29 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/11786dce-52b7-496c-834d-fe075af54dd1_721x394.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sales enablement isn&#8217;t failing because content is bad. It&#8217;s failing because it&#8217;s trying to pour an ocean of knowledge through a straw of rep attention. The fix isn&#8217;t more training; it&#8217;s Predictive Enablement: the right insight, right before the call.</p><p>Let me show you how.</p><div><hr></div><h2>The SKO Hangover</h2><p>Let&#8217;s picture a typical SKO: the annual Sales Kick-Off. Spanning three days, it typically looks like this:</p><p><em>Day one:</em> vision and celebration.</p><p><em>Day two:</em> product and strategy.</p><p><em>Day three:</em> competitive positioning, new messaging frameworks, objection handling, demo walkthroughs.</p><p>Your reps take notes. Everyone seems engaged. Leaders are elated. By the end of day three, everyone&#8217;s exhausted but energized. 
This year will be different.</p><p><em>Two weeks later</em></p><p>A rep walks into a discovery call. The prospect mentions a competitor feature. The rep knows they learned counter-positioning at SKO. It was a good session. But which session? What was the specific angle? Security or scalability?</p><p>They fumble through it. Make a mental note to review the deck later. But never do.</p><p>What we don&#8217;t realize is that the knowledge retention curve is savage. Research on the forgetting curve suggests that in the months following SKO, most of what was taught is gone. In my experience, by week four, maybe 15% of SKO content is still retrievable. By week eight, it&#8217;s close to 5%.</p><p>We assume the SKO&#8217;s influence will last all year, but the half-life of batch learning is brutally short.</p><p>Last week I wrote about diagnosis: <a href="https://www.ampup.ai/blog/system-that-makes-your-best-reps-look-like-kindergarteners">why systems can&#8217;t identify the skill gaps that make reps look like kindergartners</a>. The pushback I got: </p><blockquote><p><em>&#8220;We&#8217;re already working on that. Why isn&#8217;t it working?&#8221;</em></p></blockquote><p>Because even if you solve the diagnosis perfectly, you&#8217;re still trying to feed an ocean through a straw.</p><p>What you are doing is Broadcast Enablement; what you need is Predictive Enablement.</p><div><hr></div><h2><strong>The Straw, the Ocean, and Why It Keeps Getting Worse</strong></h2><p>Your company sits on an ocean of knowledge that could matter in closing a deal: product features, competitive intel, customer stories, industry trends, pricing, persona insights.</p><p>And your rep has a straw. On a good week, they get an hour for focused learning. Most weeks, it&#8217;s just five-minute gaps between meetings. Yet broadcast enablement keeps trying to pour the entire ocean through that straw. SKO packs in three days. 
QBRs, Slack channels, sales enablement sessions, and LMS modules add more.</p><blockquote><p><em>The assumption is that if we push enough information, something will stick.</em></p></blockquote><p>But it doesn&#8217;t. Most of it spills. When that happens, we tell ourselves the rep didn&#8217;t pay attention, didn&#8217;t practice enough, didn&#8217;t take enablement seriously. But the problem isn&#8217;t effort. <em><strong>It&#8217;s the underlying delivery model.</strong></em></p><p>The straw was never designed to channel the ocean. It was meant to pull the right drop at the moment it matters. And that flaw in the model is becoming impossible to ignore.</p><p>Here are a few real-life scenarios:</p><ul><li><p>A VP at a fast-growing company told me they spent three months building competitive enablement: battle cards, recorded training, practice scenarios. By the time it launched, three competitors had shipped major updates. The content was already outdated. They refreshed it. Three months later, it was stale again.</p></li><li><p>At large enterprise companies, the delay is even worse. A feature launches. Engineering writes documentation. Product marketing translates it. Enablement builds curriculum. Content gets reviewed. Legal signs off. Training gets scheduled. Reps finally complete the modules. The result: six months from feature launch to field readiness.</p></li></ul><blockquote><p><em>Even flawless content is useless if it arrives after the customer conversation where it mattered. That&#8217;s not a content-quality issue. It&#8217;s a delivery-model issue.</em></p></blockquote><p>And now the environment is accelerating because of AI.</p><p>Product cycles that once took quarters now take weeks. Competitive shifts that used to happen annually now happen monthly. Customer expectations jumped overnight because someone saw a demo on X. As that same VP put it, &#8220;We used to update positioning quarterly. 
Now it feels like it should be weekly.&#8221;</p><p>The ocean isn&#8217;t just large anymore. It&#8217;s constantly expanding. But the straw hasn&#8217;t changed. You can&#8217;t fix this by pushing harder. The model was never built for how reps actually learn today.</p><div><hr></div><h2>How Your Best Reps Actually Prepare</h2><p>When enablement can&#8217;t deliver the right insight at the right moment, reps don&#8217;t wait. They adapt. The best reps quietly build their own system.</p><p>Here&#8217;s what your best rep is doing. (Remember Sarah from last week?) Before every customer meeting, she spends 30&#8211;45 minutes on specific research. What does this company do? What are their strategic priorities? What problems are they likely facing? How does our product map to those problems? What objections should I anticipate? What customer stories will resonate?</p><p>She&#8217;s not pulling from some central repository. She&#8217;s synthesizing in real time. Industry news, company earnings calls, LinkedIn activity, analyst reports, product launches, competitive moves.</p><p>Every meeting gets custom preparation. This works. But it&#8217;s not scalable. Most reps don&#8217;t do it because they can&#8217;t. Instead, they&#8217;re stuck in back-to-back meetings, dealing with admin, trying to close deals.</p><blockquote><p><em>That&#8217;s why reps like Sarah look like exceptions. Like they have some special talent. 
But really, they&#8217;ve just accepted that official enablement won&#8217;t help them, so they&#8217;ve built their own system.</em></p></blockquote><p>That&#8217;s the real enablement problem.</p><p>The question isn&#8217;t &#8220;how do we get more reps to be like Sarah?&#8221; It&#8217;s &#8220;how do we automatically give everyone what Sarah has to build manually?&#8221;</p><div><hr></div><h2>From Broadcast to Predictive: A Different Architecture</h2><p>What if instead of pushing the ocean through the straw, you predicted which single drop the rep needs right now?</p><p><em><strong>From:</strong> &#8220;Here&#8217;s everything you need to know about competitive positioning.&#8221;</em></p><p><em><strong>To:</strong> &#8220;You have a meeting with a fintech company in 30 minutes. They&#8217;re evaluating you against Competitor X. Here&#8217;s the one positioning angle that&#8217;s worked in the last three deals with similar companies. Here&#8217;s the one objection they&#8217;ll probably raise and how to handle it. Here&#8217;s the one customer story that will resonate.&#8221;</em></p><p>Bite-sized. Contextual. Just-in-time.</p><p>This is <strong>Predictive Enablement</strong>.</p><p>It&#8217;s not just a feature. It&#8217;s a fundamentally different model for how sales teams learn and improve. Where broadcast enablement pushes content at reps hoping something sticks, Predictive Enablement does three things simultaneously:</p><p><strong>1. Predicts</strong> what each rep needs for this specific moment</p><p><strong>2. Learns</strong> from what actually works in real conversations</p><p><strong>3. Coaches</strong> in the flow of work, when decisions matter</p><p>A GTM strategist puts it this way: &#8220;I should be able to say, &#8216;I&#8217;m meeting a CTO at a financial services company, help me prep,&#8217; and get guidance based on what&#8217;s worked in similar conversations.&#8221;</p><p>Not a quarterly training. Not a 47-page deck. 
Help that shows up when it matters.</p><h4><strong>Why You Can&#8217;t Retrofit This</strong></h4><p>You can&#8217;t get to Predictive Enablement by adding AI to broadcast enablement. The architectures are fundamentally different.</p><p><strong>Broadcast enablement is built for:</strong></p><ul><li><p>Creating content once, distributing to everyone</p></li><li><p>Quarterly update cycles</p></li><li><p>Completion tracking as the success metric</p></li><li><p>Generic best practices that work &#8220;on average&#8221;</p></li></ul><p><strong>Predictive Enablement is built for:</strong></p><ul><li><p>Identifying the specific intervention each rep needs right now</p></li><li><p>Continuous learning from actual outcomes</p></li><li><p>Application and impact as the success metric</p></li><li><p>Personalized guidance based on what&#8217;s working today</p></li></ul><p>Different data models. Different feedback loops. Different paradigm.</p><div><hr></div><h2>Why Most &#8220;AI Enablement&#8221; Fails</h2><p>Everyone&#8217;s talking about AI for sales prep. Most &#8220;AI enablement&#8221; today is just a chat interface on top of stale content. That&#8217;s a faster straw, not a different model.</p><p><strong>The Hard Truth:</strong> Without proper architecture, you get AI slop at scale. Hallucinations. Generic advice. Confidently wrong recommendations.</p><blockquote><p><em>Real Predictive Enablement requires what I call a Sales Brain &#8212; a deep context graph. 
It connects every customer conversation to what actually worked, CRM patterns, product knowledge mapped to personas, evolving competitive intelligence, win/loss analysis tied to specific plays, and individual rep skill profiles based on real performance.</em></p></blockquote><p>The system must know:</p><ul><li><p><strong>Who the rep is meeting</strong> (not just company name, but stage, persona, context)</p></li><li><p><strong>What the rep already knows</strong> (not what they completed in the LMS, but what they&#8217;ve demonstrated in actual conversations)</p></li><li><p><strong>What&#8217;s worked recently</strong> (not best practices from two years ago, but what&#8217;s landing right now in similar situations)</p></li><li><p><strong>What&#8217;s changing</strong> (competitive moves, product updates, market shifts that affect this specific conversation)</p></li></ul><p>Then it must synthesize all of this into something consumable in five minutes, with guardrails. No net-new claims without human review. Internal sources cited with recency. Feedback loops from outcomes so the system learns what actually worked.</p><p>Without these constraints, you just get hallucinations at scale. And that&#8217;s not better than broadcast enablement; it&#8217;s just chaos, delivered faster.</p><div><hr></div><h2>How Predictive Enablement Changes Your Role</h2><p>Put simply, Predictive Enablement changes the enablement team&#8217;s role from content factory to orchestrator.</p><p>You&#8217;re not manually building every training module. Not manually updating every battle card. 
Not manually assessing every rep&#8217;s skill gaps.</p><p>Instead, you&#8217;re setting the conditions for the system to learn and adapt: defining what &#8220;good&#8221; looks like, curating the knowledge sources, identifying the moments that matter, reviewing what the system surfaces, and watching patterns across the field.</p><p>The system does all the heavy lifting - synthesis, personalization, just-in-time delivery, and continuous learning from outcomes. You do the more important work - judgment calls, quality control, strategic direction.</p><p>Same small team. Massively different leverage.</p><h4><strong>Imagine This at Scale</strong></h4><p>Imagine a three-person enablement team orchestrating personalized coaching for 500 reps in real time. The system identifies which reps have financial services meetings this week and generates relevant briefs based on what&#8217;s worked in similar deals. The enablement lead reviews them, refines positioning, and approves. Reps get exactly what they need when they need it.</p><p>Or imagine the system flagging that reps are struggling with a specific objection showing up in 23% of calls. The enablement lead records a five-minute coaching video on how to handle it. The system surfaces it to reps before their next calls, where that objection is likely, and tracks whether it actually helped.</p><blockquote><p><em>That&#8217;s the key: Predictive Enablement doesn&#8217;t just deliver content. It learns whether the intervention worked. It&#8217;s a self-evolving system.</em></p></blockquote><div><hr></div><h2><strong>How We Are Solving This at AmpUp.ai</strong></h2><p>At <a href="https://www.ampup.ai/">AmpUp.ai</a>, we have spent more than a year solving this problem and built Predictive Enablement into the workflow for every rep with Atlas, our AI Sales Coach + Chief of Staff.</p><p><strong>Meeting Prep Agent: </strong>Atlas knows everything about your prospect before you walk in.
Account history, likely objections, stakeholder mapping, and winning plays&#8212;delivered automatically for each conversation. Sarah&#8217;s 45 minutes of research, done in seconds.</p><p><strong>Practice Partner Agent: </strong>Atlas role-plays your toughest objections until you nail them. Trained on your actual buyers and their objections, it helps reps build muscle memory for winning before high-stakes calls.</p><p><strong>Call Review Agent:</strong> Atlas catches what you missed and turns it into your next win. 100% coaching coverage. Every call reviewed, every rep coached, every moment captured. The system learns what worked and updates the model.</p><p>Together, they create that continuous loop: predict what you need, coach in the moment, learn what worked, and get better over time.</p><p><strong>That&#8217;s Predictive Enablement in practice and that&#8217;s what we&#8217;re building at AmpUp.ai &#8212; happy to share what we&#8217;ve learned.</strong></p><div class="pullquote"><p>The world has changed. Product cycles are measured in weeks. Competitive landscapes shift monthly. AI makes everything move faster.</p><p>You can&#8217;t solve this with better straws. You can&#8217;t solve it with more enablement headcount. You need a model built for the world we&#8217;re in.</p><p><strong>How often does your team update battle cards &#8212; weekly, monthly, or quarterly?</strong> I&#8217;m genuinely curious whether the three-month cycle is universal or if some companies have cracked faster content velocity. Drop a comment. I&#8217;d love to hear how different orgs are tackling this.</p></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://amit.ampup.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Amit&#8217;s Substack!
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The System That Makes Your Best Reps Look Like Kindergarteners]]></title><description><![CDATA[Same week, different conversations.]]></description><link>https://amit.ampup.ai/p/the-system-that-makes-your-best-reps</link><guid isPermaLink="false">https://amit.ampup.ai/p/the-system-that-makes-your-best-reps</guid><dc:creator><![CDATA[Amit Prakash]]></dc:creator><pubDate>Fri, 30 Jan 2026 14:43:18 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!oQx_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271cc0b2-384a-46fa-8239-2a74b3970213_1376x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!oQx_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271cc0b2-384a-46fa-8239-2a74b3970213_1376x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!oQx_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271cc0b2-384a-46fa-8239-2a74b3970213_1376x768.png 424w, 
https://substackcdn.com/image/fetch/$s_!oQx_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271cc0b2-384a-46fa-8239-2a74b3970213_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!oQx_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271cc0b2-384a-46fa-8239-2a74b3970213_1376x768.png 1272w, https://substackcdn.com/image/fetch/$s_!oQx_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271cc0b2-384a-46fa-8239-2a74b3970213_1376x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!oQx_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271cc0b2-384a-46fa-8239-2a74b3970213_1376x768.png" width="1376" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/271cc0b2-384a-46fa-8239-2a74b3970213_1376x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1376,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!oQx_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271cc0b2-384a-46fa-8239-2a74b3970213_1376x768.png 424w, 
https://substackcdn.com/image/fetch/$s_!oQx_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271cc0b2-384a-46fa-8239-2a74b3970213_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!oQx_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271cc0b2-384a-46fa-8239-2a74b3970213_1376x768.png 1272w, https://substackcdn.com/image/fetch/$s_!oQx_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F271cc0b2-384a-46fa-8239-2a74b3970213_1376x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Same week, different conversations.</p><p>Last week, I watched Sarah, an enterprise AE, prepare for a customer meeting like a chess grandmaster. She started with industry research, then company deep-dive, persona mapping, hypothesis building, and scenario planning.</p><p><em>&#8220;Great salespeople do the work outside the meeting,&#8221;</em> she told me. <em>&#8220;You&#8217;re anticipating anything that could happen.&#8221;</em></p><p>That same week, I heard from three different sales leaders: a VP at a growth-stage company, a CRO at a late-stage startup, a GTM exec at a $25B software giant. All of them used nearly the exact same terms to describe their teams:</p><p><em>&#8220;Kindergarteners who need everything spoon-fed.&#8221;</em></p><p><em>&#8220;Coin-operated. Won&#8217;t do anything unless there&#8217;s immediate benefit.&#8221;</em></p><p><em>&#8220;No intrinsic motivation to learn.&#8221;</em></p><p>But here&#8217;s the thing: both scenarios can&#8217;t be true at the same time, and yet somehow they are.</p><div><hr></div><h2>The Analogy: The Hospital That Doesn&#8217;t Believe in Diagnosis</h2><p>I&#8217;ve spent the last few months talking to a handful of sales leaders across growth-stage and enterprise companies. A strange pattern keeps showing up. Different markets, different products, different stages, but very similar enablement problems.</p><p>A GTM executive at that $25B enterprise software company (about 1,800 sellers; 3,500 field engineers) gave me the analogy that explained everything:</p><p><em>&#8220;We&#8217;re running enablement like a hospital that doesn&#8217;t believe in diagnosis. Twenty percent of patients struggle with one issue, so everyone gets that treatment. Thirty percent have something else? Everyone gets that treatment too. We keep adding medications without ever asking if THIS person needs THIS thing.
The mortality rate stays high and we blame the patients.&#8221;</em></p><p>He calls it <strong>Peanut Butter Enablement</strong>: spread everything across everyone and hope something sticks.</p><p>What really struck me is that this company has all the right tools on paper: conversation intelligence, AI coaching, a Learning Management System (LMS), and a dedicated enablement team. And yet, as he put it, <em>&#8220;We genuinely don&#8217;t know who&#8217;s good at what.&#8221;</em></p><p>Last year their CRO spotted a critical gap. Their sellers were great at talking to CIOs but couldn&#8217;t speak to line-of-business executives. So they spent eighteen months building industry enablement. They partnered with every vertical team, created comprehensive materials, and integrated an AI coaching platform.</p><p>Then they mandated the training for all 1,800 people.</p><p>After all that work, they learned something uncomfortable.</p><p> <em>&#8220;Many of them were already good at industry selling. We just didn&#8217;t know who.&#8221;</em></p><p>Think about the math for a second. If even 30% of those reps were already competent at industry selling, that&#8217;s 540 people who spent hours in training they didn&#8217;t need. Meanwhile, the 1,260 who actually needed help got the exact same generic content as everyone else. No personalization. No targeting. Just a very expensive hope that improvement would average out.</p><p>And what surprised me most is that this isn&#8217;t just an enterprise problem. The companies with the most resources are just as lost as the startups with spreadsheets.</p><div><hr></div><h2>What Happens When Your System Can&#8217;t Diagnose</h2><p>Here&#8217;s a test: Without looking at quota attainment.
Without asking managers to guess. Just from observing behavior alone, can you tell me right now who on your team is struggling with:</p><ul><li><p>Deep discovery versus surface-level questions?</p></li><li><p>Getting to economic buyers versus getting stuck with champions?</p></li><li><p>Multithreading versus single-threading deals?</p></li><li><p>Navigating procurement versus folding on first pushback?</p></li><li><p>Industry conversations versus generic tech talk?</p></li></ul><p>If you can&#8217;t answer these questions with confidence, you&#8217;re not really measuring sales skills. You&#8217;re measuring whether the sales reps still trust that enablement will help them.</p><p><strong>~~~~</strong></p><p><strong>How Reps Learn When Systems Don&#8217;t Help Them</strong></p><p>Here&#8217;s what Sarah (the enterprise AE I watched) has learned the hard way:</p><p>Her Tuesday looks like this:</p><ul><li><p><strong>9am</strong> negotiation meeting with an enterprise healthcare account.</p></li><li><p><strong>11am </strong>first discovery call with a fintech prospect.</p></li><li><p><strong>1pm</strong> cold calling block across mixed industries.</p></li><li><p><strong>3pm</strong> POC requirements review for retail.</p></li><li><p><strong>4pm</strong> customer success check-in with manufacturing.</p></li></ul><p>Five different contexts, five different companies. She constantly context-switches between verticals, personas, and deal stages. And after every call, she&#8217;s updating the CRM, sending follow-up emails, briefing internal stakeholders, and prepping for tomorrow&#8217;s meetings.</p><p>And in this chaos, Sarah learned a few things: battle cards go stale in three months. The LMS returns 1,500 unranked results when you search for anything useful. Role-play feels divorced from actual work. Training gets built for some mythical average rep who doesn&#8217;t exist.</p><p>So Sarah builds workarounds.
She does her own research, finds top performers and asks them directly. She skips the official training because it&#8217;s failed her before.</p><p>To a manager who doesn&#8217;t see this work in the background (let&#8217;s call it the invisible work), Sarah might look like someone going rogue, not following the process. But Sarah told me: <em>&#8220;I love tools that help me get better. I&#8217;d use them all the time. I want AI coaching, role-plays, and personalized feedback.&#8221;</em></p><p>She&#8217;s not resistant to learning. She&#8217;s just resistant to wasting time on things that don&#8217;t help.</p><p>That&#8217;s not kindergartener behavior. That&#8217;s sophisticated pattern-matching from someone who learned the system doesn&#8217;t support her.</p><div><hr></div><h2>When Coaching Becomes Random, Fear Becomes Rational</h2><p>I&#8217;ve been selling for the first time in my career this year. The thing that surprised me most is that the biggest blocker isn&#8217;t knowledge. It&#8217;s fear.</p><p>Fear shows up camouflaged:</p><ul><li><p><em>&#8220;I don&#8217;t have the vertical expertise&#8221;</em> which actually means fear of looking stupid</p></li><li><p><em>&#8220;I was being nice to the customer&#8221;</em> which actually means fear of losing the relationship</p></li><li><p><em>&#8220;I didn&#8217;t feel comfortable pushing harder&#8221;</em> which actually means fear of their reaction</p></li></ul><p>This is where Sarah&#8217;s behavior starts to make even more sense. Sarah&#8217;s real advantage isn&#8217;t that she&#8217;s smarter than the other reps. It&#8217;s that she has confidence. She can qualify hard, get to power players, ask uncomfortable questions, and build champions who do the work for her even when she&#8217;s not in the room.</p><p>But that confidence came from somewhere. Early wins that built momentum. Good coaching at her first company.
A safe practice environment where failure didn&#8217;t cost deals or invite judgment.</p><p>Here&#8217;s the connection most sales leaders miss: when systems can&#8217;t diagnose problems, coaching becomes random. Random coaching often creates fear because reps can&#8217;t predict what &#8220;good&#8221; looks like.</p><p>And when reps stop trusting the system, something else quietly takes over: self-protection.</p><p>So they get cautious.</p><p>Generic enablement doesn&#8217;t address much when the real barrier is emotional. AI role-plays without context don&#8217;t build confidence. And battle cards don&#8217;t help when you&#8217;re scared to use them.</p><p>What looks like &#8220;no intrinsic motivation&#8221; from outside is actually learned helplessness from systems that promised transformation and delivered theater.</p><div><hr></div><h2>How System Failures Turn Into a Death Spiral</h2><p>When enough reps start operating in self-protection mode, the problem stops being individual and becomes organizational. Each failed tool compounds the diagnosis problem, and the rep starts believing it: &#8220;<em>Maybe I&#8217;m the problem.&#8221;</em></p><p>Let&#8217;s take a look at how it pans out in reality.</p><p>That growth-stage VP? His reps are using Claude for call coaching. It&#8217;s a smart idea, but adoption is &#8220;inconsistent&#8221; because it&#8217;s not official. Only the experimenters found it.</p><p>The enterprise company? It has five different tools that don&#8217;t integrate. The CRO says: </p><p><em>&#8220;We spent a boatload. If someone gave us one platform that actually worked, there&#8217;s no shortage of dollars.&#8221;</em></p><p>But leadership hesitates: reps don&#8217;t make the most of the tools provided to them, which makes it difficult to justify further investment.</p><p>And reps don&#8217;t use tools because they&#8217;ve been burned too many times. The cycle repeats and reinforces itself.
Round and round it goes.</p><p>If you can&#8217;t diagnose, you can&#8217;t really enable. You can only broadcast. And broadcasting is how organizations end up blaming individuals for outcomes the system quietly created.</p><blockquote><p><em><strong>The label sticks: kindergarteners.</strong></em></p></blockquote><div><hr></div><h2>The Uncomfortable Truth: It&#8217;s Not Talent, It&#8217;s Luck.</h2><p>Here&#8217;s the part I often think about: Elite performers aren&#8217;t a different species. They&#8217;re often the people the system happened to work for in their early careers. They tasted early wins that built confidence before fear could calcify. They got good coaching when they started (rare, because managers are stretched). They had time to develop habits before pressure ramped up.</p><p>Everyone else? They get generic training, tools &#8220;on the side,&#8221; outdated content, and managers too busy to help them.</p><p>The difference isn&#8217;t talent. It&#8217;s luck. It&#8217;s the system&#8217;s randomness that happened to break in their favor. And that should bother you. Because here&#8217;s what a system with actual diagnosis would reveal:</p><p>Let&#8217;s go back to Sarah&#8217;s preparation ritual at the beginning of this post. That&#8217;s not just her being exceptional. That&#8217;s a learnable behavior pattern - staying on top of industry research, company analysis, hypothesis building, and scenario planning. Those are discrete and coachable skills that you could identify and measure.</p><p>But to enable this, you&#8217;d need to know who&#8217;s already doing them and who isn&#8217;t. Right now, you&#8217;re giving Sarah the same &#8220;how to prepare for meetings&#8221; training as the rep who shows up cold to every call.
One of them is wasting their time and the other is developing bad habits because they&#8217;re not being pushed.</p><p>Neither outcome is good.</p><div><hr></div><h2>What to Actually Do About This</h2><p>I&#8217;m not saying hire better people or invest in more tools. I&#8217;m saying build systems that can diagnose.</p><p>Stop identifying the best rep based on quota attainment. Stop asking managers to estimate who&#8217;s good at what. Stop giving everyone everything and hoping it sticks. <strong>Stop the Peanut Butter Enablement</strong>.</p><p>Instead, start with something simple: track one behavioral signal this week. Not quota attainment. Not activity metrics. Pick one skill from that test above (multithreading, deep discovery, executive access) and just watch who actually does it in their calls.</p><p>Don&#8217;t build a dashboard yet. Don&#8217;t buy a tool. Just notice the pattern.</p><p>That&#8217;s a diagnosis. Everything else follows from there.</p><p>The technology exists to do this at scale now. But until we deploy it properly, we&#8217;ll keep running hospitals that don&#8217;t believe in diagnosis. We&#8217;ll watch the mortality rate stay high and blame the patients. And we&#8217;ll keep producing &#8220;kindergarteners,&#8221; making smart, competent, capable people look like they need to be spoon-fed.</p><div><hr></div><p><strong>Where does enablement break first in your org?</strong></p><blockquote><p><em>I&#8217;m collecting stories from sales leaders. Drop a comment or <a href="https://www.linkedin.com/in/amit-prakash-50719a2/">DM me</a> on LinkedIn.</em></p></blockquote><p><em>Next week: Why even companies that solve diagnosis still can&#8217;t make enablement work.
You can&#8217;t feed an ocean through a straw, no matter how good your straw is.</em></p>]]></content:encoded></item><item><title><![CDATA[Why Top Sales Teams Will Never Be Automated ]]></title><description><![CDATA[(And How AI Multiplies Them Instead)]]></description><link>https://amit.ampup.ai/p/why-top-sales-teams-will-never-be</link><guid isPermaLink="false">https://amit.ampup.ai/p/why-top-sales-teams-will-never-be</guid><dc:creator><![CDATA[Amit Prakash]]></dc:creator><pubDate>Fri, 23 Jan 2026 16:10:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f341e0f5-dc69-4cc1-942c-49ca9edfe322_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Automation Vs Augmentation: Why Everyone Gets This Wrong</h2><p>The AI world has split into two camps. One says augment humans. The other says automate everything.</p><p>However, the debate itself is a trap as it treats augmentation and automation as competing strategies when one is actually the foundation for the other.
This framing is built on a fundamentally incorrect understanding of how AI actually learns.</p><p>Every organization faces a big strategic choice in how to use AI:</p><ul><li><p>Conservation Path - a safe, incremental approach that uses AI to augment workers and keeps humans in the loop</p></li><li><p>Visionary Path - go bold and automate entire functions, replacing humans and slashing costs</p></li></ul><p>If you want AI to automate complex work, you need augmentation first. Not because it&#8217;s safer. Because that&#8217;s how AI actually learns. And no, you can&#8217;t skip steps.</p><div><hr></div><p><em><strong>&#8220;Automation is what happens after you&#8217;ve turned tacit judgment into transferable knowledge.&#8221;</strong></em></p><div><hr></div><h2>How AI Actually Learns (The Inconvenient Truth)</h2><p>In practice, AI learns complex work through three channels:</p><p><strong>1. From existing documentation and datasets</strong></p><p>Medical textbooks, research papers, documented procedures. Billions of lines of open source code with patterns and examples. This works when knowledge is already formalized and available at scale. It&#8217;s good for pattern matching, code generation from documented examples, and answering questions from textbooks.</p><p><strong>2. From its own trial and error</strong></p><p>Playing millions of chess games against itself. Generating and testing variations until it figures out what works. This works when mistakes are cheap, feedback is instant, and failure has no real-world consequences.</p><p><strong>3. From observing expert humans</strong></p><p>Learning from the experts themselves. For instance, how a master physician reasons through ambiguous symptoms or how a skilled negotiator reads a room and adjusts strategy.</p><p>For most valuable work, you need a combination of #1 and #3. And #3 is usually the limiting factor.
Let me explain why.</p><div><hr></div><h2>Why Documentation Alone Isn&#8217;t Enough</h2><p>Software development shows both the power and the limits of learning from documentation.</p><p>AI coding tools like Claude Code and Cursor work remarkably well because they were trained on billions of lines of open source code. Common patterns, standard algorithms, framework usage, it&#8217;s all sitting there in GitHub repositories. It&#8217;s like learning to cook when you have access to every recipe ever written.</p><p>But even in the most documented domain, the most effective deployment of AI is still augmentation.</p><p>Developers make architectural decisions based on business requirements, handle novel problems that don&#8217;t match training patterns, and understand implicit requirements and edge cases. The tools achieve 80%+ accuracy on standardized coding benchmarks precisely because those benchmarks test documented patterns. On truly novel problems, accuracy drops significantly.</p><p>Now look at domains where even less is documented.</p><p>Ask Tom Brady how he reads a defense pre-snap and he&#8217;ll give you some rules about safety positioning and linebacker depth. But the real magic is in ten thousand reps that taught him to see patterns he can&#8217;t fully articulate.</p><p>&#8220;We call this intuition. AI calls it missing training data.&#8221;</p><p><strong>Your top sales performers have the same invisible expertise.</strong></p><p>They know when a prospect&#8217;s &#8220;yes&#8221; is actually a &#8220;not now.&#8221; They read the room in a multi-stakeholder meeting, adjust their pitch mid-sentence, and sense when to push or pivot. They navigate complex buying committees, building relationships with the economic buyer while keeping the technical buyer engaged.</p><p>The expertise isn&#8217;t in databases. It lives in expert practitioners&#8217; heads as patterns accumulated over years, skills built over hundreds of deals, not written in playbooks.
Your CRM documents what happened. Your top reps know what to notice before it happens. That&#8217;s the gap documentation can&#8217;t fill.</p><div><hr></div><h2>Why Trial and Error Fails in High-Stakes Domains</h2><p>&#8220;But wait,&#8221; you&#8217;re thinking, &#8220;can&#8217;t AI just learn by trying things and seeing what works?&#8221;</p><p>Sure. If you have unlimited time, money, and a high tolerance for catastrophic failure.</p><p>AlphaGo learned Go by playing itself 30 million times. Worked great. You know why? Because losing a game costs nothing. The stones don&#8217;t actually die. The territory doesn&#8217;t actually matter. It&#8217;s just bits flipping on a computer.</p><p>But you can&#8217;t let a car drive off a cliff 200 times to learn braking. You can&#8217;t let AI learn medicine by making mistakes on real patients. And you definitely can&#8217;t let it learn enterprise sales by burning through your pipeline.</p><p><strong>We let the AI run 500 discovery calls before it learned not to pitch product features before understanding the customer&#8217;s actual problem.</strong></p><p>Good news: it learned!</p><p>Bad news: your top 50 prospects are never returning your calls.</p><blockquote><p><em><strong>Simple rule: If failure is expensive, you don&#8217;t get to &#8220;let the model figure it out.&#8221;</strong></em></p></blockquote><p>The domains where automation would create the most value are precisely the domains where learning through trial and error would be criminally expensive. And in medicine, it would be literally criminal.</p><div><hr></div><h2>So How Does AI Actually Learn Complex Work?</h2><p>If AI can&#8217;t learn complex work from documentation alone and can&#8217;t learn it through trial and error (too expensive), there&#8217;s only one option left:</p><p>It has to learn from expert humans. But here&#8217;s the catch: expert knowledge is mostly tacit. Experts can&#8217;t fully articulate what they know.
They&#8217;ve internalized patterns through years of experience that they execute intuitively without conscious reasoning.</p><p>Remember Tom Brady? The same thing happens with your top sales rep, your best diagnostician, your master technician. They know more than they can explain.</p><p>This is where organizational learning becomes essential. Most sales enablement teaches what to say, but top performers succeed because they know what to notice.</p><p>To teach AI complex tasks, you first need to:</p><ol><li><p>Observe experts in action across hundreds or thousands of real situations</p></li><li><p>Identify which behaviors actually drive outcomes (causal analysis, not just correlation)</p></li><li><p>Formalize those patterns so they can be taught</p></li><li><p>Validate the patterns work when transferred to others</p></li><li><p>Refine continuously based on results</p></li></ol><p>Notice something? This is the exact same process needed to teach other humans.</p><div><hr></div><h2>The Sequencing Becomes Inevitable</h2><p>Once you see this, the sequencing becomes obvious. </p><p>To automate medical diagnosis, you first need to extract diagnostic expertise from expert physicians. But once you&#8217;ve extracted it, you might as well use it to make average physicians better while you&#8217;re building toward automation.</p><p>In enterprise sales, the same logic applies. You need to identify how top performers qualify, navigate politics, handle objections, create urgency. But those insights remain valuable as augmentation permanently, because high-stakes buyers prefer human counterparts.</p><p>When your top rep knows exactly when to involve the CFO or recognizes that a &#8220;pricing objection&#8221; is really a &#8220;we&#8217;re not convinced yet&#8221; objection, that expertise is worth millions.</p><p>Extract it. Formalize it. Scale it.</p><p>That&#8217;s not a bridge to automation.
That&#8217;s your competitive moat.</p><p>Your top performers are closing deals today using expertise the rest of your team doesn&#8217;t have. Every day you wait to extract and scale it, that knowledge is revenue left on the table.</p><div><hr></div><h2>What This Actually Means For Sales Leaders</h2><p>Sales leaders need to stop asking &#8220;should we augment or automate?&#8221; Instead, they need to recognize there are two distinct patterns:</p><p><strong>Pattern 1: Augmentation as the Bridge</strong></p><p>This applies to domains like software development, customer support, and some operations. Here AI is deployed to help humans learn from top performers in order to extract and formalize expertise and build the knowledge base. Generally, economic value is captured at every stage, with automation increasing gradually as trust and capability develop. This is a journey with a destination.</p><p><strong>Pattern 2: Augmentation as the Destination</strong></p><p>This applies to high-stakes domains like enterprise sales, strategic consulting, and complex negotiations. Here AI is deployed to help reps learn from your top 10% of performers, extract what makes them successful, and scale that expertise across your team. This is what will define your long-term competitive edge.</p><p><strong>The difference:</strong></p><p><strong>In Pattern 1, augmentation is training wheels.</strong></p><p><strong>In Pattern 2, augmentation is the data flywheel.</strong></p><div><hr></div><p><strong>For sales, the math is simple. </strong>Your top rep closes at 3x the team average. She reads buying committee dynamics, knows when to bring in solutions engineering, and anticipates procurement objections. That knowledge is worth millions&#8212;and it&#8217;s trapped in her head.</p><p>AI can observe what she does differently and coach the rest of your team in real time. That&#8217;s not a stepping stone.
That&#8217;s a sustainable competitive advantage.</p><div><hr></div><h2>What The Data Actually Shows For Sales Teams</h2><p>The pattern is playing out across industries, but sales provides the clearest evidence:</p><p>Tools that help reps prepare, handle objections, and navigate complex deals create measurable impact. But full automation? Autonomous agent tools work well for high-volume SDR prospecting and qualification. They struggle in complex enterprise sales, where deals involve multiple stakeholders, long cycles, and high-stakes negotiations.</p><p>The data backs this up. In recent enterprise pilots, including a <strong>major automotive manufacturer and a leading data analytics platform</strong>, we saw:</p><ul><li><p>30-70% improvements in key sales metrics. One pilot showed a 30% lift in win rates.</p></li><li><p>Another identified four critical intervention points, each delivering 50-70% improvement in specific behaviors like objection handling and multi-threading.</p></li></ul><p>These weren&#8217;t lab conditions. These were live deals over 6-9 month sales cycles, with deal sizes in the high six figures.</p><p>So why does augmentation work where automation struggles? Humans spent 200,000 years learning to judge people and about 50 years learning to judge complex technology. When you&#8217;re committing $2M over three years, your brain proxies the technology question through a people question: &#8220;Can I trust who&#8217;s selling me this?&#8221;</p><p>This isn&#8217;t a temporary limitation. It&#8217;s evolutionary wiring.</p><div><hr></div><h2>The Bottom Line For Your AI Strategy</h2><p>To formulate your AI strategy, you need to understand who your top performers are, what they do differently, how to observe those behaviors at scale, and how to identify what actually drives outcomes. Then you need to transfer that expertise and measure whether it improves results.</p><p>These aren&#8217;t software questions.
They&#8217;re organizational learning questions. And most &#8220;AI agent&#8221; roadmaps are really learning roadmaps pretending to be automation plans.</p><p>You can&#8217;t automate expertise you haven&#8217;t extracted and formalized. The extraction happens through observation, pattern identification, and validation. That process creates immediate value.</p><p>For sales leaders, this has clear implications. For instance:</p><ul><li><p>Your top performers&#8217; expertise is your real competitive advantage</p></li><li><p>Extracting and scaling that expertise drives immediate revenue (30-70% improvements)</p></li><li><p>You&#8217;re not in a race against automation; you&#8217;re building sustainable advantages</p></li></ul><p>The companies that move first build moats competitors can&#8217;t cross.</p><p>So the question isn&#8217;t whether to automate or augment. It&#8217;s whether you&#8217;re extracting expertise systematically and applying it intelligently. Sequential beats direct, not because it&#8217;s safer but because it&#8217;s the only path that actually works.</p><div><hr></div><p><em>At AmpUp, we&#8217;re proving this thesis in practice. Enterprise sales teams are seeing 30-70% improvements in win rates and deal velocity by extracting expertise from their top performers and scaling it across the team.</em></p><p><em>If you&#8217;re a sales leader wondering whether to augment or automate, the real question is simpler: What do your top 10% know that the rest of your team doesn&#8217;t?
And how fast can you transfer it?</em></p><p><em>If you&#8217;re ready to stop theorizing and start measuring, let&#8217;s talk.</em></p><p><em><strong><a href="https://www.ampup.ai/demo">Book a conversation</a> here!</strong></em></p>]]></content:encoded></item><item><title><![CDATA[Why Training Can't Fix This (And What Can)]]></title><description><![CDATA[In Part 2, we explore why most sales teams keep relearning the same lessons and how learning velocity, not headcount, becomes the real competitive advantage.]]></description><link>https://amit.ampup.ai/p/why-training-cant-fix-this-and-what</link><guid isPermaLink="false">https://amit.ampup.ai/p/why-training-cant-fix-this-and-what</guid><dc:creator><![CDATA[Amit Prakash]]></dc:creator><pubDate>Fri, 16 Jan 2026 16:17:32 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6450aed8-5504-4ea7-b7e6-80c4bccc3ac0_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This is Part 2 of a two-part series.
Read <strong><a href="https://amit.ampup.ai/p/the-invisible-11-second-moment-behind">Part 1: The Invisible 11-Second Moment Behind Most Stalled Deals</a></strong> first.</em></p><div><hr></div><h2>The Preparation Cliff</h2><p>We scored every call on preparation quality (1-5 scale).</p><p>Here&#8217;s what the data showed:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0v0H!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546e45ab-1ae8-4894-a11e-ebd967707816_1252x412.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0v0H!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546e45ab-1ae8-4894-a11e-ebd967707816_1252x412.png 424w, https://substackcdn.com/image/fetch/$s_!0v0H!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546e45ab-1ae8-4894-a11e-ebd967707816_1252x412.png 848w, https://substackcdn.com/image/fetch/$s_!0v0H!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546e45ab-1ae8-4894-a11e-ebd967707816_1252x412.png 1272w, https://substackcdn.com/image/fetch/$s_!0v0H!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546e45ab-1ae8-4894-a11e-ebd967707816_1252x412.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0v0H!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546e45ab-1ae8-4894-a11e-ebd967707816_1252x412.png" width="1252" height="412"
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/546e45ab-1ae8-4894-a11e-ebd967707816_1252x412.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:412,&quot;width&quot;:1252,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:48401,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://amit.ampup.ai/i/184781468?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546e45ab-1ae8-4894-a11e-ebd967707816_1252x412.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0v0H!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546e45ab-1ae8-4894-a11e-ebd967707816_1252x412.png 424w, https://substackcdn.com/image/fetch/$s_!0v0H!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546e45ab-1ae8-4894-a11e-ebd967707816_1252x412.png 848w, https://substackcdn.com/image/fetch/$s_!0v0H!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546e45ab-1ae8-4894-a11e-ebd967707816_1252x412.png 1272w, https://substackcdn.com/image/fetch/$s_!0v0H!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F546e45ab-1ae8-4894-a11e-ebd967707816_1252x412.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Excellent preparation (5/5) creates a 6.8x multiplier on deal outcomes.</p><p>But here&#8217;s the uncomfortable truth:</p><blockquote><p><em><strong>High-quality preparation is functionally impossible at scale.</strong></em></p></blockquote><p>To prepare well for one call takes 30-45 minutes.</p><ul><li><p>Research the tech stack</p></li><li><p>Map the competitive landscape</p></li><li><p>Anticipate objections</p></li></ul><p>When you&#8217;re running 6 calls a day, that math doesn&#8217;t work.</p><p><strong>6 calls &#215; 30-45 minutes = 180-270 minutes, i.e. 3-4.5 hours of prep alone.</strong> Plus the calls themselves.</p><p>So preparation becomes triage.</p><p>You skim the website.</p><p>You glance at LinkedIn.</p><p>You hope no surprise questions come up.</p><blockquote><p><em><strong>When prep takes 45 minutes, inconsistency isn&#8217;t a discipline problem. It&#8217;s a math problem.</strong></em></p></blockquote><p>And then they do.</p><p>Customer: &#8220;Have you worked with anyone in our space before?&#8221;</p><p>Rep: &#8220;I&#8217;m no real estate expert, but&#8230;&#8221;</p><p>The customer stops listening. Credibility drops instantly.</p><p>We found that in 40% of calls, the customer had to educate the rep on basic facts that 15 minutes of research would have covered.</p><p>This isn&#8217;t a discipline problem.</p><p><strong>It&#8217;s an infrastructure problem.</strong></p><div><hr></div><h2>What You Can Do (And Where It Still Breaks)</h2><p>The following actions materially improve outcomes. But they still depend on individual discipline and memory&#8212;which means they don&#8217;t scale reliably.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!DBJi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe13b0388-44ed-4aa9-b9d8-076365d7037b_1146x900.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DBJi!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe13b0388-44ed-4aa9-b9d8-076365d7037b_1146x900.png 424w, https://substackcdn.com/image/fetch/$s_!DBJi!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe13b0388-44ed-4aa9-b9d8-076365d7037b_1146x900.png 848w,
https://substackcdn.com/image/fetch/$s_!DBJi!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe13b0388-44ed-4aa9-b9d8-076365d7037b_1146x900.png 1272w, https://substackcdn.com/image/fetch/$s_!DBJi!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe13b0388-44ed-4aa9-b9d8-076365d7037b_1146x900.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DBJi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe13b0388-44ed-4aa9-b9d8-076365d7037b_1146x900.png" width="1146" height="900" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e13b0388-44ed-4aa9-b9d8-076365d7037b_1146x900.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:900,&quot;width&quot;:1146,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:145560,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://amit.ampup.ai/i/184781468?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe13b0388-44ed-4aa9-b9d8-076365d7037b_1146x900.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!DBJi!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe13b0388-44ed-4aa9-b9d8-076365d7037b_1146x900.png 424w, https://substackcdn.com/image/fetch/$s_!DBJi!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe13b0388-44ed-4aa9-b9d8-076365d7037b_1146x900.png 
848w, https://substackcdn.com/image/fetch/$s_!DBJi!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe13b0388-44ed-4aa9-b9d8-076365d7037b_1146x900.png 1272w, https://substackcdn.com/image/fetch/$s_!DBJi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe13b0388-44ed-4aa9-b9d8-076365d7037b_1146x900.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>These changes reduce friction and improve consistency.</p><p>But they still rely on reps remembering to do the right thing and 
on the best responses living in people&#8217;s heads.</p><p><strong>We don&#8217;t have a training problem. We have a retention problem.</strong></p><p>You can train your reps on these behaviors today. They&#8217;ll remember them for a week. By next month, the forgetting curve will wipe it out.</p><p>The scalable question isn&#8217;t how to train better&#8212;it&#8217;s how to stop relying on human memory entirely.</p><div><hr></div><h2>Sarah vs. Marcus</h2><p>Sarah closed $1.4M last year.</p><p>Marcus closed $425K. Same manager, same product.</p><p>The gap isn&#8217;t talent. It&#8217;s pattern recognition.</p><p>Sarah knows what supply chain VPs care about because she&#8217;s had 50 conversations with them. She knows what &#8220;we need to think about it&#8221; means. She can read the room because she&#8217;s been in this room 200 times.</p><p>Marcus has been in this room 8 times. He&#8217;s guessing.</p><p>The old answer was:</p><blockquote><p><em><strong>&#8220;Marcus needs more reps.&#8221;</strong></em></p></blockquote><p>True, but unscalable. Useless. You can&#8217;t compress 18 months of learning into a workshop.</p><p>Every sales organization already has its playbook. It&#8217;s just scattered across thousands of forgotten conversations.</p><p>The question isn&#8217;t whether that knowledge exists. It&#8217;s whether you can access it before the next call.</p><div><hr></div><h2>Here&#8217;s How It Works</h2><p><strong>[9:15 AM]</strong> Sarah closes a deal using a specific reframe with a logistics VP. The system recognizes the pattern. It tags the moment and isolates the winning variable.</p><p><strong>[2:30 PM]</strong> Marcus has a similar call with a different logistics VP. Instead of making him guess, the system surfaces: &#8220;Sarah faced this exact objection this morning. Here&#8217;s what worked. Practice it twice before your call.&#8221;</p><p>Marcus practices it.
He enters the call with Sarah&#8217;s experience in his back pocket.</p><p>The deal advances.</p><blockquote><p><em><strong>Sarah&#8217;s 18 months just became Marcus&#8217;s 45 minutes.</strong></em></p></blockquote><div><hr></div><h2>The Infrastructure Gap</h2><p>When we analyzed thousands of meetings across dozens of B2B companies, we found three systemic gaps that training alone cannot fix:</p><h4>84% &#8212; Preparation Failure</h4><p>Average prep score: 2.0/5. Only 5% achieve excellent prep. The math doesn&#8217;t work when good prep takes 30-45 minutes per call.</p><h4>77% &#8212; Objections Unresolved</h4><p>Only 780 of 3,300 objections fully resolved. 1,850 partially addressed, 680 dismissed. They don&#8217;t go away&#8212;they metastasize.</p><h4>41% &#8212; Knowledge Gap Damage</h4><p>Over 1,000 meetings where customers had to educate reps on their own industry. Credibility lost in seconds.</p><p><strong>These aren&#8217;t training problems. They&#8217;re infrastructure problems.</strong></p><div><hr></div><h2>Training vs Infrastructure</h2><p><strong>Training says:</strong> &#8220;Remember to ask about quantified pain.&#8221;</p><p><strong>Infrastructure says:</strong> &#8220;The last three deals in this vertical stalled because we didn&#8217;t quantify pain. Here&#8217;s the question that worked.&#8221;</p><div><hr></div><p><strong>Training says:</strong> &#8220;Prepare better.&#8221;</p><p><strong>Infrastructure says:</strong> &#8220;Here&#8217;s everything you need to know about this call, synthesized from 50 similar conversations, ready in 3 minutes.&#8221;</p><div><hr></div><p><strong>Training says:</strong> &#8220;Marcus will figure it out.&#8221;</p><p><strong>Infrastructure says:</strong> &#8220;Here&#8217;s what Sarah already knows.&#8221;</p><blockquote><p><em><strong>Training hopes. 
Infrastructure enables.</strong></em></p></blockquote><div><hr></div><h2>Organizational Learning Velocity</h2><blockquote><p><em><strong>The speed at which you can identify what&#8217;s working, prove why it&#8217;s working, and transfer it to everyone who needs it before the insight goes stale.</strong></em></p></blockquote><p>Most sales organizations optimize for hiring velocity&#8212;how fast can we add more Sarahs? But Sarah took 18 months to build her pattern recognition. You can&#8217;t hire fast enough to outrun that math.</p><p>Today&#8217;s competitive advantage doesn&#8217;t come from adding headcount. It goes to the teams that optimize for learning velocity.</p><p><strong>If you&#8217;re still running on quarterly training cycles and forecast reviews, you&#8217;re not competing anymore.</strong></p><p><strong>You&#8217;re just watching the gap widen.</strong></p><p>The gap between your best rep and everyone else isn&#8217;t about talent. It&#8217;s about accumulated pattern recognition that lives in Sarah&#8217;s head and nowhere else.</p><p>Until now, the only way to transfer that knowledge was hope. Hope Marcus watches the recordings.
Hope he absorbs the lessons.</p><p><strong>But hope doesn&#8217;t scale.</strong></p><p><strong>Systems do.</strong></p><div><hr></div><p>If you believe your sales team should compound its learning instead of relearning the same lessons every quarter, that&#8217;s exactly what we&#8217;re building at AmpUp.</p><p><strong><a href="https://ampup.ai/">See how we clone your best rep &#8594;</a></strong></p><p><strong><a href="https://www.linkedin.com/in/amit-prakash-50719a2/">Connect on LinkedIn &#8594;</a></strong></p><div><hr></div>]]></content:encoded></item><item><title><![CDATA[The Invisible 11-Second Moment Behind Most Stalled Deals]]></title><description><![CDATA[Why cognitive overload silently kills revenue before the call even begins]]></description><link>https://amit.ampup.ai/p/the-invisible-11-second-moment-behind</link><guid isPermaLink="false">https://amit.ampup.ai/p/the-invisible-11-second-moment-behind</guid><dc:creator><![CDATA[Amit Prakash]]></dc:creator><pubDate>Thu, 15 Jan 2026 17:34:04 GMT</pubDate><enclosure
url="https://substackcdn.com/image/fetch/$s_!72aF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13297688-c803-4ac2-b527-8682fcca3d09_1584x672.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!72aF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13297688-c803-4ac2-b527-8682fcca3d09_1584x672.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!72aF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13297688-c803-4ac2-b527-8682fcca3d09_1584x672.jpeg 424w, https://substackcdn.com/image/fetch/$s_!72aF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13297688-c803-4ac2-b527-8682fcca3d09_1584x672.jpeg 848w, https://substackcdn.com/image/fetch/$s_!72aF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13297688-c803-4ac2-b527-8682fcca3d09_1584x672.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!72aF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13297688-c803-4ac2-b527-8682fcca3d09_1584x672.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!72aF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13297688-c803-4ac2-b527-8682fcca3d09_1584x672.jpeg" width="1456" height="618" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/13297688-c803-4ac2-b527-8682fcca3d09_1584x672.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:618,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:640128,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://amit.ampup.ai/i/184676253?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13297688-c803-4ac2-b527-8682fcca3d09_1584x672.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!72aF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13297688-c803-4ac2-b527-8682fcca3d09_1584x672.jpeg 424w, https://substackcdn.com/image/fetch/$s_!72aF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13297688-c803-4ac2-b527-8682fcca3d09_1584x672.jpeg 848w, https://substackcdn.com/image/fetch/$s_!72aF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13297688-c803-4ac2-b527-8682fcca3d09_1584x672.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!72aF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13297688-c803-4ac2-b527-8682fcca3d09_1584x672.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>The Countdown Before The Call</h2><p>You have 18 minutes before the call.</p><p>You pull up LinkedIn. Three people joining.</p><ul><li><p>VP of Operations you&#8217;ve talked to twice.</p></li><li><p>Director of IT you&#8217;ve never met.</p></li><li><p>Someone from finance.</p></li></ul><p>You open their website. Skim the About page. They&#8217;re a mid-market manufacturer. They make... industrial components?</p><p>You&#8217;re not entirely sure.</p><p>You check your CRM. Last call notes:</p><p>&#8220;Strong interest. Wants to see demo of reporting features.&#8221;</p><h4><em>11 minutes now.</em></h4><p>You open your deck. Find the reporting slides. Think about what questions they might ask. Hope the IT director doesn't go deep on security. 
You handled that objection brilliantly in training six months ago, but right now you can&#8217;t remember if your data retention policy is 90 days or 180 days, or what exactly &#8220;encryption at rest&#8221; means to a non-technical buyer.</p><h4><em>3 minutes.</em></h4><p>You join the call. You&#8217;re not unprepared, exactly. You know your product. You&#8217;ve done this before. You&#8217;ll figure it out as you go.</p><p>And you do.</p><p>You demo the features. The conversation flows. The questions get answered. Everyone seems engaged; everything looks good.</p><p>You suggest a follow-up:</p><p>&#8220;Let&#8217;s touch base next week.&#8221;</p><p>They agree.</p><p>You hang up feeling okay. Not great, not terrible. Okay.</p><h4><em>Three weeks later.</em></h4><p>The deal is stuck with legal.</p><p>Four weeks after that, it&#8217;s dead.</p><p>You&#8217;ll never know exactly why.</p><p><strong>But I do.</strong></p><p>Because I&#8217;ve watched this exact pattern play out thousands of times across dozens of companies.</p><p>And what kills the deal isn&#8217;t what happened on the call.</p><blockquote><p><em><strong>It&#8217;s what didn&#8217;t happen in those 18 minutes before it.</strong></em></p></blockquote><div><hr></div><h2>The Invisible Patterns Most Sales Teams Miss</h2><p>We&#8217;ve been analyzing sales calls for four months. Thousands of conversations across dozens of B2B companies.</p><p>Every call transcribed. Every moment tagged. Every objection categorized.</p><p>I thought I knew what we&#8217;d find. I spent a decade at ThoughtSpot watching sales teams scale from zero to over $150M. I thought I understood why deals fail.</p><p>I was wrong.</p><p>Deals don&#8217;t die in pipeline reviews. They die in 11-second moments.</p><p>The customer says something. The rep hears it. But doesn&#8217;t process it.</p><p>And three weeks later, when the deal stalls, nobody can trace it back to that moment.</p><p>Until you can.
Until you&#8217;ve tagged hundreds of instances of the same mistake across different reps, different calls, different customers.</p><p>Here&#8217;s what we found: </p><blockquote><p><em><strong>Deals die not from the things reps do wrong, but from the things they walk right past.</strong></em></p></blockquote><div><hr></div><h2>The $2 Million Nobody Heard</h2><h4>A Real Call, A Missed Signal</h4><p>Call #47. Enterprise SaaS deal. Third meeting.</p><p>Eleven minutes in, the customer says:</p><p>&#8220;Yeah, we&#8217;re probably losing about two million a year to this inefficiency. Maybe more.&#8221;</p><p>Two million. They just handed the rep the number that makes price irrelevant.</p><p>The rep responds:</p><p>&#8220;Got it. Let me show you how the platform works.&#8221;</p><p>Then they proceed to demo features for the next twenty-three minutes.</p><p>Later, when pricing comes up and the customer says &#8220;That seems expensive,&#8221; the rep crumbles and immediately talks about discounts instead of value.</p><p>Why?</p><p>It&#8217;s not incompetence. <strong>It&#8217;s cognitive load.</strong></p><p>In a live call, the human brain is juggling the demo, the clock, three video feeds, and the next objection.</p><p>Working memory has roughly four slots. By the time &#8216;2M&#8217; arrives, those slots are already full.</p><blockquote><p><em><strong>The rep heard the signal. The brain just didn&#8217;t save it.</strong></em></p></blockquote><p>The signal vanished.</p><div><hr></div><h2>What the Rep Should Have Said Instead</h2><p>When the customer said:</p><p>&#8220;We are probably losing about two million a year to this inefficiency. Maybe more.&#8221;</p><p>The right response wasn&#8217;t a demo.</p><p>It was a pause, followed by a prepared response.</p><p>Here&#8217;s what that moment could have sounded like:</p><p><strong>&#8220;Did you say you&#8217;re losing roughly $2M per year? How are you calculating that number? 
If we could eliminate even half of that, what would that mean for your team?&#8221;</strong></p><p>Instead of defending price later, the rep would be comparing against a $2M problem and not a line item in a budget.</p><p>This is what shifts the conversation from features to impact.</p><blockquote><p><em><strong>That 11-second moment changes the entire trajectory of the deal.</strong></em></p></blockquote><div><hr></div><h2>Why This Works</h2><p>This isn&#8217;t about being clever in the moment.</p><p>It&#8217;s about freeing up enough mental bandwidth to recognize the signal when it appears and having a simple way to capture it before it disappears.</p><p>This is where preparation matters. Not because it&#8217;s &#8220;nice to have,&#8221; but because preparation frees up working memory before the call starts.</p><p>If the sales rep had known the customer&#8217;s likely pain points beforehand, they wouldn&#8217;t have been scrambling.</p><p>They would have had the mental bandwidth to hear the signal and lock it in.</p><blockquote><p><em><strong>But they didn&#8217;t. Because in modern sales, preparation is broken.</strong></em></p></blockquote><div><hr></div><p><strong>Coming Up Next:</strong> </p><p><strong>Part 2: Why Training Can&#8217;t Fix This (And What Can)</strong></p><p>Discover why even the best training programs can&#8217;t solve systemic infrastructure problems&#8212;and what actually works to clone your top performers.</p><div><hr></div>]]></content:encoded></item><item><title><![CDATA[Context is All You Need: The AI 2025 Story]]></title><description><![CDATA[2025 wasn't the year AI models got dramatically better. It was the year we learned how to use them properly. Models are increasingly commoditized&#8212;context prompts, retrieval, memory, reasoning, and too]]></description><link>https://amit.ampup.ai/p/context-is-all-you-need-the-ai-2025</link><guid isPermaLink="false">https://amit.ampup.ai/p/context-is-all-you-need-the-ai-2025</guid><dc:creator><![CDATA[Amit Prakash]]></dc:creator><pubDate>Tue, 30 Dec 2025 16:27:21 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7f93910b-6eda-460d-a040-5107cfd446bb_1100x220.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>TL;DR</strong></h2><p>2025 was not just the year when AI models got dramatically better; it was also the year we learned how to use them properly.</p><ul><li><p>Models are increasingly commoditized</p></li><li><p>Context (prompts, retrieval, memory, reasoning, tools) became the real leverage</p></li><li><p>Reasoning models, agents, and memory systems are all ways of generating better context</p></li><li><p>Fine-tuning fell out of favor; context engineering outperformed it</p></li><li><p>Sales is one of the hardest real-time context problems, which makes it the right proving ground</p></li><li><p><strong><a href="https://ampup.ai/">AmpUp.ai </a></strong>is fundamentally a context engineering platform, not a voice or agent 
company</p></li></ul><div><hr></div><h2><strong>Introduction</strong></h2><p>Three years ago, I wrote a piece <strong><a href="https://amit.thoughtspot.com/p/what-is-chatgpt-and-how-does-it-work">demystifying the conceptual stack behind ChatGPT </a></strong>. The response revealed something important: a massive group of practitioners was hungry for first-principles explanations of the shifts driving our industry.</p><p>This piece has a different objective.</p><p>It&#8217;s about making sense of what happened in AI over the last twelve months, a year that fundamentally shifted where the action is.</p><p>If I had to summarize 2025 in one sentence:</p><blockquote><p><em><strong>We stopped obsessing over the model and started obsessing over context.</strong></em></p></blockquote><p>If you&#8217;re still thinking about AI primarily in terms of models, you&#8217;re already behind. The teams that trained the best models in 2023 are not the same teams winning in 2025.</p><p>The skill that mattered then (training) is not the skill that matters now (orchestration).</p><p>This piece explains:</p><ul><li><p>why that shift happened,</p></li><li><p>what replaced the old paradigm, and</p></li><li><p>what it means for anyone building with AI.</p></li></ul><p>To understand why context became the bottleneck, we need to start with the moment models stopped being scarce.</p><div><hr></div><h2><strong>The DeepSeek Wake-Up Call</strong></h2><p>January 2025. A Chinese AI lab called DeepSeek releases a model that matches frontier performance at a fraction of the assumed cost. The AI community had internalized a narrative: competitive models required billions of dollars, massive GPU clusters, resources exclusive to a handful of companies.</p><p>DeepSeek broke that narrative.</p><p>The key papers are <strong><a href="https://arxiv.org/abs/2405.04434">DeepSeek-V2 </a></strong>(May 2024) and <strong><a href="https://arxiv.org/abs/2412.19437">DeepSeek-V3 </a></strong>(December 2024). 
The technical ideas illustrate a broader point: cleverness can substitute for brute force.</p><p>Two ideas mattered:</p><ul><li><p><strong>Multi-Head Latent Attention (MLA)</strong> attacked the memory bottleneck by compressing key/value vectors into a lower-dimensional latent space, learning during training what information is essential to preserve. If the original KV vectors have dimension d and you compress to dimension c where <code>c &lt;&lt; d</code>, you reduce memory by a factor of d/c. They reported 90%+ compression while maintaining quality.</p></li><li><p><strong>Mixture of Experts (MoE)</strong> attacked the compute bottleneck by activating only a subset of parameters per token. <strong><a href="https://arxiv.org/abs/2412.19437">DeepSeek-V3 </a></strong>has 671B parameters, but uses only ~37B per token at inference.</p></li></ul><p>The combination meant DeepSeek trained competitive models for under $6 million.</p><p>This raised an uncomfortable question: <strong>if the model itself is commoditizing, what actually matters?</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Fays!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44fd3694-bbdb-498b-a54e-5e20f9166007_1258x484.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Fays!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44fd3694-bbdb-498b-a54e-5e20f9166007_1258x484.png 424w, 
https://substackcdn.com/image/fetch/$s_!Fays!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44fd3694-bbdb-498b-a54e-5e20f9166007_1258x484.png 848w, https://substackcdn.com/image/fetch/$s_!Fays!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44fd3694-bbdb-498b-a54e-5e20f9166007_1258x484.png 1272w, https://substackcdn.com/image/fetch/$s_!Fays!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44fd3694-bbdb-498b-a54e-5e20f9166007_1258x484.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Fays!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44fd3694-bbdb-498b-a54e-5e20f9166007_1258x484.png" width="1258" height="484" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/44fd3694-bbdb-498b-a54e-5e20f9166007_1258x484.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:484,&quot;width&quot;:1258,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:344993,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://amit.ampup.ai/i/182943538?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44fd3694-bbdb-498b-a54e-5e20f9166007_1258x484.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Fays!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44fd3694-bbdb-498b-a54e-5e20f9166007_1258x484.png 
424w, https://substackcdn.com/image/fetch/$s_!Fays!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44fd3694-bbdb-498b-a54e-5e20f9166007_1258x484.png 848w, https://substackcdn.com/image/fetch/$s_!Fays!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44fd3694-bbdb-498b-a54e-5e20f9166007_1258x484.png 1272w, https://substackcdn.com/image/fetch/$s_!Fays!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F44fd3694-bbdb-498b-a54e-5e20f9166007_1258x484.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CxNh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d12d7af-c618-424e-96cd-00a6f53fb681_630x559.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CxNh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d12d7af-c618-424e-96cd-00a6f53fb681_630x559.png 424w, https://substackcdn.com/image/fetch/$s_!CxNh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d12d7af-c618-424e-96cd-00a6f53fb681_630x559.png 848w, https://substackcdn.com/image/fetch/$s_!CxNh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d12d7af-c618-424e-96cd-00a6f53fb681_630x559.png 1272w, https://substackcdn.com/image/fetch/$s_!CxNh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d12d7af-c618-424e-96cd-00a6f53fb681_630x559.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CxNh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d12d7af-c618-424e-96cd-00a6f53fb681_630x559.png" width="630" height="559" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4d12d7af-c618-424e-96cd-00a6f53fb681_630x559.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:559,&quot;width&quot;:630,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:83827,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://amit.ampup.ai/i/182943538?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d12d7af-c618-424e-96cd-00a6f53fb681_630x559.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CxNh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d12d7af-c618-424e-96cd-00a6f53fb681_630x559.png 424w, https://substackcdn.com/image/fetch/$s_!CxNh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d12d7af-c618-424e-96cd-00a6f53fb681_630x559.png 848w, https://substackcdn.com/image/fetch/$s_!CxNh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d12d7af-c618-424e-96cd-00a6f53fb681_630x559.png 1272w, https://substackcdn.com/image/fetch/$s_!CxNh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d12d7af-c618-424e-96cd-00a6f53fb681_630x559.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 
20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The model stays the same. 
The context gets better.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NJWt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffbda7ea-820f-45c7-9f41-575904a6e94c_1230x396.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NJWt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffbda7ea-820f-45c7-9f41-575904a6e94c_1230x396.png 424w, https://substackcdn.com/image/fetch/$s_!NJWt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffbda7ea-820f-45c7-9f41-575904a6e94c_1230x396.png 848w, https://substackcdn.com/image/fetch/$s_!NJWt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffbda7ea-820f-45c7-9f41-575904a6e94c_1230x396.png 1272w, https://substackcdn.com/image/fetch/$s_!NJWt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffbda7ea-820f-45c7-9f41-575904a6e94c_1230x396.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NJWt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffbda7ea-820f-45c7-9f41-575904a6e94c_1230x396.png" width="1230" height="396" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ffbda7ea-820f-45c7-9f41-575904a6e94c_1230x396.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:396,&quot;width&quot;:1230,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:61990,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://amit.ampup.ai/i/182943538?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffbda7ea-820f-45c7-9f41-575904a6e94c_1230x396.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NJWt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffbda7ea-820f-45c7-9f41-575904a6e94c_1230x396.png 424w, https://substackcdn.com/image/fetch/$s_!NJWt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffbda7ea-820f-45c7-9f41-575904a6e94c_1230x396.png 848w, https://substackcdn.com/image/fetch/$s_!NJWt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffbda7ea-820f-45c7-9f41-575904a6e94c_1230x396.png 1272w, https://substackcdn.com/image/fetch/$s_!NJWt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffbda7ea-820f-45c7-9f41-575904a6e94c_1230x396.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Most teams are still at Level 2. The frontier has moved.</p><p><em>Once models commoditized, the real innovation shifted to everything wrapped around them.</em></p><div><hr></div><h2><strong>Breakthroughs of 2025</strong></h2><h3><strong>1. Reasoning Models: The Model Talking to Itself</strong></h3><p>The first major theme of 2025: reasoning models. OpenAI&#8217;s o1 arrived in late 2024. DeepSeek-R1 followed. 
Anthropic, Google, and others shipped their own.</p><p>The core idea: instead of outputting an answer immediately, the model generates a chain of reasoning first, then produces a final response based on that reasoning.</p><p>The technical roots of this idea stretch back several years but came together decisively in 2025.</p><ul><li><p><strong>Chain-of-Thought Prompting</strong> (Wei et al., 2022) showed that explicitly asking models to reason step by step dramatically improved performance on multi-step tasks.</p></li><li><p><strong>Let&#8217;s Verify Step by Step</strong> (Lightman et al., 2023) demonstrated that training models to check intermediate steps increased reliability.</p></li><li><p><strong>Scaling LLM Test-Time Compute</strong> (Snell et al., 2024) formalized a new tradeoff: model capability depends not just on model size, but on how much computation you allow at inference time.</p></li></ul><p>Here&#8217;s what took me a while to appreciate: reasoning is just the model generating its own context.</p><blockquote><p><em><strong>Reasoning models made LLMs appear smarter by making them patient.</strong></em></p></blockquote><p>When you ask a reasoning model a hard question, it writes itself a scratchpad. It breaks down the problem, considers approaches, catches its own errors. That scratchpad becomes context for generating the final output.</p><p>Consider a math problem: &#8220;A store sells apples for $2 each and oranges for $3 each. If I buy 5 pieces of fruit and spend $12, how many of each did I buy?&#8221;</p><p>A standard model might pattern-match and guess. 
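</p><p>As a sanity check (my own illustration, not from the post), the problem has exactly one solution, and a few lines of brute-force search confirm it:</p>

```python
# Brute-force check of the fruit problem: 5 pieces of fruit, $12 total,
# apples cost $2 each and oranges cost $3 each.
solutions = [
    (apples, 5 - apples)                    # (apples, oranges)
    for apples in range(6)
    if 2 * apples + 3 * (5 - apples) == 12  # total-price constraint
]
print(solutions)  # -> [(3, 2)]: 3 apples and 2 oranges
```

<p>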
A reasoning model generates:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vhb_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbef5b3a7-3bef-41ef-af5c-af793efcb30d_1234x498.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!vhb_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbef5b3a7-3bef-41ef-af5c-af793efcb30d_1234x498.png 424w, https://substackcdn.com/image/fetch/$s_!vhb_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbef5b3a7-3bef-41ef-af5c-af793efcb30d_1234x498.png 848w, https://substackcdn.com/image/fetch/$s_!vhb_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbef5b3a7-3bef-41ef-af5c-af793efcb30d_1234x498.png 1272w, https://substackcdn.com/image/fetch/$s_!vhb_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbef5b3a7-3bef-41ef-af5c-af793efcb30d_1234x498.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!vhb_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbef5b3a7-3bef-41ef-af5c-af793efcb30d_1234x498.png" width="1234" height="498" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bef5b3a7-3bef-41ef-af5c-af793efcb30d_1234x498.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:498,&quot;width&quot;:1234,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:81757,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://amit.ampup.ai/i/182943538?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbef5b3a7-3bef-41ef-af5c-af793efcb30d_1234x498.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!vhb_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbef5b3a7-3bef-41ef-af5c-af793efcb30d_1234x498.png 424w, https://substackcdn.com/image/fetch/$s_!vhb_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbef5b3a7-3bef-41ef-af5c-af793efcb30d_1234x498.png 848w, https://substackcdn.com/image/fetch/$s_!vhb_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbef5b3a7-3bef-41ef-af5c-af793efcb30d_1234x498.png 1272w, https://substackcdn.com/image/fetch/$s_!vhb_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbef5b3a7-3bef-41ef-af5c-af793efcb30d_1234x498.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Each line becomes context for the next. The model is having a conversation with itself, building up the information it needs.</p><p>This introduced test-time compute as a scaling axis.</p><blockquote><p><em><strong>Traditional ML says: to make a model smarter, train a larger model. Reasoning models suggest: let the model think longer.</strong></em></p></blockquote><p>For some problems, a smaller model reasoning 10x longer beats a 10x larger model that answers immediately. Model capability isn&#8217;t fixed. It&#8217;s partially a function of thinking time.</p><p>Reasoning models are not magic. On tasks where the correct abstraction is unknown, or where verification is impossible, longer chains often make things worse, not better. The model can reason confidently toward a wrong answer. 
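</p><p>To make the 10x tradeoff concrete, here is a back-of-envelope sketch (my own illustration; real inference costs depend on architecture, attention, and caching): generation compute scales roughly with active parameter count times tokens generated, so a 10x smaller model can afford roughly 10x more reasoning tokens on the same budget.</p>

```python
# Rough cost model: FLOPs per generated token ~ 2 * active_params.
# This ignores attention's quadratic term and KV-cache effects.
def generation_flops(active_params, tokens):
    return 2 * active_params * tokens

big = generation_flops(active_params=700e9, tokens=500)    # large model, quick answer
small = generation_flops(active_params=70e9, tokens=5000)  # 10x smaller, 10x more thinking
print(small / big)  # -> 1.0: same compute budget, spent on thinking instead of size
```

<p>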
</p><p>But for problems with checkable structure, the gains are real.</p><h3><strong>2. Training Reasoning: The Art of Building Environments</strong></h3><p>How do you train a model to reason well?</p><p>You can&#8217;t just show it examples of good reasoning. The model might mimic surface patterns without learning to think. And for many reasoning tasks, there&#8217;s no existing corpus of step-by-step solutions.</p><p>The answer that emerged, particularly in DeepSeek&#8217;s work, was reinforcement learning with a clever trick: focus on domains where you can verify the answer.</p><p>DeepSeek&#8217;s R1 model trained primarily on math and code. The reason is straightforward:</p><ul><li><p>In math, you can verify the solution.</p></li><li><p>In code, you can run the program.</p></li></ul><p>The training loop: the model generates a reasoning chain and an answer. Correct answers reinforce the reasoning. Wrong answers discourage it. Over iterations, the model learns what kinds of reasoning lead to correct answers.</p><p>What emerged without being explicitly programmed was fascinating. The model learned to backtrack at dead ends. It learned to verify its own work. It learned to try alternative approaches. </p><p>These behaviors weren&#8217;t hard-coded. 
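</p><p>That training loop can be sketched in toy form (my own illustration with a made-up &#8220;model&#8221; and verifier, not DeepSeek&#8217;s actual pipeline): nothing rewards the reasoning style directly, only the verified final answer, yet the policy drifts toward reasoning.</p>

```python
import random

random.seed(0)

def solve(strategy, a, b):
    """Toy 'model': one strategy actually computes, the other guesses."""
    if strategy == "reason step by step":
        return a + b                  # careful reasoning gets it right
    return random.randint(0, 20)      # pattern-match guess

# Policy = preference weights over strategies (stand-in for model parameters).
weights = {"guess": 1.0, "reason step by step": 1.0}

for _ in range(200):
    a, b = random.randint(0, 9), random.randint(0, 9)
    strategy = random.choices(list(weights), weights=weights.values())[0]
    reward = 1.0 if solve(strategy, a, b) == a + b else -1.0  # verify the answer only
    weights[strategy] = max(0.1, weights[strategy] + 0.1 * reward)

# The verifiable reward alone pushes the policy toward reasoning.
print(max(weights, key=weights.get))  # -> "reason step by step"
```

<p>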
They arose purely from the reinforcement signal.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mbkc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2fd936-4a18-48e8-905b-ae204954e206_491x303.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mbkc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2fd936-4a18-48e8-905b-ae204954e206_491x303.png 424w, https://substackcdn.com/image/fetch/$s_!mbkc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2fd936-4a18-48e8-905b-ae204954e206_491x303.png 848w, https://substackcdn.com/image/fetch/$s_!mbkc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2fd936-4a18-48e8-905b-ae204954e206_491x303.png 1272w, https://substackcdn.com/image/fetch/$s_!mbkc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2fd936-4a18-48e8-905b-ae204954e206_491x303.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mbkc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2fd936-4a18-48e8-905b-ae204954e206_491x303.png" width="491" height="303" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/eb2fd936-4a18-48e8-905b-ae204954e206_491x303.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:303,&quot;width&quot;:491,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:52835,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://amit.ampup.ai/i/182943538?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2fd936-4a18-48e8-905b-ae204954e206_491x303.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!mbkc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2fd936-4a18-48e8-905b-ae204954e206_491x303.png 424w, https://substackcdn.com/image/fetch/$s_!mbkc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2fd936-4a18-48e8-905b-ae204954e206_491x303.png 848w, https://substackcdn.com/image/fetch/$s_!mbkc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2fd936-4a18-48e8-905b-ae204954e206_491x303.png 1272w, https://substackcdn.com/image/fetch/$s_!mbkc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2fd936-4a18-48e8-905b-ae204954e206_491x303.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 
20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button></div></div></div></a></figure></div><p>Surprisingly, the models trained on code and math got better at everything. The skills required to solve a math problem are the same skills required for complex instruction following in any domain: parse the request precisely, break it into steps, execute without skipping, check your work, correct errors. </p><p>A model that learns to work through a multi-step proof has also learned to work through a multi-step business analysis. The reasoning transfers.</p><p>This has a striking implication: RL training on verifiable domains may reduce the need for task-specific fine-tuning. If you want better legal analysis, you might not need legal data. </p><p>You need a model trained to reason carefully, then provide legal context. The reasoning is general; only the context is specific.</p><p>This connects directly to why context engineering became so powerful. 
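</p><p>Stepping back, the verifiable-reward loop that powers all of this fits in a few lines (the chains below are invented; this is a schematic, not any lab&#8217;s actual trainer):</p>

```python
def verify_math(answer, expected):
    # In math, the final answer can be checked exactly.
    return answer == expected

def reward_chains(chains, expected):
    # Each (reasoning, answer) pair earns reward 1.0 only if its answer
    # verifies; the reasoning itself is never graded directly.
    return [(reasoning, 1.0 if verify_math(answer, expected) else 0.0)
            for reasoning, answer in chains]

# Two invented chains for 7 * 8 + 2: one with an arithmetic slip, one correct.
chains = [
    ("7 * 8 = 54, so 54 + 2 = 56", 56),
    ("7 * 8 = 56, so 56 + 2 = 58", 58),
]
rewards = reward_chains(chains, expected=58)
```

<p>Notice that nothing grades the intermediate steps; behaviors like backtracking pay off only because they lead to answers that verify.</p><p>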
When base models improved at following complex instructions, the leverage shifted. </p><p>You no longer needed to modify weights for domain-specific behavior. You just needed domain-specific context.</p><p>The limitation is real: this works for domains with checkable answers, less well for subjective tasks. Writing, analysis, creative work: you can&#8217;t just run a verifier.</p><p>For these, researchers turned to Constitutional AI and RLAIF, where another model provides feedback. Promising but messier.</p><h3><strong>3. Memory: Context That Persists</strong></h3><p>Language models historically suffered from amnesia. Every conversation started fresh.</p><p>2025 changed that.</p><p>OpenAI and Google shipped memory for ChatGPT and Gemini. These systems remember facts across conversations: your preferences, your context. For millions of users, AI assistants became meaningfully personal.</p><p><strong>MemGPT</strong> (Packer et al., 2023) proposed treating memory like an operating system: fast working memory (the context window) plus slower archival memory (external storage). The system learns to move information between layers, loading relevant memories when needed.</p><p><strong>Cognitive Architectures for Language Agents</strong> (Sumers et al., 2024) organized memory into types: episodic (what happened), semantic (what things mean), procedural (how to do things).</p><p>Anthropic&#8217;s Claude Skills took this further: users define reusable instructions that apply across conversations. You specify once (&#8220;use this coding style&#8221;) and it applies automatically. Same weights, different context, different behavior.</p><blockquote><p><em><strong>The weights stayed constant while the context evolved entirely.</strong></em></p></blockquote><p>Memory is context engineering extended across time: the context persists while the model stays unchanged.</p><h3><strong>4. 
Agents: Context From the World</strong></h3><p>If reasoning is self-generated context and memory is persistent context, agents represent context gathered from the world.</p><p>An agent takes actions: searches the web, runs code, calls APIs. Each action produces information that becomes context for the next step.</p><p><strong>ReAct</strong> (Yao et al., 2022) established the pattern: alternate between thinking and acting. Here&#8217;s a simplified example:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!WE6k!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9eee66e-3974-41a9-96b2-44cecaf24c7a_498x322.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!WE6k!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9eee66e-3974-41a9-96b2-44cecaf24c7a_498x322.png 424w, https://substackcdn.com/image/fetch/$s_!WE6k!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9eee66e-3974-41a9-96b2-44cecaf24c7a_498x322.png 848w, https://substackcdn.com/image/fetch/$s_!WE6k!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9eee66e-3974-41a9-96b2-44cecaf24c7a_498x322.png 1272w, https://substackcdn.com/image/fetch/$s_!WE6k!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9eee66e-3974-41a9-96b2-44cecaf24c7a_498x322.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!WE6k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9eee66e-3974-41a9-96b2-44cecaf24c7a_498x322.png" width="498" height="322" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a9eee66e-3974-41a9-96b2-44cecaf24c7a_498x322.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:322,&quot;width&quot;:498,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:49497,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://amit.ampup.ai/i/182943538?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9eee66e-3974-41a9-96b2-44cecaf24c7a_498x322.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!WE6k!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9eee66e-3974-41a9-96b2-44cecaf24c7a_498x322.png 424w, https://substackcdn.com/image/fetch/$s_!WE6k!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9eee66e-3974-41a9-96b2-44cecaf24c7a_498x322.png 848w, https://substackcdn.com/image/fetch/$s_!WE6k!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9eee66e-3974-41a9-96b2-44cecaf24c7a_498x322.png 1272w, https://substackcdn.com/image/fetch/$s_!WE6k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9eee66e-3974-41a9-96b2-44cecaf24c7a_498x322.png 1456w" sizes="100vw" 
loading="lazy"></picture></div></a></figure></div><p>The model builds context through interaction with tools. Each action fills in information it didn&#8217;t have.</p><p>In 2025, agents became both more capable and more sobering. Anthropic released Claude with computer use. OpenAI shipped similar capabilities. </p><p>But <strong>Language Agents as Hackers</strong> (Yang et al., 2024) showed that even capable agents failed on multi-step tasks requiring genuine problem-solving. 
They could use tools but lacked strategic thinking over long horizons.</p><p>The pattern: agents work well for tasks with clear steps, recoverable errors, frequent feedback. They struggle with long-term planning, irreversible actions, ambiguous goals. Most &#8220;agent failures&#8221; are actually context failures&#8212;the agent had the wrong information at the wrong time.</p><div><hr></div><h2><strong>Deep Research: Agents That Do Your Homework</strong></h2><p>The most ambitious agent applications fell under &#8220;Deep Research&#8221;: systems that conduct multi-step research autonomously, producing comprehensive reports.</p><p>OpenAI&#8217;s Deep Research, Google&#8217;s Gemini Deep Research, Perplexity, Grok DeepSearch all shipped in 2025. These aren&#8217;t simple RAG. They plan strategies, adapt based on findings, and orchestrate complex workflows.</p><p>Deep Research Architecture:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!uod3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4779204-61d8-4845-88ea-fac956f4879e_1218x562.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!uod3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4779204-61d8-4845-88ea-fac956f4879e_1218x562.png 424w, https://substackcdn.com/image/fetch/$s_!uod3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4779204-61d8-4845-88ea-fac956f4879e_1218x562.png 848w, 
https://substackcdn.com/image/fetch/$s_!uod3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4779204-61d8-4845-88ea-fac956f4879e_1218x562.png 1272w, https://substackcdn.com/image/fetch/$s_!uod3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4779204-61d8-4845-88ea-fac956f4879e_1218x562.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!uod3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4779204-61d8-4845-88ea-fac956f4879e_1218x562.png" width="1218" height="562" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c4779204-61d8-4845-88ea-fac956f4879e_1218x562.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:562,&quot;width&quot;:1218,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:111518,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://amit.ampup.ai/i/182943538?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4779204-61d8-4845-88ea-fac956f4879e_1218x562.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!uod3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4779204-61d8-4845-88ea-fac956f4879e_1218x562.png 424w, https://substackcdn.com/image/fetch/$s_!uod3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4779204-61d8-4845-88ea-fac956f4879e_1218x562.png 
848w, https://substackcdn.com/image/fetch/$s_!uod3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4779204-61d8-4845-88ea-fac956f4879e_1218x562.png 1272w, https://substackcdn.com/image/fetch/$s_!uod3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4779204-61d8-4845-88ea-fac956f4879e_1218x562.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The Model Context Protocol (MCP) emerged as a standard for agents to discover and invoke tools dynamically. 
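</p><p>MCP is built on JSON-RPC; here is a minimal sketch of its two core requests (the tool name and arguments below are hypothetical, and this shows message shape only, not a full client):</p>

```python
import json

# Shape of the two core MCP exchanges: tools/list discovers what a server
# offers, tools/call invokes one tool by name with JSON arguments.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "web_search",                       # hypothetical tool name
        "arguments": {"query": "MCP tool discovery"},
    },
}

wire = json.dumps(call_request)  # what actually travels over the transport
```

<p>Because the tool list is fetched at runtime instead of being baked into a prompt, the same agent can work against any server that speaks the protocol.</p><p>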
The best systems use RL to train agents end-to-end on output quality, not just individual steps.</p><p>For information-intensive tasks with clear objectives, autonomous research agents crossed into usefulness. They struggle with authentication, rapidly changing information, and tasks requiring genuine expertise. But they&#8217;ve automated information aggregation.</p><div><hr></div><h2><strong>The End of Naive Fine-Tuning</strong></h2><p>In 2025, fine-tuning fell out of favor for frontier models. For most teams, it&#8217;s now a last resort, not a default.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MxSL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63b26e9e-5b69-4312-a92b-fdd4ea74da7e_1210x258.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MxSL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63b26e9e-5b69-4312-a92b-fdd4ea74da7e_1210x258.png 424w, https://substackcdn.com/image/fetch/$s_!MxSL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63b26e9e-5b69-4312-a92b-fdd4ea74da7e_1210x258.png 848w, https://substackcdn.com/image/fetch/$s_!MxSL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63b26e9e-5b69-4312-a92b-fdd4ea74da7e_1210x258.png 1272w, https://substackcdn.com/image/fetch/$s_!MxSL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63b26e9e-5b69-4312-a92b-fdd4ea74da7e_1210x258.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!MxSL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63b26e9e-5b69-4312-a92b-fdd4ea74da7e_1210x258.png" width="1210" height="258" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/63b26e9e-5b69-4312-a92b-fdd4ea74da7e_1210x258.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:258,&quot;width&quot;:1210,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:157346,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://amit.ampup.ai/i/182943538?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63b26e9e-5b69-4312-a92b-fdd4ea74da7e_1210x258.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MxSL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63b26e9e-5b69-4312-a92b-fdd4ea74da7e_1210x258.png 424w, https://substackcdn.com/image/fetch/$s_!MxSL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63b26e9e-5b69-4312-a92b-fdd4ea74da7e_1210x258.png 848w, https://substackcdn.com/image/fetch/$s_!MxSL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63b26e9e-5b69-4312-a92b-fdd4ea74da7e_1210x258.png 1272w, https://substackcdn.com/image/fetch/$s_!MxSL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F63b26e9e-5b69-4312-a92b-fdd4ea74da7e_1210x258.png 1456w" sizes="100vw" 
loading="lazy"></picture><div></div></div></a></figure></div><p>Several things drove this.</p><p>First, frontier models got good enough that fine-tuning often degraded general capabilities.</p><p>Second, models got so large that fine-tuning became expensive and slow.</p><p>Third, context windows expanded from 4K to 128K, even 1M+. Instruction-following improved dramatically. If you can describe the task in context, you rarely need to fine-tune.</p><p>For most teams, the better investment is context engineering infrastructure. The ROI is higher, and it transfers when you upgrade base models.</p><p>Anthropic illustrates this directly. Rather than fine-tuning Claude for every task, they built a &#8220;skills&#8221; system: curated instructions for specific domains loaded into context when relevant. Same weights, different context, different behavior.</p><blockquote><p><em><strong>Nothing changed in the weights. Everything changed in the context.</strong></em></p></blockquote><div><hr></div><h2><strong>Voice: The Year It Actually Worked</strong></h2><p>Voice serves as the delivery layer for context engineering. But delivery matters: context that arrives 2 seconds late is useless. And voice captures richer input than text &#8212; tone, hesitation, the rambling that reveals what someone actually thinks.</p><p>Context engineering gets the right information to the model; voice gets the right information to and from the human, fast enough to matter.</p><p>Voice interfaces became genuinely conversational in 2025. The target: under 500ms from when the user stops speaking to when they hear a response.</p><h5><strong>Two Architectures</strong></h5><p>Most voice agents use a pipeline: speech-to-text, then LLM, then text-to-speech. The alternative is end-to-end: audio in, audio out. GPT-4o was the breakthrough. Kyutai&#8217;s Moshi showed this wasn&#8217;t limited to closed models.</p><p>Both have tradeoffs. 
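</p><p>The pipeline version reduces to three function hops. A stub sketch (every component below is a stand-in for a real model):</p>

```python
# STT -> LLM -> TTS, with stubs standing in for the real models.
def speech_to_text(audio: bytes) -> str:
    return "what's the weather"           # stand-in for an ASR model

def llm_respond(text: str) -> str:
    return f"You asked: {text}"           # stand-in for the language model

def text_to_speech(text: str) -> bytes:
    return text.encode("utf-8")           # stand-in for a TTS model

def handle_turn(audio: bytes) -> bytes:
    # Each stage's output feeds the next; every hop adds latency, which is
    # why production systems stream partial results between the stages.
    return text_to_speech(llm_respond(speech_to_text(audio)))
```

<p>The intermediate text at each boundary is also what makes pipelines debuggable: you can log exactly what the model heard and said.</p><p>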
End-to-end preserves more information and achieves lower latency. Pipelines are more flexible: swap components, inspect intermediate text, debug easily. Most production systems still use pipelines.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aZ3L!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0635bef1-d463-4e05-b8e0-b26b2f8ef42b_1248x484.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aZ3L!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0635bef1-d463-4e05-b8e0-b26b2f8ef42b_1248x484.png 424w, https://substackcdn.com/image/fetch/$s_!aZ3L!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0635bef1-d463-4e05-b8e0-b26b2f8ef42b_1248x484.png 848w, https://substackcdn.com/image/fetch/$s_!aZ3L!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0635bef1-d463-4e05-b8e0-b26b2f8ef42b_1248x484.png 1272w, https://substackcdn.com/image/fetch/$s_!aZ3L!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0635bef1-d463-4e05-b8e0-b26b2f8ef42b_1248x484.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!aZ3L!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0635bef1-d463-4e05-b8e0-b26b2f8ef42b_1248x484.png" width="1248" height="484" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0635bef1-d463-4e05-b8e0-b26b2f8ef42b_1248x484.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:484,&quot;width&quot;:1248,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:86524,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://amit.ampup.ai/i/182943538?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0635bef1-d463-4e05-b8e0-b26b2f8ef42b_1248x484.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!aZ3L!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0635bef1-d463-4e05-b8e0-b26b2f8ef42b_1248x484.png 424w, https://substackcdn.com/image/fetch/$s_!aZ3L!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0635bef1-d463-4e05-b8e0-b26b2f8ef42b_1248x484.png 848w, https://substackcdn.com/image/fetch/$s_!aZ3L!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0635bef1-d463-4e05-b8e0-b26b2f8ef42b_1248x484.png 1272w, https://substackcdn.com/image/fetch/$s_!aZ3L!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0635bef1-d463-4e05-b8e0-b26b2f8ef42b_1248x484.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h5><strong>Voice Quality</strong></h5><p>TTS improved along several dimensions. Emotional range: models learned to convey warmth, concern, excitement. Hume&#8217;s EVI explicitly optimized for emotional expressiveness. Prosody got natural: context-aware pacing and emphasis. Voice cloning matured: reasonable custom voices from minutes of audio.</p><h5><strong>Turn Detection and Interruption</strong></h5><p>Orchestration, rather than any single component, is the hardest problem.</p><p>Turn detection: knowing when the user finished speaking. Wait too long and it feels slow. 
Jump in early and you cut people off.</p><p>Interruption handling: agents now monitor for user speech while generating responses, stopping gracefully when interrupted.</p><p>Backchanneling appeared: the &#8220;mm-hmm&#8221; sounds that signal listening.</p><div class="captioned-image-container"><figure><a class="image-link" target="_blank" href="https://substackcdn.com/image/fetch/$s_!FUQ5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf0c93e5-c226-4551-bdf2-ae2be4274ff9_428x356.png"><img src="https://substackcdn.com/image/fetch/$s_!FUQ5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf0c93e5-c226-4551-bdf2-ae2be4274ff9_428x356.png" width="428" height="356" alt="" loading="lazy"></a></figure></div><p>Frameworks like Pipecat, LiveKit Agents, and Vapi emerged to handle this orchestration, letting developers focus on conversation design.</p><p>Voice is about bandwidth. Text is high-effort, low-bandwidth. Voice is lower-effort, higher-bandwidth: ramble, think out loud, convey things that would take paragraphs. Voice captures the actual skill of verbal communication.</p><div><hr></div><h2><strong>An Honest Assessment: Where AI Works, Where It Doesn&#8217;t</strong></h2><p>I&#8217;ve painted 2025 as a year of progress, and I believe that. 
But the ROI is concentrated in specific pockets.</p><p><strong>Where AI works well:</strong></p><p><strong>[+]</strong> Tasks where success is verifiable (code that runs, math that checks)</p><p><strong>[+]</strong> Tasks where errors are catchable and recoverable</p><p><strong>[+]</strong> Tasks that augment humans rather than replace them</p><p><strong>[+]</strong> Tasks where humans review before consequences happen</p><p><strong>Where AI still struggles:</strong></p><p><strong>[&#8722;]</strong> Tasks requiring long-horizon planning</p><p><strong>[&#8722;]</strong> Tasks where errors compound silently</p><p><strong>[&#8722;]</strong> Tasks requiring common sense about the physical world</p><p><strong>[&#8722;]</strong> Tasks where you need to know what you don&#8217;t know</p><p>The pattern: AI is most reliable with tight feedback loops. When you can check output, catch mistakes, and iterate, it works. When AI operates autonomously without supervision, problems accumulate.</p><div><hr></div><h2><strong>Where This Converges: The Reason We Built AmpUp.AI</strong></h2><p>Once you see the world this way, a certain class of products becomes inevitable.</p><blockquote><p><em><strong>Sales is the perfect proving ground for context engineering because of its real-time, high-stakes nature.</strong></em></p></blockquote><p>Sales is a fundamentally verbal skill. Success depends on split-second synthesis: customer signals, product knowledge, competitive positioning. The gap between knowing what to do and doing it under pressure is where deals are won or lost.</p><p>Sales is the ultimate test-time compute environment. A rep has 200 milliseconds to process a &#8220;no&#8221; and figure out how to turn it into a &#8220;yes.&#8221; If the AI providing context has 2 seconds of latency, it&#8217;s useless. 
</p><p>If it doesn&#8217;t have the lived context of the last three meetings, it&#8217;s dangerous.</p><p>We realized that if we could solve context for the world&#8217;s most demanding verbal environment, we could solve it for anything.</p><p>That&#8217;s what we&#8217;re building at AmpUp. But the voice agents are only half the story.</p><p>AmpUp continuously analyzes sales calls and CRM interactions to extract structured signals across multiple levels:</p><ul><li><p><strong>Meeting-level:</strong> objections raised, commitments made, risk signals surfaced</p></li><li><p><strong>Account-level:</strong> deal health, MEDDPICC gaps, competitive positioning</p></li><li><p><strong>Rep-level:</strong> skill patterns, strengths, and coaching priorities</p></li><li><p><strong>Org-level:</strong> behaviors that drive progression, where deals stall, what the market is signaling</p></li></ul><p>This creates a hierarchical ontology of sales intelligence.</p><p>When a rep is about to join a call, we surface not just the notes but also the specific objections this prospect raised last time, how similar deals at this stage typically progress, what this rep specifically needs to work on, and what&#8217;s worked for top performers in comparable situations.</p><p>The context is structured, layered, and specific to this moment. That context then flows into two applications:</p><p><strong>Pre-call preparation:</strong> Before a meeting, surface relevant account history, competitive intelligence, and suggested approaches. Not a generic brief. Specific context for this conversation, informed by patterns across thousands of similar interactions.</p><p><strong>Deliberate practice:</strong> After analyzing call patterns, identify skill gaps and provide low-stakes voice practice against realistic scenarios. The scenarios aren&#8217;t generic. 
They&#8217;re generated from the actual objections this rep struggles with, the actual competitive situations they face, the actual gaps in their discovery process.</p><p>This works because it sits in the sweet spot:</p><ul><li><p>Real value: Practice helps, and context-aware practice helps more</p></li><li><p>Voice is the right modality: Sales is verbal; typing misses most of what matters</p></li><li><p>Human in the loop: The point is to train the human, not replace them</p></li><li><p>Context engineering as core: The entire platform is about extracting, organizing, and delivering the right context at the right moment</p></li></ul><div><hr></div><h2><strong>What Most AI Teams Still Get Wrong</strong></h2><p>After a year of watching teams build with these technologies, patterns emerge. The mistakes are consistent:</p><ol><li><p><strong>They optimize prompts instead of systems.</strong> A prompt is one piece of context. The retrieval pipeline, the memory architecture, the feedback loops&#8212;those determine whether the prompt even matters.</p></li><li><p><strong>They measure accuracy instead of recoverability.</strong> In production, errors happen. What matters is whether users can catch them and the system can adapt. A 95% accurate system that fails silently is worse than an 85% accurate system with good error signals.</p></li><li><p><strong>They deploy agents without feedback loops.</strong> An agent that can&#8217;t learn from its mistakes will keep making them. The best agent systems feed outcomes back into context for the next run.</p></li><li><p><strong>They treat context as text, not state.</strong> Context goes beyond words in a prompt&#8212;it&#8217;s the evolving state of a conversation, a task, a user relationship. Managing that state is the hard engineering problem.</p></li><li><p><strong>They chase model upgrades instead of context upgrades.</strong> Switching from GPT-4 to GPT-5 might give you 10% improvement. 
Fixing your retrieval pipeline might give you 50%. The leverage is asymmetric.</p></li></ol><div><hr></div><h2><strong>What&#8217;s Coming in 2026: The Context Engineering Era</strong></h2><div class="captioned-image-container"><figure><a class="image-link" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ch0f!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9ac4359-9403-4c88-8354-de525477765c_545x877.png"><img src="https://substackcdn.com/image/fetch/$s_!ch0f!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9ac4359-9403-4c88-8354-de525477765c_545x877.png" width="545" height="877" alt="" loading="lazy"></a></figure></div><h2><strong>For Builders: The Core Message</strong></h2><p>Focus on context. As models become increasingly commoditized, the context system becomes the product.</p><p>The technology matured in ways that make it harder to dismiss and easier to use. The hard work of turning this into real value is still ahead.</p><p>The next generation of AI builders will call themselves context engineers rather than model trainers.</p><div><hr></div><h2><strong>AmpUp Context Labs</strong></h2><p><em><strong><a href="https://www.ampup.ai/context-is-all-you-need-agents">Experience context engineering in practice</a>. 
</strong>Same frontier model, three context configurations, three different capabilities.</em></p><p><em><strong><a href="https://www.ampup.ai/context-is-all-you-need-agents">Explore our Agents Now!</a> - <a href="https://www.ampup.ai/context-is-all-you-need-agents">https://www.ampup.ai/context-is-all-you-need-agents</a></strong></em></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Forecast Was a Story]]></title><description><![CDATA[Why Most Sales Forecasts Are Stories, Not Data]]></description><link>https://amit.ampup.ai/p/the-forecast-was-a-story</link><guid isPermaLink="false">https://amit.ampup.ai/p/the-forecast-was-a-story</guid><dc:creator><![CDATA[Amit Prakash]]></dc:creator><pubDate>Tue, 16 Dec 2025 15:40:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yArA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29207212-8fb7-4d06-9300-96e96f7e34bf_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most CROs missed their forecast last quarter.</p><p>Not because reps sandbagged. Not because deals slipped. Because <strong>58% of &#8220;committed&#8221; pipeline was missing clear buying signals from the customer.</strong></p><p>That number isn&#8217;t a typo. That isn&#8217;t a forecasting problem.</p><p><strong>It is fiction</strong>.</p><p>Forecasts fail not because reps lie, but because stories replace customer evidence. 
Here&#8217;s how revenue gets imagined into existence, and how to stop it.</p><div class="captioned-image-container"><figure><a class="image-link" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yArA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29207212-8fb7-4d06-9300-96e96f7e34bf_1408x768.jpeg"><img src="https://substackcdn.com/image/fetch/$s_!yArA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29207212-8fb7-4d06-9300-96e96f7e34bf_1408x768.jpeg" width="1408" height="768" alt=""></a><figcaption class="image-caption">58% of your pipeline is evaporating. You just can&#8217;t see it yet.</figcaption></figure></div><h2><strong>The Scene That Repeats Every Quarter</strong></h2><p>It&#8217;s week 10. The CRO is on the forecast call, and Ryan is walking through his commit. He&#8217;s a solid performer who knows the product cold.</p><p>&#8220;This one&#8217;s solid,&#8221; Ryan says. &#8220;We had a great demo last week. The VP of Ops loved the ROI analysis. I&#8217;m meeting with procurement on Thursday, and then we should be wrapping up legal by the 28th.&#8221;</p><p>The CRO has heard this before. &#8220;What did the VP actually say about timeline?&#8221;</p><p>&#8220;He said they&#8217;re motivated to move quickly,&#8221; Ryan replies.</p><p>&#8220;Did he say why?&#8221;</p><p>&#8220;I mean... 
they&#8217;ve been dealing with this problem for a while.&#8221;</p><p>The CRO knows what this means. Ryan has built a story, a plausible sequence of events that ends in a signed contract. But the story exists in Ryan&#8217;s head, not in the customer&#8217;s words.</p><p>When we analyzed Ryan&#8217;s pipeline, 7 of his 12 &#8220;committed&#8221; opportunities were missing clear customer evidence for at least one of the three critical buying signals.</p><p>Across the team, the pattern held: </p><p><strong>58% of committed pipeline had the same gap.</strong></p><p><strong>That is a $2.3M fiction masquerading as a pipeline.</strong></p><h2><strong>The Inside View and the Outside View</strong></h2><p>Daniel Kahneman won a Nobel Prize for work that explains why this happens. He showed that humans reason about the future in two fundamentally different ways&#8212;and we consistently choose the wrong one.</p><p><strong>The Inside View:</strong> You construct a narrative by imagining the specific sequence of events that will lead to your outcome. You picture the procurement meeting, the handshake, the signature. The story feels real because you can see it.</p><p><strong>The Outside View:</strong> You ask what actually happens to deals like this one. When you look at the data&#8212;Stage 3 opportunities with 30 days left and no procurement engagement&#8212;the failure rate is over 80%.</p><p>Ryan isn&#8217;t lying. His brain is doing what all human brains do: constructing a plausible story and mistaking it for a prediction.</p><div class="pullquote"><p><strong>The outside view tells you the truth.</strong> But to use it, you need to hear evidence directly from the customer. Not interpretation. Not what the rep thinks they heard. 
<strong>Only what the customer actually said matters.</strong></p></div><div class="captioned-image-container"><figure><a class="image-link" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yTI_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c547bf5-a9e3-4174-8141-8d453d7a70fc_772x563.png"><img src="https://substackcdn.com/image/fetch/$s_!yTI_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c547bf5-a9e3-4174-8141-8d453d7a70fc_772x563.png" width="772" height="563" alt="" loading="lazy"></a></figure></div><p>The CRM showed a commit. The transcript showed a conversation.</p><p>Reps project problems onto customers based on title. They pattern-match vague statements into familiar scripts. Optimism feels productive. Evidence feels dangerous.</p><h2><strong>The Only Evidence That Matters</strong></h2><p>You can&#8217;t believe a deal into existence. 
Only the customer can say it&#8217;s real.</p><p>For a deal to exist, the customer has to express three things in their own words:</p><div class="captioned-image-container"><figure><a class="image-link" target="_blank" href="https://substackcdn.com/image/fetch/$s_!09Xh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6870f8a4-bd45-4453-8385-906cd4bac4e7_782x482.png"><img src="https://substackcdn.com/image/fetch/$s_!09Xh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6870f8a4-bd45-4453-8385-906cd4bac4e7_782x482.png" width="782" height="482" alt="" loading="lazy"></a></figure></div><p>If any of these is missing, there is no deal.</p><p><strong>The deal is already lost; you just haven&#8217;t admitted it yet.</strong></p><h2><strong>Why This Is Brutal</strong></h2><p>The &#8220;three whys&#8221; leave you nowhere to hide. That pipeline you&#8217;ve been nursing? Half of it evaporates when you apply this standard honestly.</p><p>Reps resist this out of fear:</p><ul><li><p>Fear of hearing no.</p></li><li><p>Fear of discovering sunk costs.</p></li><li><p>Fear of asking a question that exposes a hole in their case.</p></li></ul><p>But fear traps you in the &#8220;friend zone&#8221;&#8212;guaranteeing you waste resources on a prospect who likes you but will never buy.</p><blockquote><p><strong>The best sellers ask the uncomfortable questions early. 
They would rather hear &#8220;we&#8217;re not ready&#8221; in week two than suffer a slow &#8220;no&#8221; in week twelve.</strong></p></blockquote><h2><strong>The Real Problem Isn&#8217;t Skill, It&#8217;s Emotional Cost</strong></h2><p>Sales leaders see the problem and reach for the familiar solution: more training. Better frameworks. Tighter qualification criteria. MEDDPICC. MEDDIC. BANT. The three whys.</p><p>But reps already know these frameworks. The issue isn&#8217;t awareness&#8212;it&#8217;s <strong>emotional labor</strong>.</p><p>When a rep asks &#8220;Why now?&#8221; and the prospect says &#8220;We&#8217;d like to figure this out,&#8221; the rep faces a choice:</p><p><strong>Option A</strong> - Document the truth. Watch the deal slip. Explain why a commit is now at risk.</p><p><strong>Option B</strong> - Fill the gap with assumptions. Keep the CRM green. Avoid discomfort.</p><p>Most reps choose Option B. Not because they&#8217;re dishonest, but because documenting uncertainty feels like failure when quota is breathing down their neck.</p><p>This is why training doesn&#8217;t fix it. You&#8217;re not fighting a knowledge problem. </p><p><strong>You&#8217;re fighting human psychology under pressure.</strong></p><h2><strong>The Gains Trap</strong></h2><p>Even when reps do ask &#8220;Why now?&#8221;, they accept weak answers because they sound plausible.</p><p>Customer says: <em>&#8220;This is good for us, and the longer we wait, the less we get to capitalize on the gains.&#8221;</em></p><p>Rep hears: <em>&#8220;Why now&#8221;</em> &#10003;</p><p>But this is <strong>weak sauce</strong>. Gains don&#8217;t drive actions. Loss does. Humans are wired to avoid pain, not chase upside. Kahneman showed this&#8212;we&#8217;re roughly 2-3x more motivated by loss aversion than by gain seeking.</p><p>A real &#8220;Why now&#8221; sounds like this:</p><p><em>&#8220;We&#8217;re launching a major product next year. We&#8217;ll need to hire 20 new reps. 
If we don&#8217;t have a system to get them ramped faster, those reps will miss quota, we&#8217;ll miss the launch number, and I&#8217;ll be the one explaining to the board why we left $10M on the table.&#8221;</em></p><p>That&#8217;s not about gains. That&#8217;s about <strong>concrete, imminent, career-threatening loss</strong>.</p><div class="pullquote"><p>If your customer can&#8217;t articulate a loss scenario, you don&#8217;t have &#8220;Why now.&#8221; </p><p>You have &#8220;Why maybe someday.&#8221;</p></div><h2><strong>The Mirror That Sets You Free</strong></h2><p>The hardest part isn&#8217;t knowing what to look for. It&#8217;s holding the mirror steady when your quota is screaming at you to believe. </p><p>This is where traditional training breaks down.</p><p>The only way to solve this is to remove the emotional friction.</p><p>AI does two things traditional training can&#8217;t.</p><p><strong>First,</strong> it creates the first draft for you. You don&#8217;t have to agonize over admitting what&#8217;s missing. The system flags it neutrally: <em>No cost quantified in transcript.</em></p><p>There&#8217;s no judgment. No confrontation. Just evidence.</p><p><strong>Second,</strong> it challenges assumptions before they harden into commitments.<br>If a rep writes &#8220;Customer confirmed $500K pain&#8221; but the transcript only shows &#8220;This is costing us,&#8221; the system calls it out.</p><p>The story doesn&#8217;t make it into the forecast.</p><p>When every call is transcribed and analyzed, you can&#8217;t remember the conversation differently than it happened. 
Forecast calls stop being debates of opinion and start becoming discussions of evidence.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yN1t!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba0708fe-2957-4570-bc31-99b2ae60cd02_771x241.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yN1t!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba0708fe-2957-4570-bc31-99b2ae60cd02_771x241.png 424w, https://substackcdn.com/image/fetch/$s_!yN1t!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba0708fe-2957-4570-bc31-99b2ae60cd02_771x241.png 848w, https://substackcdn.com/image/fetch/$s_!yN1t!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba0708fe-2957-4570-bc31-99b2ae60cd02_771x241.png 1272w, https://substackcdn.com/image/fetch/$s_!yN1t!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba0708fe-2957-4570-bc31-99b2ae60cd02_771x241.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yN1t!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba0708fe-2957-4570-bc31-99b2ae60cd02_771x241.png" width="771" height="241" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ba0708fe-2957-4570-bc31-99b2ae60cd02_771x241.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:241,&quot;width&quot;:771,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:41215,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://amit.ampup.ai/i/181784461?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba0708fe-2957-4570-bc31-99b2ae60cd02_771x241.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yN1t!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba0708fe-2957-4570-bc31-99b2ae60cd02_771x241.png 424w, https://substackcdn.com/image/fetch/$s_!yN1t!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba0708fe-2957-4570-bc31-99b2ae60cd02_771x241.png 848w, https://substackcdn.com/image/fetch/$s_!yN1t!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba0708fe-2957-4570-bc31-99b2ae60cd02_771x241.png 1272w, https://substackcdn.com/image/fetch/$s_!yN1t!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fba0708fe-2957-4570-bc31-99b2ae60cd02_771x241.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 
20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2><strong>Practice That Transfers</strong></h2><p>But awareness alone doesn&#8217;t change behavior. You need repetition in context.</p><p>Traditional role-play is often a scheduled event disconnected from live deals. What changes behavior is practice that is bite-sized, highly personalized, and built in the context of a specific deal.</p><p>With AmpUp, the system knows Ryan&#8217;s patterns. For example, he backs off when it is time to press for commitment. Before his next call, Ryan practices that exact conversation. He sees how a top performer handled the same objection and tries the direct question out loud until it feels natural. 
</p><p>That&#8217;s how stories turn into evidence.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://www.ampup.ai/" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_fzo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b2fc6ab-2a52-4fbd-b2e9-b6dfc1f5b486_777x390.png 424w, https://substackcdn.com/image/fetch/$s_!_fzo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b2fc6ab-2a52-4fbd-b2e9-b6dfc1f5b486_777x390.png 848w, https://substackcdn.com/image/fetch/$s_!_fzo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b2fc6ab-2a52-4fbd-b2e9-b6dfc1f5b486_777x390.png 1272w, https://substackcdn.com/image/fetch/$s_!_fzo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b2fc6ab-2a52-4fbd-b2e9-b6dfc1f5b486_777x390.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_fzo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b2fc6ab-2a52-4fbd-b2e9-b6dfc1f5b486_777x390.png" width="777" height="390" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4b2fc6ab-2a52-4fbd-b2e9-b6dfc1f5b486_777x390.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:390,&quot;width&quot;:777,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:276708,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://www.ampup.ai/&quot;,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://amit.ampup.ai/i/181784461?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b2fc6ab-2a52-4fbd-b2e9-b6dfc1f5b486_777x390.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_fzo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b2fc6ab-2a52-4fbd-b2e9-b6dfc1f5b486_777x390.png 424w, https://substackcdn.com/image/fetch/$s_!_fzo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b2fc6ab-2a52-4fbd-b2e9-b6dfc1f5b486_777x390.png 848w, https://substackcdn.com/image/fetch/$s_!_fzo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b2fc6ab-2a52-4fbd-b2e9-b6dfc1f5b486_777x390.png 1272w, https://substackcdn.com/image/fetch/$s_!_fzo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b2fc6ab-2a52-4fbd-b2e9-b6dfc1f5b486_777x390.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://amit.ampup.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Amit&#8217;s Substack! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Safe Path Is Dissolving]]></title><description><![CDATA[It&#8217;s been a while.]]></description><link>https://amit.ampup.ai/p/the-safe-path-is-dissolving</link><guid isPermaLink="false">https://amit.ampup.ai/p/the-safe-path-is-dissolving</guid><dc:creator><![CDATA[Amit Prakash]]></dc:creator><pubDate>Sun, 30 Nov 2025 21:07:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!onwI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75967518-0985-4662-ad4c-c38477eabfdd_1408x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!onwI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75967518-0985-4662-ad4c-c38477eabfdd_1408x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!onwI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75967518-0985-4662-ad4c-c38477eabfdd_1408x768.png 424w, https://substackcdn.com/image/fetch/$s_!onwI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75967518-0985-4662-ad4c-c38477eabfdd_1408x768.png 848w, 
https://substackcdn.com/image/fetch/$s_!onwI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75967518-0985-4662-ad4c-c38477eabfdd_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!onwI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75967518-0985-4662-ad4c-c38477eabfdd_1408x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!onwI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75967518-0985-4662-ad4c-c38477eabfdd_1408x768.png" width="1408" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/75967518-0985-4662-ad4c-c38477eabfdd_1408x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1458079,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://ampuphq.substack.com/i/180346478?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75967518-0985-4662-ad4c-c38477eabfdd_1408x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!onwI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75967518-0985-4662-ad4c-c38477eabfdd_1408x768.png 424w, 
https://substackcdn.com/image/fetch/$s_!onwI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75967518-0985-4662-ad4c-c38477eabfdd_1408x768.png 848w, https://substackcdn.com/image/fetch/$s_!onwI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75967518-0985-4662-ad4c-c38477eabfdd_1408x768.png 1272w, https://substackcdn.com/image/fetch/$s_!onwI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75967518-0985-4662-ad4c-c38477eabfdd_1408x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
</line>
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p><em>It&#8217;s been a while. If you&#8217;re still subscribed, thank you for your patience.</em></p><p><em>A lot has changed since I last wrote here. After twelve years at ThoughtSpot, most of my time now goes into a new company I started called AmpUp&#8212;we help sales teams close the gap between their best reps and everyone else. And somewhere along the way, I learned something about myself that I wish I&#8217;d understood decades earlier.</em></p><p><em>This post is about that lesson. It started as advice to a friend&#8217;s daughter, but it turned into something more personal.</em></p><p>&#8212;</p><p>A friend asked me for advice about her daughter, a second-year college student studying electronics. The daughter is anxious about the future&#8212;about what jobs will exist, what skills will matter, whether she&#8217;s on the right path. I found myself writing a longer response than I expected, because I&#8217;ve been thinking about this a lot.</p><p>Here&#8217;s what I&#8217;ve come to believe: For the better part of a century, we&#8217;ve lived inside a particular bargain. 
Go to school, go to college, get good grades, follow directions, and society will give you a safe place. A job. A trajectory. A life you can more or less predict.</p><p>That bargain is dissolving.</p><h1><strong>The Theater-to-Netflix Problem</strong></h1><p>Before mass media&#8212;before radio and television&#8212;if you wanted entertainment, you went to the local theater. That was the best anyone could do. Then technology made it possible to replicate human performance at near-zero marginal cost. Suddenly, for a few dollars, you could watch a billion-dollar production in your living room. Entertainment became a winner-take-all market almost overnight.</p><p>This is now happening to expertise. Engineering, medicine, law, financial advising, tax preparation&#8212;any domain where the core work involves applying established knowledge to routine problems is becoming compressible. Not disappearing. Compressing. A smaller number of people, augmented by AI, can do what used to require large teams.</p><p>If LLMs make it so that any routine task&#8212;anything that doesn&#8217;t require inventing new things&#8212;becomes easily automatable, then the remaining work shifts toward invention. Not just fundamental research, but entrepreneurial work: making bets, searching for niches, trying and failing and trying again until you land on something valuable. Having a sharp feedback loop so you fail fast and correct course. Being genuinely good at listening to people, asking real questions, synthesizing patterns no one has seen before.</p><h1><strong>The Comedian&#8217;s Career</strong></h1><p>A comedian can&#8217;t hide behind credentials. They walk on stage and either the room laughs or it doesn&#8217;t. They bomb, they adjust, they try again. They earn their place through repeated acts of vulnerability&#8212;not once, but every single night.</p><p>That used to be the exception. 
Artists and entrepreneurs lived that way while everyone else had the safety of defined roles and predictable advancement. Now that safety is dissolving. Your engineering career is becoming a comedy career whether you signed up for it or not.</p><p>The question isn&#8217;t whether you&#8217;ll face that exposure. It&#8217;s whether you&#8217;ll lean into it or spend years resisting the inevitable.</p><h1><strong>What I Got Wrong</strong></h1><p>I&#8217;m not writing this from some perch of having figured it out.</p><p>I spent years giving myself permission to stay comfortable. First there was the green card&#8212;I couldn&#8217;t take risks while my immigration status was uncertain. Then I became &#8220;the technical co-founder&#8221; at ThoughtSpot, which meant I had a lane. I stayed in that lane for twelve years, even when I sensed I had more to contribute in other domains&#8212;strategy, positioning, customer intuition. I told myself I was being responsible, playing to my strengths. Really, I was avoiding the harder work of developing judgment in areas where I wasn&#8217;t already good.</p><p>When I finally had the freedom to build something new, I tried to build everything&#8212;an agentic AI platform for anything and anyone. It felt expansive and safe at the same time. Safe because I never had to commit. I could stay in the realm of possibility, keep all doors open, avoid the exposure of saying &#8220;this is my bet, judge me on it.&#8221;</p><p>AmpUp exists because I finally made myself choose. Closing the gap between your best sales reps and everyone else. Not a platform for everyone. Not a solution for everything. A narrow lane where I could be wrong in a specific, measurable way.</p><p>That choosing was harder than any technical problem I&#8217;ve solved.</p><h1><strong>So What Does This Mean for a 20-Year-Old?</strong></h1><p>You don&#8217;t have to start a company. 
But you have to get specific about the future you want&#8212;even when you don&#8217;t have enough information to be certain. Especially then.</p><p>The method is: imagine the transformed version of a field you care about, then work backward to what skills and relationships matter today.</p><p><strong>Healthcare: </strong>In ten years, routine diagnosis and treatment may be largely automated. The premium shifts to rare disease combinations, cases that don&#8217;t fit patterns, interpreting what&#8217;s happening with patients who fall between categories. And the irreducible core: being present with people in their most vulnerable moments. Machines can diagnose; they can&#8217;t hold a hand. If this interests you, maybe you study less anatomy and more pattern recognition, systems thinking, communication. You find the 50 doctors already working on the frontier of this and figure out how to be useful to them.</p><p><strong>Finance: </strong>Basic analysis gets fully automated. The edge moves to judgment calls requiring synthesis of non-obvious signals&#8212;geopolitical intuition, founder psychology, cultural shifts that don&#8217;t show up in data yet. A student interested in this might study behavioral economics and history rather than just finance, seek out contrarian investors, and start writing publicly about patterns they notice. Building a track record of thinking, not just credentials.</p><p><strong>Law: </strong>Contract work and legal research get commoditized. What remains is high-stakes negotiation, novel regulatory territory (AI governance, bioethics, space law), and situations where human judgment and trust are essential. A student might focus less on case law and more on understanding how new technologies create legal vacuums, then find the handful of lawyers actually working on frontier issues and offer to help them think.</p><p><strong>Engineering: </strong>Implementation becomes cheaper. 
The premium shifts to problem selection&#8212;figuring out what to build, not just how. This means deeply understanding a domain rather than accumulating technical skills alone. The kid who spends a summer working at a trucking company and understanding their actual pain points may be better positioned than one who does another coding bootcamp.</p><p><strong>Creative fields: </strong>Production quality gets democratized. Distinction comes from having a genuine point of view, building direct relationships with an audience, operating at the intersection of fields. The question isn&#8217;t &#8220;how do I get hired by a studio&#8221; but &#8220;what do I have to say that no one else is saying, and who specifically cares?&#8221;</p><p>The common thread: In every case, the move is from &#8220;acquire credentials in established category&#8221; to &#8220;develop a thesis about where value is moving, find the people actually working on that frontier, and make yourself useful to them.&#8221;</p><p>It&#8217;s less about entrepreneurship as starting-a-company and more about entrepreneurial thinking&#8212;treating your career like a search problem rather than an optimization problem.</p><h1><strong>More Alive, More Scary</strong></h1><p>I know this sounds hard. It is hard. The safe path was easier&#8212;you just had to follow the script. But here&#8217;s the thing: the safe path was also, in a way, a trap. A lot of people got stuck in systems that never pushed them to discover what they were actually capable of. They continued as cogs. There was never enough internal motivation or external force to try something unsafe.</p><p>The world that&#8217;s emerging will be harder. But it might also push more people toward greatness&#8212;because there&#8217;s no longer a comfortable middle to hide in.</p><p>The definition of work is going to shift&#8212;maybe as dramatically as it did when we went from breaking our backs on farms to typing on glass screens in air-conditioned rooms. 
To a farmer in 1924, what we call &#8220;work&#8221; today would look like sorcery. Or leisure.</p><p>But here&#8217;s what won&#8217;t change: we won&#8217;t just sit around in bliss. The hedonic treadmill guarantees new status games, new struggles. The question is whether you&#8217;re playing games you chose or games that were handed to you.</p><h1><strong>Make Your Choices Right</strong></h1><p>A mentor once told me: <strong>&#8220;We don&#8217;t make right choices. We make our choices right.&#8221;</strong></p><p>I wish I&#8217;d understood that earlier. I spent years looking for the safe bet, the optimal path, the choice I couldn&#8217;t get wrong. That search was the trap. The real work was always on the other side of choosing&#8212;in the commitment, the iteration, the willingness to make whatever I chose into something worth having chosen.</p><p>The fear isn&#8217;t just &#8220;what if I don&#8217;t choose.&#8221; It&#8217;s &#8220;what if I choose wrong.&#8221; But that&#8217;s exactly what his words answer: there is no right choice waiting to be discovered. There&#8217;s only the choice you make and what you do with it.</p><p>I don&#8217;t have all the answers. This is a hypothesis. But it&#8217;s the one I&#8217;m betting on&#8212;for AmpUp, and for what I&#8217;d tell any young person navigating this moment.</p><p><strong>Pick a lane. Make a bet. Then make that bet right.</strong></p><p>&#8212;</p><p><em>If this resonated, I&#8217;d love to hear from you&#8212;especially if you&#8217;re a parent navigating this with your kids, or a young person figuring it out yourself. 
And if you&#8217;re curious about what I&#8217;m building at AmpUp, you can find us at <strong>ampup.ai</strong>.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://amit.ampup.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Amit&#8217;s Substack! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Coming soon]]></title><description><![CDATA[This is Amit&#8217;s Substack.]]></description><link>https://amit.ampup.ai/p/coming-soon</link><guid isPermaLink="false">https://amit.ampup.ai/p/coming-soon</guid><dc:creator><![CDATA[Amit Prakash]]></dc:creator><pubDate>Mon, 01 Sep 2025 00:28:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jPTW!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd0e656a-fce2-4254-a059-4c1a5eba5620_225x225.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is Amit&#8217;s Substack.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://amit.ampup.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://amit.ampup.ai/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item></channel></rss>