The Post-Coding Era: What Happens When AI Writes the System?
Google DeepMind
Nicholas Moy was the Head of Research at Windsurf, where he built the first multi-step coding agent. He now works with the Antigravity team at Google DeepMind, focusing on unlocking the agentic reasoning capabilities of Gemini models.

Delphina
Hugo Bowne-Anderson is an independent data and AI consultant with extensive experience in the tech industry. He is the host of Vanishing Gradients, an industry podcast exploring developments in data science and AI. Previously, Hugo served as Head of Developer Relations at Outerbounds and held roles at Coiled and DataCamp, where his work in data science education reached over 3 million learners. He has taught at Yale University, Cold Spring Harbor Laboratory, and conferences like SciPy and PyCon, and is a passionate advocate for democratizing data skills and open-source tools.
Key Takeaways
Engineer for Problem Invariants
In an era where models double in capability every few months, Nick suggests building software around "information and output invariants": the fundamental data requirements and final work formats of a task. By focusing on what must be true regardless of the AI's power, teams can avoid the trap of rewriting their entire core logic every time a new frontier model is released.
The "Double Penalty" of Agent Errors
Shifting from co-driving (real-time assistance) to agentic engineering introduces a new risk: the "double penalty." If an agent fails, the human must pay the cognitive cost to "download" and understand the agent’s complex state before they can even begin to fix the error, making agent self-explanation and verification artifacts more valuable than the code itself.
PRs are Replacing the IDE
As agents move toward true autonomy, the editor becomes a secondary interface. The primary surface for supervising AI is shifting toward "agent manager modes" that resemble Pull Requests, where users review high-level artifacts, screen recordings of UI tests, and unit test logs rather than watching a cursor move in real-time.
Synthetic Data is not the Answer
While synthetic data offers scale and cleanliness, real-world user data is the only way to capture "soft, fuzzy distributions" of human behavior. Nick notes that the most durable data moat is learning "negative" preferences, such as the subtle, intuitive moments when a model should not show a suggestion, which is nearly impossible to generate synthetically.
Personalization is the Application Moat
While foundation model labs dominate general intelligence, application-specific startups have a "prime position" to win on per-user model personalization. By owning the direct relationship with the user, these companies can build feedback loops that tailor model behavior to individual styles in a way that centralized labs cannot replicate.
Be Prepared to Disrupt Your Cash Cow
Windsurf’s success came from a willingness to "prepare for disruption" while their autocomplete business was still thriving. They kept an agentic framework on the back burner for 18 months, periodically "testing the harness" with new model checkpoints until Claude finally "clicked," allowing them to pivot the product in just one month.
The Strategy of Betting On Capabilities
With the pace of model improvement accelerating, Nick encourages teams to bet on future capabilities. If a feature seems impossible or too expensive today, building the infrastructure for it now (as if the model were already 50% more capable) allows you to capture the market the moment the next frontier checkpoint drops.
You can read the full transcript here.
Timestamps
00:00 Introduction to Agentic AI in Software Development
01:21 Exploring the Shift from Co-Driving to Agentic AI
02:31 The Journey of Building Windsurf
04:33 Challenges and Successes in AI-Powered Coding
06:07 The Future of AI Agents in Software Engineering
08:34 Building and Maintaining a Competitive Edge
13:39 Product and Engineering Challenges in AI Development
16:28 The Evolution of AI Usage in Software Development
24:23 The Economics of AI and Model Training
32:48 The Future of AI Agents
41:05 Conclusion and Final Thoughts
Links
- Nicholas Moy on LinkedIn
- Introducing Google Antigravity, a New Era in AI-Assisted Software Development
- “A Flash of Deflation - Gemini 3 Flash represents a step function increase in model deflation: a gauntlet thrown” by Tomasz Tunguz
- Tomasz Tunguz on Why a Trillion Dollars of Market Cap Is Up for Grabs (and How AI Teams Will Win It)
- High Signal podcast
- Watch the podcast episode on YouTube
- Delphina's Newsletter