The End of Coding: What Happens When 4% of All GitHub Commits Are Written by AI

By TLDL

Claude Code now authors 4% of public GitHub commits. Inside the dramatic shift in what software engineering means when the code practically writes itself.

Here's a number that should make every software engineer pause: approximately 4% of all public GitHub commits are now authored by Claude Code. That's not a typo. Every 25th piece of open-source code hitting the internet comes from an AI agent, not a human.

And it's accelerating.

Boris Cherny, Head of Claude Code at Anthropic, has a provocative claim: "Coding is largely solved." Not in some abstract, someday-when-AGI-arrives way. Today. Right now.

The 200% Productivity Claim

Inside Anthropic, engineers have reported roughly a 200% increase in productivity since deploying Claude Code. That doesn't mean they write twice as much code. It means they review twice as many pull requests, ship twice as many features, and have essentially stopped writing code line-by-line altogether.

One striking quote from Boris: "I have not edited a single line by hand since November."

Let that sink in. A senior engineer at one of the most important AI labs hasn't manually written code in months. His job has transformed entirely—he directs the AI, reviews what it produces, and makes decisions about what to build next.

This isn't the future. This is happening now, at one of the most competitive AI companies in the world.

Beyond Code Generation: The Agentic Frontier

Here's where it gets interesting. If code generation is "solved," what's next?

The next frontier is agentic behavior—models that don't just generate code when asked, but proactively decide what to build, triage bugs, and execute work autonomously. Boris describes how Claude now looks at bug reports, inspects telemetry, and comes up with ideas for fixes. It's no longer a tool waiting to be used. It's a coworker taking initiative.

This shift is profound. Traditional software development follows a predictable pattern: humans decide what to build, humans write code, humans test it, humans ship it. Agentic workflows invert this. The AI proposes, executes, and iterates. Humans provide direction and oversight.
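The inverted workflow above can be sketched in a few lines of Python. To be clear, this is a hypothetical illustration, not Anthropic's implementation: `model_propose` and `human_review` are stand-ins for a real model call and a real human reviewer.

```python
# Sketch of an agentic loop: the AI proposes and executes,
# the human provides direction and oversight. All names here
# are illustrative stubs, not a real agent framework.
from dataclasses import dataclass


@dataclass
class Proposal:
    description: str
    patch: str  # e.g. a unified diff the agent wants to apply


def model_propose(bug_report: str) -> Proposal:
    # Stub: a real implementation would call a model API here,
    # letting it inspect telemetry and draft a fix on its own.
    return Proposal(
        description=f"Proposed fix for: {bug_report}",
        patch="--- a/app.py\n+++ b/app.py\n",
    )


def human_review(proposal: Proposal) -> bool:
    # Stub: in practice, a person reads the diff and decides.
    return bool(proposal.patch)


def agent_step(bug_report: str) -> str:
    proposal = model_propose(bug_report)  # the AI proposes and executes
    if human_review(proposal):            # the human gates what ships
        return "shipped"
    return "rejected"
```

The point of the sketch is the control flow, not the stubs: the human appears only at the review gate, never in the authoring step.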

The job title is already shifting. Some predict "software engineer" will eventually disappear, replaced by "builder" or "product manager"—roles focused on defining what should exist rather than how to implement it.

The Product Is the Model

One of the most interesting philosophies Boris describes: "The product is the model. We want to expose it. We want to put the minimal scaffolding around it."

This is a deliberate choice. Many AI products wrap models in heavy workflows, rigid processes, and extensive guardrails. Anthropic's approach is the opposite: give the model access to tools and let it figure out the best path forward.
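As a rough illustration of what "minimal scaffolding" means in practice, consider the difference between a fixed pipeline and simply exposing tools. The snippet below is a hypothetical sketch, assuming nothing about Claude Code's internals: `choose_tool` stands in for the model's own routing decision.

```python
# Hypothetical sketch of minimal scaffolding: no hard-coded workflow,
# just a small registry of tools the model can pick from.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "read_file": lambda path: f"<contents of {path}>",
    "run_tests": lambda _: "3 passed",
}


def choose_tool(task: str) -> tuple[str, str]:
    # Stub for the model's decision; a real agent would reason
    # about the task rather than match keywords.
    if "test" in task:
        return "run_tests", ""
    return "read_file", "app.py"


def run(task: str) -> str:
    # The scaffolding does one thing: dispatch to whatever the
    # model chose. The path through the tools is up to the model.
    name, arg = choose_tool(task)
    return TOOLS[name](arg)
```

The heavy-workflow alternative would encode the sequence of tool calls in the scaffolding itself; here, the sequence is whatever the model decides it is.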

The results have been surprising. Users find creative, unexpected ways to leverage the model that the team never anticipated. The model adapts to workflows rather than forcing users into predetermined boxes.

The tradeoff is real. Minimal scaffolding means less predictable behavior. But for many use cases, the flexibility delivers far more value than rigid structure ever could.

The Safety Challenge

With power comes responsibility. Deploying autonomous agents that can take actions—write code, modify files, execute commands—requires robust safety measures.

Anthropic's approach combines three layers:

  1. Mechanistic interpretability: Studying the internal structure of models to understand how they represent concepts and make decisions
  2. Controlled evaluations: Rigorous testing in controlled environments before deployment
  3. In-the-wild observation: Releasing early to small groups to observe actual behavior in real-world scenarios

The last piece is particularly interesting. The team deliberately releases research previews early because lab testing can never capture every possible failure mode. Real users find edge cases that no evaluation framework anticipates.

One striking finding from this interpretability work: a single neuron can represent a dozen different concepts, with each concept encoded across a combination of neurons—a phenomenon known as superposition. Understanding this complexity is essential for building safe agents, but it also reveals just how little we understand about how these models actually work.

What This Means for Engineers

If you're writing code today, here's the practical implication: your value is shifting from producing code to directing code production.

The engineers who thrive won't be the ones who can write the fastest loop or remember the most API signatures. They'll be the ones who can specify what needs to be built, evaluate whether the output is correct, and make architectural decisions about how pieces fit together.

Boris's advice: "Use the most capable model. Currently that's Opus 4.6." More capable models often complete tasks with fewer tokens and less correction, making them more cost-effective than chaining weaker models together.

The era of coding as a primary skill is ending. The era of engineering—understanding systems, making decisions, directing automated labor—is just beginning.

The Honest Assessment

Not everyone agrees that coding is "solved." Some argue that deep systems understanding, domain expertise, and safety oversight will always require human engineers. Others worry about skill atrophy if we stop writing code entirely.

These are valid concerns. But the direction of travel is clear. Whether you're ready or not, the role of software engineers is fundamentally changing. The question isn't whether to adapt. It's how quickly you can make the shift.
