The AI Agent Reality Check: Most Users Spend Just 45 Seconds Per Session
If you follow AI news, you've probably seen impressive demos: agents that work for hours, autonomously completing complex tasks. Researchers talk about AI "agents" that can reason, plan, and execute multi-step workflows.
But here's what's actually happening in practice: most people use AI agents for about 45 seconds at a time.
That's the median session length, according to new data from Anthropic. It's a finding that challenges a lot of assumptions about how AI agents are being used—and how they should be designed.
What the Data Actually Shows
Anthropic studied how users interact with Claude Code and found something counterintuitive.
The typical interaction isn't a long-running autonomous session where the AI works for hours while you go to meetings. It's a quick back-and-forth, often under a minute.
Users are supervising heavily. They're not letting the AI run unattended. They're actively involved in every step—reviewing outputs, providing guidance, making decisions.
This doesn't mean agents aren't valuable. It means the value proposition is different than what the demos suggest.
The Trust Problem
Here's a striking number: new users enable full auto-approval only about 20% of the time.

That means roughly 80% of the time, users require some form of human oversight before the AI proceeds. Even after users gain experience, auto-approval rises to only about 40%.
The agents ask for clarification more often than humans choose to interrupt: in complex tasks, Claude Code requests guidance about 16% of the time, while humans intervene only 7% of the time.
This tells us something important: users don't fully trust the system, and they're right to be cautious.
Beyond Engineering: Where Agents Are Actually Being Used
One of the most interesting findings: AI agents are spreading beyond software engineering into other business functions.
About half of agent calls are still software-related. But meaningful usage is showing up in:
- Back office automation (around 9%)
- Marketing and copy (around 4%)
- Sales and CRM (around 4%)
- Finance and accounting (around 4%)
This suggests the real opportunity isn't replacing developers. It's automating routine tasks across the organization—processes that don't require deep expertise but do require time.
Why Model Improvements Don't Automatically Mean More Autonomy
You might think that better models would lead to more autonomous agents. The logic: if the AI makes fewer mistakes, users will let it run more independently.
The data partially confirms this. As model success rates doubled, human interventions dropped from 5.4% to 3.3%.
But here's the nuance: autonomy depends on more than just model capability. It depends on:
- User trust
- Interaction design
- Defaults and settings
- Understanding of what the agent is doing
Even with significant model improvements, the human-in-the-loop remains essential. The question isn't whether to include humans. It's how to design the interaction so humans stay appropriately engaged without becoming a bottleneck.
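One way to keep humans "appropriately engaged without becoming a bottleneck" is a risk-based approval gate: low-risk actions auto-approve once trust is established, while destructive ones always pause for review. The sketch below is purely illustrative (the risk labels, `trust_level` knob, and thresholds are invented for this example, not Claude Code's actual behavior):

```python
# Hypothetical human-in-the-loop approval gate for an agent.
# Risk labels and thresholds are illustrative, not any real product's policy.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    risk: str  # "low" | "medium" | "high", assigned by the caller

def needs_human_review(action: ProposedAction, trust_level: float) -> bool:
    """Decide whether a human must approve before the agent proceeds.

    trust_level is a 0.0-1.0 knob the user raises as the agent earns trust
    (mirroring the ~20% -> ~40% auto-approval shift seen in the data).
    """
    if action.risk == "high":
        return True                   # destructive or irreversible: always ask
    if action.risk == "medium":
        return trust_level < 0.7      # ask until trust is established
    return trust_level < 0.3          # low-risk actions auto-approve early

def run_step(action: ProposedAction, trust_level: float) -> str:
    if needs_human_review(action, trust_level):
        return f"PAUSED for review: {action.description}"
    return f"auto-approved: {action.description}"

print(run_step(ProposedAction("delete branch", "high"), trust_level=0.9))
# -> PAUSED for review: delete branch
print(run_step(ProposedAction("format file", "low"), trust_level=0.5))
# -> auto-approved: format file
```

The design point: autonomy becomes a tunable setting per action class, not an all-or-nothing mode, which matches how cautiously users actually ramp up.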
The Policy Friction
Anthropic recently updated its documentation around OAuth tokens, and it caused a minor firestorm.
The new language suggested that tokens from Claude accounts couldn't be used in third-party products. The community response was swift and negative.
Anthropic clarified that it was a docs cleanup that caused confusion—they didn't intend to block legitimate agent SDK usage. The concern was about people using personal accounts in ways that violated terms.
But the incident reveals something: platform policy matters enormously for agent ecosystems. Developers need clear rules about what's allowed. Ambiguity creates uncertainty. Uncertainty slows adoption.
What This Means for Product Builders
If you're building AI agents, here are the practical implications:
- Design for short sessions. The median is 45 seconds. Don't assume users want to set up long-running autonomous workflows. Build for quick, focused interactions.
- Trust is the product. Users are cautious. They want oversight. Rather than pushing for full autonomy, build tools that make human-AI collaboration seamless.
- Look beyond engineering. The fastest growth may not be in coding. It might be in back-office automation, marketing, sales—anywhere there are routine tasks that consume time.
- Clear policies matter. If you're building on platform APIs, understand the terms of service. Changes can upend your business overnight.
The Honest Assessment
The agent hype cycle has created expectations that don't match reality. Yes, the technology is impressive. Yes, capabilities are improving rapidly.
But the data suggests a more nuanced picture. Agents aren't replacing humans anytime soon. They're becoming tools that humans use—often briefly, always with oversight—to get specific tasks done.
The companies that succeed won't be the ones building the most autonomous agents. They'll be the ones that understand how humans actually want to work with AI.
That's a harder problem than it sounds. But it's also a more valuable one to solve.