
AI Agent Safety: What You Should and Shouldn't Automate

By TLDL · Topic: AI Agents

The line between helpful automation and dangerous automation. What solo founders need to know before handing over sensitive tasks to AI agents.

There's a spectrum of tasks you can hand to an AI agent. On one end: harmless. On the other end: career-ending mistakes.

Here's how to think about where the line is.

The risk framework

Every task falls into one of four quadrants:

                 Low Impact              High Impact
Reversible       ✅ Safe to automate      ⚠️ Review before commit
Irreversible     ⚠️ Document heavily      ❌ Never automate
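
If you're building tooling around agents, you can encode the framework directly. Here's a minimal sketch — the function and enum names are mine, not any standard API — that maps a task's reversibility and impact onto the quadrants above:

```python
from enum import Enum

class Action(Enum):
    AUTOMATE = "✅ Safe to automate"
    REVIEW = "⚠️ Review before commit"
    DOCUMENT = "⚠️ Document heavily"
    NEVER = "❌ Never automate"

def classify(reversible: bool, high_impact: bool) -> Action:
    """Map a task onto the four-quadrant risk framework."""
    if reversible:
        return Action.REVIEW if high_impact else Action.AUTOMATE
    return Action.NEVER if high_impact else Action.DOCUMENT

# Drafting a blog post: reversible, low impact -> safe to automate.
print(classify(reversible=True, high_impact=False).value)
# Approving a wire transfer: irreversible, high impact -> never.
print(classify(reversible=False, high_impact=True).value)
```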

Safe to automate (green)

  • Research: Summarizing articles, gathering competitive intel, compiling lists
  • Drafting: First drafts of emails, blog posts, documentation
  • Formatting: Converting between formats, cleaning data, organizing files
  • Scheduling: Finding meeting times, sending calendar invites (with human approval)

Review before commit (yellow)

  • Outbound communication: Emails to customers, social posts, press releases
  • Code changes: AI can write code, but you should review before deploying
  • Financial calculations: AI can help analyze, but verify before acting
  • Hiring decisions: AI can screen, but humans should decide
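
All four bullets share one pattern: put a hard gate between the AI's draft and the commit. Here's a minimal sketch of that gate for email, where `generate_draft` and `send_email` are placeholders you'd wire to your own model and email provider:

```python
def generate_draft(prompt: str) -> str:
    # Placeholder: call your LLM of choice here.
    return f"Hi! Here's a draft reply about: {prompt}"

def send_email(to: str, body: str) -> None:
    # Placeholder: call your email provider's API here.
    print(f"Sent to {to}.")

def draft_then_approve(to: str, prompt: str) -> None:
    """The AI drafts; a human must explicitly approve before anything sends."""
    draft = generate_draft(prompt)
    print(f"--- DRAFT for {to} ---\n{draft}\n----------------------")
    if input("Send this? (yes/no): ").strip().lower() == "yes":
        send_email(to, draft)
    else:
        print("Discarded. Nothing was sent.")

draft_then_approve("customer@example.com", "refund request #4312")
```

The point is that "yes" has to come from a human keyboard. No flag, no config option, no way for the agent to approve itself.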

Document heavily (orange)

  • Access to sensitive systems: AI with API keys to payment systems, cloud infrastructure
  • Legal documents: Contracts, NDAs, compliance-related work
  • Anything with PII: Customer data, employee records, medical information

Never automate (red)

  • Firing someone: No AI should ever deliver that news
  • Legal representation: Court filings, legal strategy, patent applications
  • Financial transactions: Moving money, approving wire transfers
  • Medical decisions: Anything touching health, safety, or life

The real-world risks

Risk #1: Hallucinations

AI makes things up. This is well-known but easy to forget when you're tired.

What happens: AI writes a blog post citing a study that doesn't exist. You publish it. Readers call you out.

Mitigation: Always fact-check citations. Use tools that cite sources (like Perplexity).
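
If you want a cheap automated first pass, you can at least check that every cited URL resolves. A live link doesn't prove the study says what the AI claims, and the URLs below are placeholders, but this catches the most obvious fabrications (assumes the third-party requests library):

```python
import requests  # third-party: pip install requests

def check_citations(urls: list[str]) -> None:
    """First-pass sanity check: does each cited URL even resolve?"""
    for url in urls:
        try:
            status = requests.head(url, timeout=5, allow_redirects=True).status_code
            flag = "ok" if status < 400 else f"SUSPECT ({status})"
        except requests.RequestException as err:
            flag = f"SUSPECT ({type(err).__name__})"
        print(f"{flag}: {url}")

# Placeholder URLs -- swap in the citations from your AI-written draft.
check_citations(["https://example.com/some-study", "https://example.com/made-up"])
```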

Risk #2: Data leaks

AI agents can retain what you tell them. Depending on the provider's terms, some of that data may be used for training, and some of it might be exposed.

What happens: You paste customer data into an AI to "analyze it." That data may end up in the provider's logs or a future model's training set.

Mitigation: Use AI tools with enterprise privacy options. Don't paste sensitive data into public AI tools.
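
A third layer: scrub the obvious identifiers before anything leaves your machine. Here's a minimal sketch with two regex patterns. Notice it still leaks the name "Jane" — which is exactly the point: real PII scrubbing needs far more than this, so treat it as illustration, not protection.

```python
import re

# Two crude patterns for illustration; real PII detection needs far more.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str) -> str:
    """Mask obvious identifiers before the text leaves your machine."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(redact("Contact Jane at jane@acme.com or 555-867-5309 about her order."))
# Contact Jane at [EMAIL] or [PHONE] about her order.
```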

Risk #3: Overconfidence

AI is very persuasive. It will tell you something with total confidence even when it's wrong.

What happens: AI gives you bad advice on a contract. It sounds confident, so you act on it. The contract has a loophole.

Mitigation: Verify anything important with a human expert.

Risk #4: Dependency

The more you use AI, the less you develop your own skills.

What happens: You stop writing because AI "does it better." Eventually, you can't function without it.

Mitigation: Use AI to amplify your skills, not replace them. Keep practicing the core skills yourself.

A practical rule

If you wouldn't do it while drunk, don't do it with AI.

That means:

  • Don't send AI-written emails when emotional
  • Don't let AI make decisions when tired
  • Don't trust AI more than you'd trust an intern

What to tell your team

If you're working with others:

  1. Never give AI access to credentials it shouldn't have
  2. Always review before sending external communications
  3. Document what AI did so there's an audit trail (see the sketch after this list)
  4. Escalate anything involving money, legal, or people to a human
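
Rule 3 is the easiest to operationalize: append every AI action to a log file nobody edits. A minimal sketch, with field names that are my own convention rather than any standard:

```python
import json
from datetime import datetime, timezone

def log_ai_action(agent: str, action: str, reviewed_by: str | None,
                  path: str = "ai_audit.jsonl") -> None:
    """Append one AI action to a JSON-lines audit trail."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "reviewed_by": reviewed_by,  # None means no human looked at it
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_action("draft-bot", "drafted reply to customer #4312", reviewed_by="sam")
```

Every entry with reviewed_by set to None is a task the AI did unsupervised. If that list grows, you've drifted down the risk framework without deciding to.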

The bottom line

AI agents are powerful. But power without judgment is dangerous.

Use AI for the cognitive load — research, drafting, analysis. Keep the judgment for yourself.

The line is clear: AI handles the thinking. You handle the deciding.
