Library

Episode Summaries

A growing library of AI podcast episode summaries with key takeaways.

Autoresearch, Agent Loops and the Future of Work

The episode centers on Andrej Karpathy’s AutoResearch project as an example of an emerging “agentic loop” work primitive where AI agents run fast, bounded experiments autonomously while humans define the goals and evaluation. It explains how AutoResearch hands the ML iteration loop to agents by running many five-minute training runs, scoring each on a validation metric (val BPP), and keeping only improvements — turning research into a game of rapid scored trials. The hosts generalize the pattern beyond ML to domains like product, sales, and finance, arguing that the human role shifts to writing strategy documents and defining what “better” means. The discussion highlights the limits and prerequisites for agentic loops (objective metrics, cheap/fast iterations, externalized state) and points to multi-agent collaboration and agent-native memory as the next big technical challenge.
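The run–score–keep pattern described here can be sketched in a few lines. This is an illustration of the idea only: the scoring and proposal functions below are toy stand-ins, not AutoResearch's actual code, and the config names are hypothetical.

```python
import random

def run_experiment(config: dict) -> float:
    """Stand-in for one short, bounded training run; returns a score
    where lower is better (e.g. a validation loss)."""
    return sum(abs(v - 0.5) for v in config.values()) + random.uniform(0, 0.05)

def propose_change(config: dict) -> dict:
    """Stand-in for the agent proposing a small tweak to the best config so far."""
    tweaked = dict(config)
    key = random.choice(list(tweaked))
    tweaked[key] += random.uniform(-0.1, 0.1)
    return tweaked

def agentic_loop(config: dict, trials: int = 50) -> tuple[dict, float]:
    """Run many cheap trials, score each, and keep only improvements."""
    best, best_score = config, run_experiment(config)
    for _ in range(trials):
        candidate = propose_change(best)
        score = run_experiment(candidate)
        if score < best_score:  # keep-if-better: discard anything that didn't improve
            best, best_score = candidate, score
    return best, best_score

best, score = agentic_loop({"lr": 0.9, "dropout": 0.1})
```

The human contribution in this pattern is everything outside the loop: choosing the search space and defining the score.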

Mar 9, 2026 · Episode ID: 18006

AI policy and the battle for computing power

The episode examines how AI is reshaping global power by shifting the center of progress into private-sector firms and making computing power — not just data — the primary driver of modern AI capability. It highlights the geopolitical concentration of advanced chip manufacturing (notably TSMC and ASML) as a strategic vulnerability and lever between states, especially the U.S. and China. The conversation covers policy trade-offs for democracies: building compute advantages, coordinating international safety norms, and preserving democratic values while adopting and operating AI. The episode also explores immediate security implications, notably how AI accelerates cyber offense and defense by discovering vulnerabilities at scale, and debates around export controls and military uses of advanced AI.

Mar 9, 2026 · Episode ID: 18005

Andrew Huberman: Peptides, Sleep Tech, and the End of Obesity

The episode explores the rapid consumerization of health catalyzed by the COVID-19 pandemic and how that shift paved the way for mainstream use of peptides and GLP-1 drugs. Huberman and Daisy Wolf map the evolving peptide/GLP-1 ecosystem — from regulated pharmaceuticals to compounding pharmacies and gray/black markets — and highlight safety, purity, and distribution tradeoffs. They contrast the current era of 'reading' biology (wearables, sensors, CGMs, sleep trackers) with an emergent era of 'writing' biology using targeted neurotechnologies, localized cooling, and pharmacology to actively modulate physiology. The conversation emphasizes sleep and cortisol as near-term high-impact targets for interventions and outlines practical noninvasive routes (eyes, ears, vagus/superficial nerves) for cognitive-state modulation.

Mar 9, 2026 · Episode ID: 18003

The most successful AI company you’ve never heard of | Qasar Younis

Qasar Younis discusses Applied Intuition’s mission to add intelligence to heavy machinery and vehicles, arguing that the next wave of impactful AI will be in physical industries like farming, mining, construction, and trucking rather than consumer software. He contrasts technical approaches to autonomy (Waymo-style high-sensor/HD-map vs. Tesla-style generalization) and explains why autonomy will meaningfully reduce injuries and improve productivity. Qasar explains Applied Intuition’s deliberate low-profile build strategy, company values (speed, follow-through, customer-first), and how to create a culture where the best idea wins. He also covers broader themes: how to reduce fear of AI by understanding its limits, why many China vs. U.S. comparisons are misleading, and the importance of reading widely to develop judgment and product taste.

Mar 8, 2026 · Episode ID: 17997

10 OpenClaw Lessons for Building Agent Teams

The episode distills ten practical lessons from early OpenClaw users about building agent teams, emphasizing deliberate design around task separation, coordination, security, memory, and cost management. Guests report that single-purpose agents outperform monolithic multitask agents, and that simple file-based handoffs (Markdown/JSON) are often a robust coordination mechanism. Security practices frame agents as separate employees with isolated environments, scoped credentials, and limited access to sensitive systems. The discussion also highlights the need for explicit memory systems (agents start stateless), and the importance of right-sizing model choice to reserve expensive models for high-value tasks while using cheaper models for monitoring and scheduling.
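The file-based handoff pattern the guests describe can be sketched as two single-purpose agents sharing a directory. The file names and task schema below are hypothetical, chosen only to show the mechanism:

```python
import json
import tempfile
from pathlib import Path

HANDOFF = Path(tempfile.mkdtemp())  # shared directory both agents can read/write

def research_agent() -> Path:
    """First agent writes its output as a JSON file; no direct coupling to the reader."""
    task = {"id": 1, "summary": "Q3 churn is driven by onboarding drop-off", "status": "ready"}
    path = HANDOFF / f"task-{task['id']}.json"
    path.write_text(json.dumps(task, indent=2))
    return path

def writer_agent(path: Path) -> dict:
    """Second agent picks the file up, does its step, and records completion."""
    task = json.loads(path.read_text())
    task["draft"] = f"Report: {task['summary']}."
    task["status"] = "done"
    path.write_text(json.dumps(task, indent=2))
    return task

result = writer_agent(research_agent())
```

Because the handoff is just a file, either agent can be restarted, replaced, or inspected by a human without breaking the pipeline, which is what makes this simple mechanism robust.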

Mar 8, 2026 · Episode ID: 17996

20VC: Why the SaaS Apocalypse is BS | Why China Will Win the AI War | Why 50% of VCs Should Not Exist and are Tourists | Why Stock-Based Comp is the Hidden Sin of the Valley with Mitchell Green, Lead Edge Capital

The episode is a wide-ranging discussion of modern venture and growth investing, arguing that the so-called SaaS apocalypse is overblown and many incumbent enterprise software vendors will adapt rather than vanish. Mitchell Green warns that a large share of investors add little or negative value, and growth funds must be disciplined about fund math, exits, and selling. He calls out stock-based compensation dilution as an under-discussed structural problem that distorts startup economics and public/private valuation gaps. The conversation also highlights China — and ByteDance specifically — as a material AI competitor, and emphasizes Gross Dollar Retention as the single most predictive SaaS health metric for investors.

Mar 7, 2026 · Episode ID: 17994

Wisdom of the $TAO: the future is decentralized AI

The episode explores Bittensor, a blockchain-native approach that redirects crypto mining toward useful AI workloads by subsidizing and incentivizing decentralized model inference and development via the TAO token and permissionless subnets. Guests Ala Shaabana (Crucible Labs/Bittensor) and Mark Jeffrey (Stillcore Capital) discuss real-world subnet projects (Ridges, Targon, Hippius), showcasing competitive model performance, novel token economics, and practical applications like video compression and developer copilots. The conversation highlights onboarding and UX improvements (Crucible wallet, allocators, OpenClaw agents) that lower friction for miners and users, and demos how agents can mine or consume Bittensor services for arbitrage or production tasks. The hosts also touch on governance and exchange-listing complexities for subnet tokens, philosophical concerns about machine-made art (citing Bob Dylan), and some consumer hardware recommendations.

Mar 6, 2026 · Episode ID: 17993

GPT 5.4 First Test Results

The episode reviews OpenAI's GPT-5.4 release, emphasizing its positioning as a frontier model tuned for professional work through combined advances in reasoning, coding (Codex lineage), and agent/tool workflows. Key upgrades include a 1-million-token context window and a tool-search mechanism that materially reduces token usage while maintaining accuracy. Benchmarks show large performance gains on professional knowledge tasks and coding benchmarks, with notable wins on GDPval and OSWorld Verified. In hands-on testing the host found GPT-5.4 fast and effective—especially when paired with Codex for CLI-driven workflows—but also flagged practical UX and behavior problems like verbosity, scope creep, and fragile front-end outputs.

Mar 6, 2026 · Episode ID: 17992

How cosplaying Ancient Rome led to the scientific revolution

The episode traces how Renaissance engagement with ancient Rome—through education, libraries, and political mimicry—created intellectual soil that eventually enabled the scientific revolution. It emphasizes that recovery of classical texts, cheaper writing materials, and evolving reading practices produced new habits of comparison, annotation, and institutional sharing that made collective knowledge growth possible. Economic and logistical realities (paper costs, distribution hubs, merchant networks) shaped which inventions succeeded: Gutenberg’s press took off only once it had access to markets and shipping hubs like Venice. The conversation also links political innovations in city-states (Florence’s merchant republic and Medici influence) to durable institutions and shows how printing set off a prolonged information revolution (books → pamphlets → newspapers) rather than a single event.

Mar 6, 2026 · Episode ID: 17991

AI Is Officially Political

The episode argues that AI has moved from a technical topic into explicit politics as frontier firms like Anthropic and OpenAI are drawn into national security and culture-war debates, highlighted by the Anthropic–Pentagon dispute and Dario Amodei’s leaked memo. It covers the commercial intensification of the market — reported ARR figures put OpenAI and Anthropic in direct revenue competition — and how that economic power reshapes strategic influence. The show also spotlights product and technical shifts accelerating adoption: OpenClaw-like agent platforms catalyzing an "agent era," Google’s NotebookLM producing cinematic multimodal outputs, and emerging certification standards (AIUC1) enabling enterprise trust. Underlying the news are contested choices about safety, surveillance, procurement, and whether design decisions are principled or politically strategic.

Mar 6, 2026 · Episode ID: 17986

Ben Thompson: Anthropic, the Pentagon, and the Limits of Private Power

The episode centers on the tension between private AI firms and state power, using the Anthropic–Department of Defense standoff to show how theoretical governance concerns have become concrete. Ben Thompson argues that powerful general-purpose AI creates geopolitical and coercive dynamics—analogous in consequence to nuclear proliferation even if not in mechanics—so governments will assert control when they perceive strategic risk. He contends that the economic realities of frontier models (very large capex and the need for broad commercial markets) make purely government-led development impractical, so private companies will continue to drive capability improvements. Finally, Thompson emphasizes that existing laws are poorly suited to AI’s scale and dynamics and recommends targeted new legislation to set clear rules instead of leaving decisions to firms or ad-hoc government action.

Mar 5, 2026 · Episode ID: 17984

Prediction markets want to be the news

The episode examines how prediction-market platforms (Polymarket, Kalshi and others) are increasingly framing themselves as sources of real-time news and information rather than gambling sites, a positioning that both sidesteps regulation and attracts publishers. Guests and reporting highlight concrete harms: insider trading, rumor amplification, and markets turning unverified moves into news events. Self-regulation on these platforms is limited, enforcement by authorities has been weak or reactive, and some industry actors treat information asymmetries as features rather than problems. Broader cultural and economic forces—stagnant incomes, crypto and gambling normalization, and attention-driven product design—are driving more people toward speculative, gambling-like behavior on these apps.

Mar 5, 2026 · Episode ID: 17983

Is Anthropic Making the Biggest Mistake in AI History | E2258

The episode focuses on the rapid rise of open-source agent frameworks (notably OpenClaw) and how they’re being productized into safer, user-friendly SaaS wrappers for non-technical users. Guests discuss privacy-first AI alternatives to big lab models, token-based pricing to reduce friction for inference, and the security and governance concerns driving enterprise AI adoption. Demos and products (OpenClaw Studio, VeniceAI, SiteLine) illustrate agent orchestration, analytics for agent-driven traffic, and mechanisms for giving users inexpensive or free inference via token models. Controversial points include platform defaults favoring specific models, Anthropic’s public policy stances affecting government relationships, and the tradeoff between fully open agents versus intentionally restricted enterprise SaaS variants. The conversation paints agentic commerce and the ‘agentic web’ as imminent opportunities that require new analytics, controls, and UX for mass adoption.

Mar 5, 2026 · Episode ID: 17981

The Big Questions That Will Decide the Consumer AI War

The episode examines how the consumer AI race is shifting from raw model benchmarks to broader product and business questions that will decide winners. Topics include OpenAI's internal efforts to improve reliability (reportedly building a GitHub alternative), GPT-5.3 Instant's emphasis on speed and conversational 'vibes,' and Anthropic's rapid revenue surge narrowing the gap with OpenAI. The conversation highlights infrastructural and commercial inflection points: Stripe's token-based billing for AI usage, potential ad models in chatbots, and the role of multimodality and agentic capabilities in driving consumer adoption. The host frames several open questions—monetization, switching costs, multimodal experiences, agents, and regulation—that will shape which platforms lock in users and ecosystems.

Mar 4, 2026 · Episode ID: 17980

Ambience CEO Nikhil Buduma on AI in Clinical Workflows

The episode explores how AI—particularly foundation models and downstream agentic systems—can reshape clinical workflows to improve clinician efficiency, patient experience, and hospital margins. Nikhil Buduma explains why Ambience began by operating a medical practice to learn real-world EHR, workflow, and financial pain points before building a platform. The conversation highlights that the hardest work is integration: extracting high-fidelity context from heterogeneous EHRs, preserving decision traces, and defining quality for open-ended clinical tasks. They discuss market dynamics (many entrants in mid-market; defensible moat at large academic centers), the necessity of measurable ROI for CFO adoption, and the ongoing debate over generalist versus domain-specific models and the proper human/AI balance in clinical decision-making.

Mar 4, 2026 · Episode ID: 17979

How the OpenClaw foundation bullet-proofed its future (w/Dave Morin) | E2257

The episode covers the creation of the OpenClaw Foundation and why a formal nonprofit was needed to protect and professionalize the rapidly growing open-source agent project while preserving Peter Steinberger's technical authority. Dave Morin explains OpenClaw’s product primitives — local persistent memory files, shareable skills, and a ‘heartbeat’ scheduler — and how they enable proactive, personal agents running on users’ machines. The conversation explores ecosystem and startup opportunities: agent orchestration, secure hosting/sandboxes, UX front-ends (e.g., Runtools), and vertical agent products that capture proprietary data. Morin and the hosts also debate trade-offs between local agent approaches (OpenClaw) and cheaper browser-plugin/cloud workflows (e.g., Claude/Anthropic), touching on costs, security, and ethical questions around government/military use and domestic surveillance.
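The memory-file-plus-heartbeat combination can be reduced to a small sketch. This is an illustration of the primitives as described, not OpenClaw's actual implementation; the file layout and state fields are invented for the example:

```python
import json
import tempfile
import time
from pathlib import Path

MEMORY = Path(tempfile.mkdtemp()) / "memory.json"  # persistent local memory file

def load_memory() -> dict:
    if MEMORY.exists():
        return json.loads(MEMORY.read_text())
    return {"ticks": 0, "notes": []}

def save_memory(mem: dict) -> None:
    MEMORY.write_text(json.dumps(mem, indent=2))

def heartbeat_tick(mem: dict) -> dict:
    """One scheduled wake-up: check state, optionally act, record what happened."""
    mem["ticks"] += 1
    mem["notes"].append(f"tick {mem['ticks']}: nothing needed attention")
    return mem

def run_heartbeat(ticks: int, interval_s: float = 0.0) -> dict:
    mem = load_memory()
    for _ in range(ticks):
        mem = heartbeat_tick(mem)
        save_memory(mem)  # state lives on disk, so it survives restarts
        time.sleep(interval_s)
    return mem

state = run_heartbeat(3)
```

Persisting state to disk between wake-ups is what makes the agent feel proactive and personal: a fresh process can resume exactly where the last one left off.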

Mar 3, 2026 · Episode ID: 17967

The Month AI Woke Up

February 2026 marked a pivot where AI moved from an insider phenomenon to a public systemic shock: agentic AI capabilities, enterprise standards, market reactions, and geopolitical friction all accelerated in a short window. Reporting that Anthropic's Claude was used for intelligence analysis during US/Israeli strikes amplified concerns about the military use and supply-chain risk of large models. OpenAI closed a record $110 billion funding round, underscoring massive capital concentration and signaling broad industry bets on agentic workflows. At the same time, new certifications (e.g., AIUC1) and third-party verifiable safety stacks began emerging to unlock enterprise adoption, even as Wall Street repriced SaaS and content companies facing AI disruption.

Mar 2, 2026 · Episode ID: 17965

Formula 1

The episode traces Formula 1's transformation from dangerous, fragmented European road racing into a tightly engineered, globally commercialized sport. It emphasizes how engineering (aerodynamics, power units, and materials) and team operations increasingly determine on-track success, while television rights, sponsorship (notably tobacco), and Bernie Ecclestone's centralization turned F1 into a scalable media business. The hosts cover major technical inflection points — Chapman-era lightweight design, ground-effect aerodynamics, and the 2014 hybrid powertrains — plus the long arc of safety improvements that made modern F1 survivable. Finally, it chronicles the shift in ownership and strategy under Liberty Media and highlights cultural shifts in the paddock, team economics, and fan experience.

Mar 2, 2026 · Episode ID: 17966

20VC: Monday.com CEO on Is SaaS Dead: Will Everything Be Vibe Coded | Will Systems of Record Become Valueless Databases in an Agentic World | Will LLMs Own the Value in the Application Layer with Eran Zinman

The episode features Monday.com CEO Eran Zinman discussing six existential threats to the company amid a harsh public-market repricing of SaaS. Central to the conversation is how AI — specifically LLMs and agents — will shift software from tracking work to doing work, forcing product, pricing and GTM changes. Vibe-coding (no-/low-code UIs) is acknowledged as powerful for interface creation but unlikely to displace enterprise-grade platforms that require integration, workflow depth and data plumbing. Monday is repositioning to become an orchestration layer between humans and agents while doubling down on vertical bets (CRM and Service) to capture new markets and change organizational behavior. The discussion frames public valuation declines as uncertainty about who will successfully adapt to these technological shifts rather than a pure business downturn.

Mar 2, 2026 · Episode ID: 17960

Full Tutorial: Connect Claude Code to Google, Slack, and Reddit in 40 Min (Skills + MCPs)

The episode demos how to connect Claude Code to workplace apps—Google Workspace, Linear, Slack, and Reddit—so you can run common PM tasks from the terminal without opening each app. Carl walks through concrete workflows: prepping meetings via Google MCPs, turning PRDs into Linear tickets, posting Slack updates, monitoring subreddits, and a daily-standup command that aggregates data from multiple tools. He explains folder/OS organization for reusable Claude Code workflows and shows how to promote file-based scripts into skills/commands. The episode also introduces a "consult-the-council" multi-model skill that queries several LLMs (ChatGPT, Gemini, Grok) to improve spec quality by aggregating diverse model feedback.
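The "consult-the-council" idea reduces to fan-out-and-tally. A minimal sketch with stubbed model calls (a real skill would replace `ask_model` with actual API requests; the canned responses are invented for illustration):

```python
def ask_model(model: str, prompt: str) -> str:
    """Stub standing in for a real API call to each model."""
    canned = {
        "chatgpt": "Clarify the success metrics.",
        "gemini": "Add an edge case for offline users.",
        "grok": "Clarify the success metrics.",
    }
    return canned[model]

def consult_the_council(prompt: str, models: list[str]) -> list[tuple[str, int]]:
    """Ask every model for feedback and tally repeated suggestions,
    so advice that multiple models agree on floats to the top."""
    votes: dict[str, int] = {}
    for model in models:
        suggestion = ask_model(model, prompt)
        votes[suggestion] = votes.get(suggestion, 0) + 1
    return sorted(votes.items(), key=lambda kv: -kv[1])

ranked = consult_the_council("Review this PRD", ["chatgpt", "gemini", "grok"])
```

Suggestions that several models independently raise land at the head of the list, which is the point of consulting a council rather than a single model.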

Mar 1, 2026 · Episode ID: 17956

The design process is dead. Here’s what’s replacing it. | Jenny Wen (head of design at Claude)

Jenny Wen argues that the traditional linear design process (discover → mock → iterate) is becoming obsolete as AI tooling and faster engineering workflows force designers to be more embedded in implementation and to operate on shorter vision cycles. At Anthropic, designers split time between implementation support—pairing with engineers, polishing features in code/IDE—and setting product vision through dense prototypes and short (3–6 month) bets. Jenny expects AI to rapidly improve at taste and idea generation, but maintains humans will retain accountability for what gets built and for higher-order judgment. Hiring priorities and team culture are shifting: the most valuable designers are strong generalists, deep specialists, and adaptable craft-focused new grads, while leaders who do small hands-on acts build trust and psychological safety enables high standards.

Mar 1, 2026 · Episode ID: 17955

#492 – Rick Beato: Greatest Guitarists of All Time, History & Future of Music

Lex Fridman and Rick Beato discuss music performance, pedagogy, history, and the evolving music industry. They contrast perfect pitch and practical relative-pitch training, emphasize learning by ear and daily short practice sessions, and recount stories of musicianship from Joe Pass to the Beatles. The conversation covers how sonic identity can be recognized from small cues, the artistic value of spontaneity, and the role of studio work in creative breakthroughs. They also debate modern challenges: gear choices for tone, AI-generated music and artifacts, and fairness of Content ID/monetization practices on streaming platforms.

Mar 1, 2026 · Episode ID: 17954

Who Controls AI?

The episode examines the rapid escalation of a political and commercial standoff after Anthropic refused to remove usage "red lines" that prohibit its models from powering mass domestic surveillance and fully autonomous weapons. That refusal prompted public rebukes from the White House and Department of Defense, and a presidential directive to federal agencies to stop using Anthropic technology, crystallizing a fight over who controls high-impact AI. OpenAI quickly negotiated a separate deal to deploy models inside DOD classified networks while pledging a "safety stack" and engineering support, framing an alternative path for government access. The incident surfaces broader governance questions — whether private companies should be able to restrict government uses of AI, how governments should respond, and what checks and balances should govern critical AI infrastructure.

Feb 28, 2026 · Episode ID: 17953

20VC: Why Cursor is Dead | An AI Tsunami is Coming & You Need to Prepare | Systems of Record Become Valueless Databases with Agents | Is This The End of Tech Private Equity with Jerry Murdock, Co-Founder of Insight Partners

Jerry Murdock argues that we are at the start of an AI "tsunami" driven by autonomous agents rather than incremental model improvements, which will upend how software is built, bought, and run. Incumbent developer tools and systems of record face obsolescence unless they become agent-friendly or serve as orchestration/integration layers. Open-source models, specialized cheaper chips (an "A6" class), and agent-oriented stacks will redistribute where workloads run and which companies capture value. This shift will alter software pricing to consumption/agent-based models, materially disrupt white-collar labor markets, and prompt political responses such as minimum guaranteed income. Jerry also reflects on Insight Partners' durability during downturns, offers personal advice on parenting, and speculates that AI agents could significantly extend human longevity in the next two decades.

Feb 28, 2026 · Episode ID: 17950

Are 40% Staff Cuts the New AI Normal?

The episode examines whether Block’s announcement of ~40% staff cuts is a new AI-driven model for corporate downsizing or simply a correction of COVID-era over-hiring framed as an AI story. It surveys recent AI industry moves — Google’s NanoBanana 2 (a faster, cheaper Gemini 3.1–backed image model), strong user growth at Anthropic’s Claude, Meta’s pullback on custom chips, and Microsoft previewing Copilot Tasks — to show shifting priorities toward speed, cost, and deployability over peak model quality. Hosts debate the incentives created by markets rewarding headline AI-driven layoffs and whether that could produce copycat behavior or meaningful productivity gains. The conversation highlights both concrete productivity wins (e.g., reported developer hours saved) and the risk of “AI laundering” where managerial mistakes are relabeled as AI efficiencies.

Feb 28, 2026 · Episode ID: 17949

AI Agents and the Future of Global Trade with Alibaba’s Kuo Zhang - Ep. 291

The episode features Alibaba.com president Kuo Zhang explaining how AI agents such as Axio are transforming B2B global trade by automating complex sourcing workflows and changing how buyers discover suppliers. Axio is described as an AI-native agent that accepts natural-language and multimodal inputs, decomposes requests into tasks, matches products and suppliers, and can run multi-step sourcing processes that used to take weeks in minutes. The conversation covers practical deployment challenges — including where to draw boundaries on agent autonomy, the need for human-in-the-loop judgments, and combining world models with domain-specific models and platform compliance. Zhang also discusses the broader economic potential: widespread agent adoption could materially increase global trade efficiency and possibly raise aggregate trade value by around 10% according to Alibaba’s estimates.

Feb 27, 2026 · Episode ID: 17946

AI Trends 2026: OpenClaw Agents, Reasoning LLMs, and More with Sebastian Raschka - #762

The episode surveys how LLM research has shifted from raw pretraining scale to post-training and inference-time techniques that boost reasoning and practical performance. Sebastian Raschka emphasizes verifiable-reward training and decoding/ensemble strategies (self-consistency, self-refinement) as central drivers of recent gains in math and coding. The discussion also covers agentic workflows and tooling — local agents, editor integrations, and plugins — as crucial for real-world adoption, while noting reliability and failure propagation remain constraints. Architectural trends (mixture-of-experts, attention efficiency, long-context models) and the limits of fully automatic per-user continual learning round out the conversation, along with practical advice for developers and a preview of Raschka’s book on building reasoning models.
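Self-consistency, one of the decoding strategies mentioned, samples several independent reasoning paths and majority-votes their final answers. A toy sketch, where the sampler simulates an LLM called at nonzero temperature (the answer distribution is invented for illustration):

```python
import random
from collections import Counter

def sample_answer(question: str) -> str:
    """Stand-in for one sampled chain of thought ending in a final answer;
    a real setup would call an LLM with temperature > 0."""
    return random.choice(["42", "42", "42", "41", "40"])  # mostly right, sometimes wrong

def self_consistency(question: str, n_samples: int = 25) -> str:
    """Majority vote over independently sampled answers."""
    answers = [sample_answer(question) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

answer = self_consistency("What is 6 * 7?")
```

The technique trades extra inference compute for reliability: individual samples may go astray, but the modal answer across many samples is far more likely to be correct, which is why it shows up repeatedly in recent math and coding gains.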

Feb 26, 2026 · Episode ID: 17942

The OpenClaw-ification of AI

The episode argues that recent product launches from Anthropic, Perplexity, Notion and others represent an industry-wide shift toward persistent, agentic workflows rather than mere feature copycats of projects like OpenClaw. The host discusses four emerging primitives—persistent work (always-on agents), multimodal orchestration, scheduled autonomy (cron/heartbeat-driven tasks), and cross-device presence—that are becoming foundational to how agents are productized. The episode also covers major industry developments: Anthropic's escalating dispute with the Pentagon over military use restrictions, Nvidia's blowout earnings signaling insatiable AI compute demand, and the broader business and ethical questions around agent-driven automation. Overall, the conversation frames these moves as the start of a new agent era with significant product, infrastructure, and policy implications.

Feb 26, 2026 · Episode ID: 17943

How Capital is Powering the AI Infrastructure Buildout with Magnetar Capital Managing Director Neil Tiwari

The episode examines how creative capital structures are enabling the rapid buildout of AI infrastructure, with Magnetar Capital’s Neil Tiwari explaining non‑equity financing strategies that scale GPU clouds without forcing excessive dilution. They quantify the scale of AI compute CapEx—projected in the high hundreds of billions by 2026 and into the trillions thereafter—and describe deal mechanics that prioritize contracted cash flows over GPUs as primary collateral. The conversation shifts to practical bottlenecks: power distribution, energy storage, and physical construction (steel, transformers, electricians) are the immediate constraints beyond chip supply. Finally, the hosts discuss differences between training and inference—why inference is more distributed and latency/memory sensitive—and what that means for future cloud architectures and financing models.

Feb 26, 2026 · Episode ID: 17936

20VC: Anthropic Wipes Billions Off Markets | Citrini Research: The Ultimate Breakdown: Agents, "Ghost GDP", Consumer Spend etc. | Figma Earnings Beat & Four Public Stocks to Buy | Jack Altman Joins Benchmark

The episode dissects recent AI-driven market moves—most prominently Anthropic’s security product announcement and its outsized impact on cybersecurity valuations—and places those events in the wider context of agent-based AI, incumbent SaaS risk, and macroeconomic implications. Hosts debate whether agents will commoditize B2B SaaS unless incumbents build and own the agent layer, noting that high-quality agents remain expensive and custom to create. They explore ‘ghost GDP’: productivity gains that boost corporate profits without broadly increasing consumer income, which could depress aggregate spending. The show also covers specific company news (OpenAI’s large spending plans, Figma’s strong earnings and AI defense) and forecasts painful consolidation in mid-market SaaS. Practical takeaways include where startups can win (narrow, vertical agents) and why many public companies haven’t yet shown agent-driven revenue growth.

Feb 26, 2026 · Episode ID: 17934

Behind the Scenes with an early OpenClaw contributor! | E2252

The episode goes behind the scenes of OpenClaw with early contributor Tyler Yust, exploring how OpenClaw enables long-running AI agents (replicants) that automate personal and business tasks. Guests discuss technical patterns that made agents practical in 2025—tool-calling, long-running tasks, and subagents—and demo a pocket-sized OpenClaw device built from off-the-shelf hardware. The conversation also covers trade-offs between cloud and local model hosting (privacy, latency, and Apple silicon performance), and debates around AI ethics and national security, using Anthropic’s tensions with government requests as an example. Finally, the show examines how agent-native workflows may compress or transform parts of the SaaS market and addresses interface futures like voice and the limits of non-invasive BCIs.

Feb 26, 2026 · Episode ID: 17933

SaaStr 843: Software Stocks Have Massively Crashed. Here's What Founders Need to Know.

The episode examines the current SaaS and VC landscape through the lens of AI adoption, product growth, and event economics. Jason Lemkin argues that merely adding AI features is insufficient—true AI companies must show re-accelerated growth driven by agents or integrations. He explains how AI agents are already replacing human capacity, driving revenue (including an agent that closed a $100K deal) and reshaping discoverability and vendor selection. The conversation also covers the high fixed costs of large conferences, why private equity is shying away from many legacy B2B businesses, and how low-friction “vibe-coding” tools are flooding niche markets with clone-ready apps.

Feb 25, 2026 · Episode ID: 17931

The Rise of the Anti-AI Movement

The episode examines the rising public skepticism toward AI, arguing that opposition is real, measurable, and manifesting in diverse, politically effective forms—from artist pushback and workplace anxiety to local fights over data centers. It decomposes the so-called "anti-AI movement" into distinct constituencies (safety-focused critics, capability skeptics, artists, community environmentalists, and labor concerns) and argues these groups are responding to specific, often solvable problems rather than ideological technophobia. The host emphasizes that industry rhetoric and tone matter: flippant comparisons or dismissive language can exacerbate mistrust and fuel backlash. The conversation links economic anxiety, social-media disillusionment, environmental/resource concerns (energy and water for data centers), and health/child-development worries to explain why resistance is broadening.

Feb 24, 2026 · Episode ID: 17930

Dan Sundheim - The Art of Public and Private Market Investing - [Invest Like the Best, EP.460]

Dan Sundheim walks through how he thinks about investing across public and private markets, the differentiated economics of large LLM businesses, and firm-level lessons from adversity. He contrasts current private-market opportunity sets with highly competitive public markets and explains why late-stage private deals can be more attractive today. On AI, he frames major LLM platforms as capital-intensive businesses with durable moats that blend characteristics of Netflix (front-loaded infrastructure) and Spotify (personalization and stickiness). He discusses implications for hyperscalers, portfolio construction changes after the GameStop episode, and why rigorous public research can both move markets and launch careers.

Feb 24, 2026 · Episode ID: 17926

Kill Your Startup’s Knowledge Chaos with OpenClaw (with Oliver Henry and Jeff Weisbein) | E2254

The episode explores how OpenClaw and agentic tooling are transforming startup workflows by automating tasks, sharing long-term memory across agents, and enabling cross-functional coordination. Guests Oliver Henry and Jeff Weisbein demo real-world uses—marketing automations, bug-fixing pipelines, and agentic coaching—and describe how skills extend agent capabilities. They discuss architectural choices like running agents locally to avoid platform restrictions, the role of a central oracle agent for company-wide visibility, and the discoverability/monetization challenges of a skills marketplace. The conversation also covers trade-offs around privacy (facial age-gating), employee visibility, and when to build custom skills versus adopting existing ones.

Feb 24, 2026 · Episode ID: 17924

The Perils of the AI Exponential

The episode examines rapid recent shifts in AI capability, commercial adoption, and market reaction, anchored by METR’s long-horizon benchmark results showing dramatic gains for models like Claude Opus 4.6 and GPT-5.3. It highlights Anthropic’s Claude Code as a revenue and product-development engine while unpacking why a security plugin announcement rattled cybersecurity stocks despite limited product overlap. The hosts dig into OpenAI’s aggressive revenue projections paired with sharply rising inference and training costs, stressing scalability and margin implications. The episode also questions how much of the benchmark’s jumps reflect true capability versus noise or saturation, and explores alarmist versus plausible economic disruption scenarios from long-horizon research notes.

Feb 23, 2026 · Episode ID: 17923

Ben Horowitz: RSI, Crypto as AI Money, & Classified Physics

The episode centers on the rapidly accelerating capabilities of AI, the political and practical limits of pausing its development, and the societal and economic consequences that follow. Panelists highlight that generative video/voice and recursive self-improvement are already crossing usability thresholds, creating both productivity gains and serious authenticity, copyright, and security risks. They argue that broad attempts to regulate AI risk looking like regulating mathematics or fundamental science and are therefore fraught, while targeted levers (e.g., export controls on hardware) are more plausible but imperfect. A striking theme is the emergence of autonomous AI agents that self-replicate and transact using crypto, implying a new, concentrated AI-driven economy that may widen the gap between capital owners and labor.

Feb 23, 2026 · Episode ID: 17921

20VC: Inside Coatue's $7BN Growth Fund: Why Price Matters Least | Why Mega Markets are the Most Important | How Mega Funds Can Still Do 5x Returns | How to Assess Durability of Revenue and Margins in AI with Lucas Swisher

The episode explores how AI is reshaping public and private SaaS valuations, forcing investors to reassess terminal value for recurring-revenue businesses and to watch retention and net-new ARR as leading signals. Lucas Swisher explains Coatue's growth fund approach: price is considered last, while market size, the ability to ride multiple S-curves (platform reinvention), and founder-market fit drive investment conviction. The conversation covers how mega growth funds can still achieve venture-like returns by concentrating capital in a few durable platform winners and doubling down over time. It also examines margins in an AI era, arguing early gross margins can be misleading while long-term operating margins and efficiency at scale matter more. Finally, the guests discuss investor skill — spotting inflection points via usage and retention curves — and controversial shifts in seed economics and the so-called 'kingmaking' narrative.

Feb 23, 2026 · Episode ID: 17919

From Data Models to Mind Models: Designing AI Memory at Scale

The episode explores agentic memory design — how to make AI agents remember, reason, and learn over time — distinguishing between short-term session memory (hot, low-latency traces) and long-term permanent stores (graph + vector layers). Vas Markovich emphasizes practical engineering trade-offs: latency, storage choices (Redis, Qdrant, LanceDB, Neo4j), multi-tenant isolation, and when simple approaches (MD files, Postgres, prompts) suffice versus when dedicated memory infrastructure is needed. He critiques naive strategies like timestamp decay and one-off summarization, advocating for neuroscience-inspired trace grouping, graph metrics (e.g., centrality), and RL-informed updating to manage relevance. The conversation also covers human-in-the-loop realities, permissioning, tooling patterns (explicit store/retrieve tool calls), and real-world use cases in pharma, logistics, and cybersecurity, finishing with Cognee’s roadmap for session/long-term stores and decision traces.
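The explicit store/retrieve tool-call pattern and the short-term/long-term split described above can be sketched in a few lines. This is a toy illustration under assumed names (`MemoryStore`, `store`, `retrieve`), not Cognee's actual API; a production system would back the tiers with Redis, Qdrant, or Neo4j rather than Python lists.

```python
import time

class MemoryStore:
    """Toy two-tier memory: a hot session buffer plus a permanent store."""

    def __init__(self, session_limit=5):
        self.session = []        # short-term, low-latency traces
        self.permanent = []      # long-term store (graph/vector in production)
        self.session_limit = session_limit

    def store(self, text, permanent=False):
        record = {"text": text, "ts": time.time()}
        if permanent:
            self.permanent.append(record)
        else:
            self.session.append(record)
            # Overflowing traces are promoted to the permanent store, a crude
            # stand-in for consolidation / trace grouping.
            while len(self.session) > self.session_limit:
                self.permanent.append(self.session.pop(0))

    def retrieve(self, query):
        # Naive keyword match in place of vector/graph retrieval.
        return [r["text"] for r in self.permanent + self.session
                if query.lower() in r["text"].lower()]

mem = MemoryStore(session_limit=2)
mem.store("User prefers concise answers", permanent=True)
mem.store("Discussing Q3 logistics report")
mem.store("Asked about pharma compliance")
mem.store("Follow-up on cybersecurity audit")   # overflows the session buffer
print(mem.retrieve("logistics"))  # → ['Discussing Q3 logistics report']
```

The point of the pattern is that memory operations are explicit tool calls the agent (and its operators) can observe and permission, rather than implicit context accumulation.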

Feb 22, 2026 · Episode ID: 17888

Full Tutorial: Use OpenClaw to Build a Business That Runs Itself in 35 Min | Nat Eliason

The episode walks through how Nat Eliason’s OpenClaw agent, Felix, was given $1,000 and autonomously built and launched revenue-generating products (website, PDF, Stripe integration, and social presence) in a matter of days. The conversation covers the agent architecture that made this possible: a three-layer memory system with nightly consolidation, multi-threaded chats, cron jobs and heartbeat monitors for long-running tasks, and delegation to Codex for heavy programming. Security practices are emphasized, particularly separating authenticated command channels from informational channels to prevent prompt-injection attacks. The hosts also discuss monetization strategies (including crypto rails) and the attendant risks of giving agents financial access and programmable tokens.
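The heartbeat monitors mentioned above for long-running tasks can be sketched as a simple watchdog: the task periodically records a timestamp, and a monitor flags it as stalled if the heartbeat goes silent past a timeout. The names here are illustrative, not the actual OpenClaw implementation.

```python
import time

class Heartbeat:
    """Watchdog for a long-running agent task."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_beat = time.monotonic()

    def beat(self):
        # The running task calls this periodically to prove liveness.
        self.last_beat = time.monotonic()

    def is_stalled(self):
        # A cron-style supervisor polls this to detect silent failures.
        return (time.monotonic() - self.last_beat) > self.timeout_s

hb = Heartbeat(timeout_s=0.05)
hb.beat()
print(hb.is_stalled())   # fresh heartbeat → False
time.sleep(0.1)
print(hb.is_stalled())   # silent past the timeout → True
```

In practice a supervisor would restart or escalate a stalled task rather than just report it.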

Feb 22, 2026 · Episode ID: 17886

Why AI Could Be Better for Plumbers than Programmers

The episode argues that AI's biggest near-term economic impact may be shifting operational leverage to small, practical business owners—like plumbers and HVAC techs—rather than simply replacing white-collar workers. Rather than pure headcount reduction, the highest-value AI use cases are those that make existing people dramatically more effective by removing administrative friction (scheduling, dispatch, estimates, customer communication). Agentic tools and OpenClaw-style ecosystems are highlighted as the mechanism that will productize AI for non-technical users and niche trades. The conversation also links rising interest in blue-collar careers among Gen Z to AI-driven uncertainty in corporate entry-level roles and to persistent infrastructure and manufacturing labor needs. Finally, the episode contrasts near-term software/agent-driven gains for trades with the longer-term, contested possibility of embodied robotics replacing some blue-collar tasks.

Feb 22, 2026 · Episode ID: 17885

Reddit Steps into AI-Powered Commerce

The episode examines Reddit's strategic push into AI-powered commerce, highlighting new features that surface shoppable product carousels and contextual recommendations drawn from community discussions. It covers Reddit's broader AI investments — from an AI-driven search experience and rapidly growing AI answers usage to targeted acquisitions that strengthen machine learning and ad tech capabilities. The hosts discuss monetization tactics, including embedding shoppable ads, selling training data to large AI players, and turning community signals into revenue while weighing privacy and trust concerns. Financial performance and user growth (notably a 30% YoY increase in weekly active users) are presented as validation for Reddit doubling down on AI as a core growth lever.

Feb 21, 2026 · Episode ID: 17884

20VC: Codex vs Claude Code vs Cursor: Who Wins, Who Loses | Will All Coding Be Automated - Do We Need PMs | The Real Bottleneck to AGI | The Three Phases of Agents and What You Need to Know with Alex Embiricos, Head of Codex at OpenAI

The episode explores how AI—especially OpenAI's Codex and agentic systems—will reshape software development, arguing that automation will create new kinds of builders rather than wholesale replacing engineers. The conversation identifies human prompting, validation, and the friction of human action as the primary bottlenecks to widespread agent adoption, not models or compute. It outlines three phases of agent adoption: coding agents, agents that use computers broadly, and productized workflows for non-technical users, while emphasizing the importance of latency, ergonomics, and sandboxing for real-world developer adoption. The discussion also covers enterprise stickiness—agents connected to systems of record, permissions, and secure integrations—and investment implications for companies that can avoid displacement by model providers.

Feb 21, 2026 · Episode ID: 17883

We Asked 3 Experts How to Get More Value out of OpenClaw | E2253

The episode explores how to get practical value from OpenClaw-style autonomous agents, focusing on cost, deployment choices, observability, and real-world interaction. Guests recommend dedicated local hardware (e.g., Mac Mini, Raspberry Pi) for easier onboarding and debugging, especially for non-developers, while warning about token costs when using cloud-based LLM calls. They introduce operational practices like telemetry-driven 'Part B' checks to replace slow human standups and discuss bringing agents into the physical world with voice/speaker hardware (OpenHome) to enable context, memory, and proactivity. The conversation also covers governance, data-access constraints, vendor lock-in, and ethical concerns around voice/personality cloning and autonomous agents with financial capabilities. Off-topic segments surface media recommendations and audio-hardware tips, rounding out a practical discussion for builders and startup founders.

Feb 21, 2026 · Episode ID: 17882

OpenAI's $100 Billion Funding + Sam Altman Refuses to Hold Dario's Hand...

The episode covers OpenAI's reported pursuit of an unprecedented roughly $100 billion funding round that would value the company near $850 billion, and the implications of that raise for strategic investors, partnerships, and future monetization. Hosts discuss OpenAI's experiments with tiered pricing and ads (including an $8 ad-supported tier and a $20 ad-free tier) and weigh the revenue benefits against risks to user trust and experience. The conversation highlights an escalating rivalry with Anthropic — visible in public jabs and an awkward India event moment — framing competition as both reputational and product-based. A substantial portion of the episode focuses on India as a major growth market (100M+ weekly users, youthful demographics, elevated coding/work usage) and OpenAI’s investments there in offices, compute, and localized pricing as part of a broader expansion and IPO-readiness strategy.

Feb 20, 2026 · Episode ID: 17881

Does Gemini 3.1 Pro Matter?

The episode evaluates Gemini 3.1 Pro not as an absolute supremacy claim but as a meaningful incremental step that boosts multimodal reasoning, coding, and cost-efficiency. It highlights Google's productization of multimodal features (e.g., Photoshoot, Replet animation) that show practical value for creators and enterprises. The conversation reframes what matters today: cost-per-task, use-case fit, and assembling a ‘model portfolio’ instead of chasing weekly frontier leadership. It also covers enterprise behavior — from Walmart’s Sparky adoption gains to firms like Amazon and Accenture tracking or tying promotions to AI use — and the human/organizational frictions that limit organic adoption.

Feb 20, 2026 · Episode ID: 17880

YouTube’s latest experiment brings its conversational AI tool to TVs

The episode covers recent AI-driven product experiments across major consumer platforms. Reddit is testing an AI-powered shopping search that surfaces community-recommended products in interactive carousels with pricing and direct buy links. YouTube is expanding its conversational AI assistant from mobile and web to smart TVs, consoles and streaming devices, enabling voice queries via the remote and suggested prompts while also rolling out other AI features like upscaling, comment summarization, and AI-created shorts. Google Chrome is adding productivity features (Splitview, PDF annotations, Save to Drive) while integrating Gemini to stay competitive with AI-first browsers. The discussion touches on strategic implications — from monetization and UX changes to concerns about AI pushing platforms toward commerce and deeper integration into passive media consumption.

Feb 20, 2026 · Episode ID: 17878

Patrick Collison on Stripe’s Early Choices, Smalltalk, and What Comes After Coding

Patrick Collison reflects on engineering and product choices from Stripe's early years, arguing that decisions about languages, datastores, and API design have multi-decade consequences for a company’s architecture and costs. He contrasts the productivity benefits of fully interactive development environments (Smalltalk/Lisp style runtimes) with the prevailing editor-plus-runtime model and says such environments dramatically speed debugging and iteration. Collison also emphasizes that investing more time in designing long-lived APIs and data models is extremely high leverage, since migrations are more like instruction-set changes than simple product launches. The episode examines whether current LLM/AI adoption has moved the macroeconomic productivity needle (so far, not clearly) and highlights a promising convergence in biology—sequencing, deep learning, and genome editing—that creates a powerful “read-think-write” experimental loop.

Feb 20, 2026 · Episode ID: 17877

Google Launches Gemini 3.1 and YouTube AI

The episode covers Google’s release of Gemini 3.1 Pro as an incremental but meaningful upgrade to its flagship large language model, highlighting performance and tool-integration improvements. It emphasizes the importance of independent, real-world leaderboards (like Apex Agents) over vendor-published benchmark claims for evaluating professional, knowledge-based capabilities. The conversation also details how Google is rolling Gemini into consumer surfaces—particularly YouTube and TV experiences—with features such as on-screen Q&A, comment summarization, and auto-enhance for low-resolution uploads. Finally, the hosts discuss Google’s broader AI product and release strategy, including incremental versioning, preview access dynamics, and competitive positioning against other model providers.

Feb 20, 2026 · Episode ID: 17876

Scaling AI Across Support and Sales: Fin Now Sells Itself

The episode explains how Intercom’s Fin — an AI customer agent that already automates roughly 81% of support interactions — is being extended from support into sales as Fin Sales Agent. Fin Sales Agent acts like a consultant: it opens sales conversations, qualifies prospects, profiles them into buckets (e.g., enterprise), and creates MQLs that feed the sales funnel. The guests describe a careful rollout strategy (A/B tests, closed betas, incremental integrations) and highlight product improvements planned like booking/calendar integration and direct Salesforce sync. The conversation emphasizes aligning sales and support around a unified customer experience while navigating differing KPIs and trust concerns about AI handling early-stage sales tasks.

Feb 20, 2026 · Episode ID: 17879

How People Actually Use AI Agents

The episode reviews a new Anthropic study and broader ecosystem news to show how AI agents are being used more cautiously in practice than capability demos suggest. Listeners learn that sessions are short (median turn roughly 45 seconds), humans heavily oversee agents, and autonomy increases with trust and interaction design as much as raw model improvements. The discussion covers platform policy friction—particularly Anthropic's OAuth/token wording and the OpenClaw community response—and how that shapes agent ecosystems. It also highlights agents' early diffusion from coding into back-office, marketing, sales, and finance, and notes platform feature updates from Gemini, Grok, and Meta that signal continued product innovation.

Feb 19, 2026 · Episode ID: 17875

Capital, Compute, and the Fight for AI Dominance

The episode explores how this AI investment cycle is distinct: extreme talent competition, massive capital flows, and compute economics are reshaping company strategy and fundraising. Guests argue there are effectively no unused GPUs—every dollar into compute drives immediate model work—so compute is a first-order line item for frontier model companies. That dynamic lets small, focused teams build and ship high-impact models quickly, fueling a fundraising flywheel that blurs traditional venture and growth boundaries. The conversation also raises systemic risks: model providers with outsized capital might vertically expand and compete with the app ecosystem, while public narratives often misstate board-level realities and create distracting noise for founders.

Feb 19, 2026 · Episode ID: 17874

Head of Claude Code: What happens after coding is solved | Boris Cherny

Boris Cherny describes how Claude Code (and related products like Claude and Cowork) evolved from a terminal hack into a high-impact developer agent that now authors a meaningful share of public commits and materially boosts engineering throughput. The conversation argues that routine coding is largely solved by current LLMs and that the next frontier is agentic behavior — models proposing what to build, triaging bugs, and automating non-coding work. Product lessons emphasize shipping early, leaning into latent demand, building where users already work (terminal/IDE/Slack), and exposing the model with minimal scaffolding rather than rigid workflows. Anthropic’s safety approach for agents combines mechanistic interpretability, controlled evals, and in-the-wild observation, while operational lessons include being generous with model access early to learn and optimize costs later.

Feb 19, 2026 · Episode ID: 17873

Voice AI’s Big Moment: Why Everything Is Changing Now (ft. Neil Zeghidour, Gradium AI)

The episode explains why voice AI is hitting an inflection point: improvements in models, data, and engineering are finally making talking to machines feel natural and convenient. Neil Zeghidour contrasts the dominant cascaded stack (ASR → text model → TTS) with emerging speech-native and full‑duplex approaches that preserve paralinguistic signals and remove turn-taking latency. Practical concerns dominate the conversation: high-quality curated data, efficient on-device models, selective use of large models, and careful script generation for recordings. The discussion also covers product strategy (building blocks vs verticals), privacy risks around voice cloning, and skepticism about audio watermarking as a provenance solution.

Feb 19, 2026 · Episode ID: 17871

SeatGeek and Spotify team up

The episode covers three main technology and business stories: Google’s launch of the Pixel 10a as a $499 entry-level phone with upgraded AI camera features and seven years of software support; Mastodon’s push to become more approachable and add creator-focused tools while grappling with measurement and onboarding issues in its federated model; and a new SeatGeek–Spotify integration that embeds SeatGeek-powered ticket links into artist pages and tour listings on Spotify. Hosts discuss the practical limits of the SeatGeek integration (it only applies where SeatGeek is the primary seller) and place Spotify’s ticketing efforts in context by noting it has helped generate over $1 billion in ticket sales through many partners. The conversation highlights product and UX trade-offs (e.g., Mastodon’s decentralization vs. discoverability) and competitive dynamics in ticketing dominated by incumbents like Ticketmaster. Overall, the episode links product launches, platform partnerships, and the user experience implications of decentralization and commerce integrations.

Feb 19, 2026 · Episode ID: 17872

Durable Execution and the Infrastructure Powering AI Agents

The episode explains how durable execution—implemented by Temporal—became a critical infrastructure layer for modern AI agents by providing exactly-once, recoverable state management so long-running workflows survive failures. Guests discuss Temporal’s origins at Uber (Cadence) and its production use powering OpenAI Codex, Snap story processing, Coinbase transactions, and other large workloads. A major theme is the shift from short interactive prompts to long-running, asynchronous agentic loops that require orchestration, retries, and durable state. The conversation also covers improved observability from model-driven execution traces and highlights a remaining gap: a standard durable RPC / asynchronous tool-invocation protocol (Project Nexus) to stitch swarms of specialized agents into reliable distributed systems.
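The core durable-execution idea described above — journal each step's result so a replay after a crash skips already-completed work instead of re-executing it — can be sketched in miniature. This is a toy illustration of the concept, not Temporal's actual API, and the journal would be persisted durably in a real system.

```python
class DurableWorkflow:
    def __init__(self):
        self.journal = {}  # step name -> recorded result (durable in reality)

    def step(self, name, fn):
        if name in self.journal:          # replay: skip completed work
            return self.journal[name]
        result = fn()                     # execute the activity
        self.journal[name] = result       # record before moving on
        return result

calls = []

def charge():
    # Side-effecting activity (e.g., billing a customer).
    calls.append("charge")
    return "charged"

wf = DurableWorkflow()
wf.step("charge", charge)
# Simulate a crash followed by a replay against the same journal:
wf.step("charge", charge)
print(calls)  # → ['charge']  (the side effect ran exactly once)
```

This journaling is what lets long-running, asynchronous agentic loops survive process failures with exactly-once semantics for their effects.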

Feb 19, 2026 · Episode ID: 17870

From SaaS to AI-First: How Companies Are Reshaping Innovation

The episode examines how AI is reshaping the traditional SaaS model rather than immediately replacing it, exploring short‑term and long‑term impacts on product development, sales, and company strategy. Hosts argue that while AI dramatically accelerates code and product creation, enterprises’ change‑management, security, and distribution complexities make wholesale replacement of established SaaS unlikely in the near term. They highlight new operational challenges from abundant AI‑generated code (quality, testing, maintainability) and rapidly shifting AI economics — for example, token inference costs collapsing by orders of magnitude. Finally, the conversation considers market concentration and platform forward‑integration, advising startups to build defensible control points (bundles, networks, hardware ties) to survive and scale amid fast revenue growth among AI incumbents.

Feb 19, 2026 · Episode ID: 17869

Money no longer matters to AI's top talent

The episode examines the fierce competition for AI researchers concentrated in a few Bay Area labs, where record compensation and intense poaching are reshaping the labor market. Guests argue that for many top researchers, mission, values, and product direction matter more than marginal pay increases, driving resignations and moves between companies. Independent, viral projects (e.g., OpenClaw) can rapidly expose product gaps at big labs and accelerate hiring and integration of outside talent. The conversation also explores tensions between speed and safety, how commercialization and impending IPOs will change incentives, and potential long-term effects on the engineering talent pipeline as coding work is increasingly automated.

Feb 19, 2026 · Episode ID: 17868

20VC: Anthropic Raises $30BN at $380BN Valuation | Thrive Raises New $10BN Fund | OpenAI Buys OpenClaw | Stripe Raises at $140BN: Is Adyen Wildly Undervalued? | Monday, Figma, Shopify: Which are Buys vs Sells?

The episode dissects recent market moves driven by AI, including Anthropic’s $30B raise and how capital is concentrating in a few AI leaders, while public SaaS stocks are being repriced as narratives shift toward AI-native propositions. Guests highlight that strong models alone don’t translate to enterprise impact because implementation, data quality, and legacy systems are the hard work. Rapid improvements in agentic / autonomous agents (including open-source projects like OpenClaw) are creating near-term commercial use cases and new safety and governance challenges. The conversation also compares private vs public valuation dynamics (Stripe vs Adyen), discusses missed product opportunities for incumbents (e.g., Figma), and touches on VC behavior and large growth funds (Thrive’s $10B).

Feb 19, 2026 · Episode ID: 17867

When Will Openclaw go Mainstream? | E2252

The episode digs into whether OpenClaw — an open, agentic framework — can move from hobbyist traction to mainstream consumer and enterprise use. Panelists argue that while OpenClaw has momentum, its current install-and-tinker UX, security concerns, and technical barriers prevent broad adoption. They explore what a killer consumer experience might look like (app or OS-level integration) and identify high-value use cases like notification triage and automating mundane tasks. The conversation also covers infrastructure tooling (Massive/ClawPod) that unlocks real workflows by scraping and unblocking content, and debates whether large companies or forks will subsume OpenClaw over time.

Feb 19, 2026 · Episode ID: 17866

#412 How Roger Federer Works

This episode distills lessons from Chris Clarey’s book about Roger Federer, focusing on the habits, team choices, and mindset that enabled his extraordinary longevity and consistency. It emphasizes mental discipline—treating each point as important in the moment and then letting it go—and how that perspective came from hard data (point-level stats) and practice. The podcast highlights that Federer's apparent effortlessness was the product of meticulous planning, routines, and decades of deliberate work, including a willingness to embrace performance psychology. It also stresses long-run optimization: deliberate rest, recovery, and selective scheduling guided by trusted advisors like fitness coach Pierre Paganini, and how a balanced off-court life reinforced on-court performance.

Feb 19, 2026 · Episode ID: 17865

Google Gemini Integrates AI Music Generation

This episode covers Google’s integration of DeepMind’s Lyria 3 music-generation model into Gemini and YouTube’s Dream Track, enabling users to generate 30-second music tracks with lyrics and auto-generated cover art. Lyria 3 improves realism and user control over elements like layering, tempo, style, and vocal characteristics, but is intentionally limited to short outputs. Google is implementing guardrails—including output filtering and a SynthID watermark—to reduce cloning of real artists and to label AI-generated content. The host contrasts Google’s conservative, integrated approach with specialized platforms (e.g., Suno, Udio) that currently offer more production-grade features and discusses industry implications for training data, legal risk, and artist compensation models.

Feb 18, 2026 · Episode ID: 17864

Sonnet 4.6 Changes the Agent Math

The episode examines major shifts in the AI landscape driven by new model releases and device bets. Anthropic's Claude Sonnet 4.6 debuts a 1,000,000-token context window and big improvements in 'computer use' and coding benchmarks at a substantially lower price, reshaping the economics for agent-heavy workflows like OpenClaw. Grok 4.2 launches into public beta with a multi-agent debate/teamwork architecture and a rapid weekly-improvement cadence, generating polarized public reaction. The conversation also covers Apple accelerating AI wearables (glasses, pendant, camera AirPods) to provide hands-free sensory context for Siri, and broader market moves including Meta's GPU commitments, Chinese price competition, and implications for enterprise AI adoption.

Feb 18, 2026 · Episode ID: 17863

Cognitive Synthesis and Neural Athletes

The episode explores how AI is reshaping organizations, leadership, and the nature of cognitive work, with Deloitte positioning itself as an "industrial architect" that helps clients move from strategy to productionalized AI. Guests emphasize that AI systems are probabilistic and learning, requiring organizations to unlearn deterministic if-then playbooks and rethink metrics, baselines, and operational processes. Leadership must lean into vulnerability and empathy, because human emotional intelligence uncovers system paradoxes and social dynamics that dashboards miss. The conversation also introduces the idea of "neural athletes" — people who perform rapid cognitive synthesis, switching between creative, evaluative, and empathetic modes — and argues for anti-fragile, multi-model AI architectures rather than single-interaction systems.

Feb 18, 2026 · Episode ID: 17861

Cognitive Synthesis and Neural Athletes

The episode examines how AI is changing leadership, team dynamics, and system design, arguing that deterministic playbooks must give way to probabilistic, learning-driven approaches. Deborah Golden introduces the concept of the "neural athlete" to describe people who must perform rapid cognitive synthesis while working with AI, and discusses the resulting rise in cognitive load and the need for new workflows and training. Vulnerability and empathy are framed as strategic leadership assets that create psychological safety and surface human friction that metrics miss. Technically, the conversation emphasizes multi-model orchestration and anti-fragile architectures over single-model solutions, along with everyday low-risk AI use to build organizational intuition about bias and model behavior.

Feb 18, 2026 · Episode ID: 17860

From Copilots to Agents: Rebuilding the Company Around AI

The episode examines how Kavak rebuilt its company around AI agents, moving from under-adopted copilot tools to autonomous agents that now handle roughly 90–95% of customer interactions. Carlos García Ottati explains why operating in Latin America required vertically integrating multiple businesses—e-commerce, reconditioning/warranty, financing, and logistics—beneath a single consumer-facing product to solve high fraud, scarce financing, and weak payment rails. The conversation covers the operational challenges of deploying AI at scale: building ontologies, data pipelines, and safety 'brakes,' and accepting a year of flat growth while restructuring. It also digs into founder-level lessons about re-entering operational roles during transitions and intentionally adopting new leadership personas to meet the company’s evolving needs.

Feb 18, 2026 · Episode ID: 17858

Josh Kushner - Concentration and Conviction - [Invest Like the Best, EP.459]

Josh Kushner discusses Thrive Capital’s intentionally small, company-like investment team and a concentrated, input-driven approach that prioritizes product and founder quality over fund optics. He explains why concentration amplifies upside but requires discipline and high conviction, recounting iconic bets like Instagram, Stripe, GitHub, and the firm’s work with OpenAI. The conversation covers Thrive’s current AI focus—AI-native labs and domain models, resilient infrastructure, and applying AI to transform existing holdings—and a new holdings/permanent-capital experiment to buy and internally modernize businesses. Interwoven are personal stories and philosophical influences (e.g., The Fountainhead) that illuminate Kushner’s views on individuality, conviction, and the small advantages that compound over time.

Feb 18, 2026 · Episode ID: 17857

SaaStr 842: The 90/10 Rule for AI Agents: What to Build vs Buy with SaaStr's CEO and CAIO

The episode centers on SaaStr's updated 90/10 rule for AI: buy 90% off-the-shelf and only build the 10% that delivers outsized, proprietary value. Hosts Amelia Lerutte (CAIO) and Jason Lemkin walk through real examples—an internal AI VP of Marketing and a sponsor/customer portal—showing how agent tooling like Claude Cowork made complex builds feasible and fast. They weigh the trade-offs of vibe-coding (rapid, low-code builds) versus buying SaaS, emphasizing hidden maintenance costs, security considerations, and the ongoing time sink for custom apps. The conversation stresses that AI is now table stakes: products without meaningful AI risk losing customers, while jaw-dropping AI experiences drive retention and growth.

Feb 18, 2026 · Episode ID: 17856

The AI Productivity Boom Finally Shows Up

The episode examines signs that AI is beginning to show measurable macroeconomic effects, driven by revised labor data that imply an unexpectedly large productivity uptick for 2025. It covers policy tensions highlighted by Anthropic's dispute with the Pentagon over permissible military uses of models, and competitive moves in the model race, notably Alibaba's Qwen 3.5 Plus with large-scale multimodal capabilities and aggressive pricing. The hosts discuss the unsettled but increasingly empirical debate over AI-driven job displacement, noting slower hiring in AI-exposed roles but emphasizing confounders and the need for better data. Broader industry developments — from Hollywood's AI concerns to Apple’s teased hardware event — are used to illustrate the transition from experimentation to structural utility for AI technologies.

Feb 17, 2026 · Episode ID: 17855

Evals, Feedback Loops, and the Engineering That Makes AI Work

The episode focuses on where engineering effort matters most in AI products versus where brute-force compute and data dominate. Martin Casado and Ankur Goyal argue that production engineering—evals, feedback loops, and integration quality—often matters more to product success than using the newest or largest foundation model. They discuss how open-source and Chinese models drive very high token volumes but low dollar-weighted spend because of delivery, reliability, and integration gaps. The conversation also contrasts approaches for agent interfaces, showing structured, typed access (e.g., SQL) outperforms unconstrained 'computer' access (bash/Unix) in many production tasks. Finally, they frame evals as the scientific method applied to non-deterministic software and suggest a shift to engineering wins once brute-force gains taper or funding normalizes.

Feb 17, 2026 · Episode ID: 17853

WSJ x a16z: The Next 25 Years of Defense Innovation

The episode examines a16z's American Dynamism practice and how the investment landscape in Silicon Valley shifted toward national-security–aligned companies after Russia's 2022 invasion of Ukraine. Katherine Boyle and Andy Serwer discuss how battlefield realities—especially resilient communications like Starlink—reoriented investor and industry thinking about defense-focused startups. The conversation highlights a move away from expensive, bespoke 'exquisite' platforms toward affordable, mass-producible 'small systems' such as drones, autonomous surface vessels, and space infrastructure. Talent migration from firms like SpaceX and Palantir, supply-chain onshoring, and bipartisan political support for rebuilding the defense industrial base are presented as key enablers and challenges for this new wave of companies.

Feb 17, 2026 · Episode ID: 17851

Will OpenAI Tank OpenClaw? | E2251

The episode centers on OpenAI hiring Peter Steinberger, creator of OpenClaw, and the implications for the open-source personal AI/agent ecosystem. Hosts and guests debate whether this move effectively amounts to an acquisition, and whether it will commoditize interfaces, lock in user data, or help scale the project. They highlight how OpenClaw enables rapid, non-technical product creation via agent-powered "skills," demonstrate real use cases (personal CRM, family media aggregator), and describe practical tactics to reduce hallucinations and token costs. The conversation also covers community actions to preserve openness (funding, decentralized hosting, security), and touches on ethical and platform issues raised by AI cloning of public figures.

Feb 17, 2026 · Episode ID: 17850

OpenClaw Could Be 1st 1-Person $1B Company, OpenAI Buys

The episode analyzes OpenClaw — a viral, single‑founder AI agent/orchestration project — and whether it could become the first one‑person $1B company. Hosts describe OpenClaw's multi‑agent workflows, open‑source roots, and integrations (originally running on Claude) that let agents perform end‑to‑end tasks on users' behalf. The conversation covers legal and competitive reactions, including a reported cease‑and‑desist and rumors that OpenAI hired the founder or is acquiring the tech. Broad implications are debated: agent-powered products could massively amplify solo‑founder leverage, but giving agents deep control raises safety, trust, and consolidation concerns.

Feb 17, 2026 · Episode ID: 17849

OpenClaw Goes to OpenAI

The episode explores OpenClaw’s rapid rise from a weekend experiment to a major open-source focal point and the implications of its founder joining OpenAI to work on personal agents. Hosts discuss a recent industry shift toward agentic systems, multi-agent orchestration, and the ways task-specialized models (especially coding models) and execution frameworks are becoming differentiators. The headlines cover speed-optimized coding models like GPT-5.3 Codex Spark and claims of ~1,000 tokens/sec throughput, hardware diversity (non-NVIDIA and wafer-scale chips), and vendor moves from Google/DeepMind and Anthropic. Throughout, tensions emerge between openness and consolidation, speed versus capability trade-offs, and the business implications of large funding and productization decisions.

Feb 16, 2026 · Episode ID: 17848

Novartis CEO Vasant Narasimhan on Transforming a 250-Year-Old Company

In this episode Vasant Narasimhan discusses Novartis’s strategic transformation from a diversified conglomerate into a focused, pure‑play medicines company that unlocked roughly $180 billion in shareholder value. He outlines the company’s R&D concentration on three platform technologies—cell & gene therapies, RNA (including siRNA) medicines, and radioligand therapies—applied across oncology, immunology, neuroscience, and cardiorenal. Narasimhan assesses the current state of those modalities (cell therapies maturing with manufacturing gains; RNA medicines de‑risking toward infrequent dosing) and positions AI as an enabling but not instantaneous solution for discovery. He also gives practical advice to entrepreneurs and investors: be crystal clear about where a drug fits in the treatment paradigm, perform the decisive “killer experiments,” and don’t underinvest in CMC/manufacturing work. The conversation touches on competitive shifts (notably China’s biotech rise), regulatory timing, and the long timelines required for drug approvals even with AI assistance.

Feb 16, 2026 · Episode ID: 17847

Let's talk about Ring, lost dogs, and the surveillance state

The episode examines the controversy around Ring’s Super Bowl “Search Party” ad and broader concerns about home camera companies expanding into neighborhood-level surveillance. Hosts discuss Ring’s canceled integration with Flock Safety after public backlash and questions about law-enforcement and federal agency access to private video. A central thread is Ring’s push to use AI — camera-based search, anomaly detection, and what the company calls a “co-pilot” — to reduce crime, and the attendant trade-offs between promised safety gains and privacy/civil-rights risks. The conversation highlights how connecting camera feeds, facial-recognition systems, and other databases amplifies the potential for mass surveillance and wrongful targeting.

Feb 16, 2026 · Episode ID: 17846

#235 - Opus 4.6, GPT-5.3-codex, Seedance 2.0, GLM-5

The episode surveys major AI product and research news, emphasizing a wave of capability and infrastructure advances across large models, generative media, and open-weight competitors. Hosts highlight Anthropic’s Opus 4.6 with a 1M-token context window and agent-team features, OpenAI’s GPT-5.3 Codex plus a low-latency Codex Spark on Cerebras, and Google’s Gemini 3 Deep Think, which posts large benchmark gains amid sparse safety documentation. Significant progress in generative media is covered — ByteDance’s Seedance 2.0, Seedream 5.0, Alibaba’s Qwen Image 2.0, and xAI’s Grok Imagine API push text/image-to-video realism and multi-input prompting. The episode also discusses ecosystem dynamics: open and hybrid releases (GLM-5, Qwen3 Coder Next, DeepSeek), adapter efficiency (Tiny LoRA), reinforcement-style world-model learning for agents, and the security and evaluation challenges that accompany rapid rollout.

Feb 16, 2026 · Episode ID: 17845

20VC: SaaS is Dead: Why Systems of Record Will Die in an Agentic World | What Revenue Multiple Will Software Companies Trade At? | From 7,000 to 3,000: We Need Less People Than Ever with Sebastian Siemiatkowski

The episode centers on how AI — especially agentic systems and large language models — is collapsing the marginal cost of software creation and eroding traditional SaaS moats by lowering data switching costs. Sebastian explains why companies must build deep, contextual AI (Klarna’s in-house customer service that replaced ~600 agents) rather than rely on off-the-shelf solutions, and how that drove productivity gains and a headcount reduction from ~7,000 to ~3,000. The conversation covers implications for valuations and revenue multiples in software, the strategic choices around BNPL versus revolving credit, and competitive dynamics with fintech challengers like Revolut and Nubank. The episode closes with anecdotes on fundraising, including winning over Sequoia and Michael Moritz.

Feb 16, 2026 · Episode ID: 17844

Prompt Management, Tracing, and Evals: The New Table Stakes for GenAI Ops

The episode outlines the operational foundations required to run reliable, cost-effective LLM-powered applications, focusing on observability, prompt management, and evaluation workflows. Aman Agarwal presents OpenLit's OTEL-first approach to convert opaque model interactions into stepwise traces, enabling debugging across models, tools, and data stores. He emphasizes common blind spots—runaway token costs, brittle prompt/secret handling, and lack of reproducible experiments—and shows how vendor-neutral standards and centralized collector management (OPAMP) reduce lock-in. The conversation also covers experimentation patterns (multi-model comparisons, routing), closing the loop from evals to prompt/dataset improvements, and trade-offs where OpenLit may not fit (proprietary stacks or hosted SaaS requirements).

Feb 15, 2026 · Episode ID: 17843

Full Tutorial: Use AI Agents for Coding AND Product Management | Eno Reyes (Factory)

The episode is a deep dive into Factory's AI coding agent, Droid, emphasizing an enterprise-first approach with controls, ROI analytics, and multi-surface integrations. Eno Reyes demos building an app from meeting notes and contrasts 'spec mode' (what to build) with 'plan mode' (how to build it), showing how agents fit into real engineering workflows. The conversation covers model-agnostic strategies (mixing planners like Opus with executors like GPT-5.2), rigorous self-validation (linters, tests, screenshots) to raise output quality, and practical choices about skills, MCPs, and hooks. It also explores organizational implications: hiring 'product engineers' over traditional PMs, autonomy vs manual approval trade-offs, and how a small focused team competes with larger players in the AI coding space.

Feb 15, 2026 · Episode ID: 17842

Sequoia CEO coach: Why it’s never been easier to start a company, and never been harder to scale one | Brian Halligan (co-founder, HubSpot)

Brian Halligan argues that while it has never been easier to start a company thanks to low friction (tools, capital, talent), scaling a durable, high-impact organization has become harder and requires a different skill set. The CEO role now demands faster, higher-quality decision-making because increased optionality and rapid iteration create a cognitive tax on choices. Halligan emphasizes hiring and team design as the central scaling levers—promoting homegrown talent, favoring "spikier" candidates, using interactive interviews and blind references, and adopting a hire-slow/fire-fast mentality. He shares Sequoia’s LOCKS framework for evaluating founders (Lovable, Obsession, Chip, Knowledgeable, Student), practical coaching habits, and his view that AI will augment but not immediately replace core enterprise sales motions.

Feb 15, 2026 · Episode ID: 17841

Something Big Is Happening

The episode centers on the viral Matt Schumer post (≈80M views) arguing that AI has already transformed work inside tech and is poised to spread more broadly. It reviews evidence for a rapid acceleration in 2025 — shorter model release cycles, better models for coding (GPT-5.3 Codex, Opus 4.5), and agentic stacks that can build end-to-end products. The host parses the backlash — technical critiques, accusations of overclaiming or AI authorship, and the distinction between top-tier paid models and consumer experiences — and highlights which critiques are useful. Framing the debate, the episode contrasts “tool-shaped objects” (highly polished outputs that may be consumption-driven) with genuinely productive automation, and concludes with practical advice: be early, use high-quality models, and build adaptability as your durable advantage.

Feb 15, 2026 · Episode ID: 17840

20Sales: Inside ElevenLabs $330M ARR Sales Machine | The 20x Sales Comp Plan Reps Must Hit | How to Land and Expand in a World of AI | Why Product-Market-Fit is BS, Reps Should Not Be in the Office and Outbound is King with Carles Reina

This episode profiles Carles Reina’s playbook scaling ElevenLabs’ revenue org from zero to over $330M ARR in three years, emphasizing aggressive quotas, fast customer-facing selling, and a land‑and‑expand GTM. Carles argues for ruthless accountability (quota = 20x base), immediate impact from new hires (first contract within two weeks), and heavy emphasis on outbound and field selling rather than office-bound or purely PLG approaches. The conversation covers compensation design, pipeline discipline (public pipeline reviews, conservative forecasting), and practical onboarding/training to create repeatable expansion motion. The hosts also discuss AI-native sales tooling (ROX, Monaco) that automates TAM building, outreach sequencing, meeting capture, and follow-ups, illustrating how tech augments seller productivity.

Feb 14, 2026 · Episode ID: 16875

Amazon’s Ring canceled their partnership with Flock

The episode covers three linked news arcs: Amazon-owned Ring canceled a planned integration with Flock Safety after public scrutiny of Flock's law-enforcement connections and privacy concerns; OpenAI is retiring several legacy chat models (including GPT-4o and GPT-5 mini variants), a move framed as low-usage cleanup but one affecting a non-trivial absolute number of users; and Anthropic’s Super Bowl ads plus a new Opus 4.6 model coincided with a measurable spike in Claude app downloads and App Store ranking. Discussion of the Ring–Flock cancellation highlights prior Ring security and FTC issues and the broader debate over when consumer-facing AI camera features cross into mass surveillance. The OpenAI segment frames model deprecation as product lifecycle and risk management, while noting user pushback about losing access. The Anthropic item illustrates how high-profile marketing combined with product updates can deliver rapid user-acquisition lifts for consumer AI chatbots.

Feb 14, 2026 · Episode ID: 16874

OpenClaw is Our Friend Now | E2250

This episode explores the emergent world of persistent AI agents built on OpenClaw through demos of three projects: AntFarm (multi-agent orchestration), Clawra (an intimate AI companion), and RentAHuman (agents hiring humans for IRL tasks paid in stablecoins). Guests and hosts discuss why OpenClaw agents feel “alive” — persistence, single gateway control, and multi-channel state — and contrast that with session-based LLMs. The conversation covers early productivity metrics (≈10% chores offloaded in two weeks with optimistic projections to 50–60%), agent architectures (Ralph Wiggum loops, replicants), verification patterns, and security tradeoffs (sandboxing vs deeper integrations). Ethical and social implications are woven throughout: framing companions as non-sexual real friends, concerns about removing humans from loops, monetization of attachment, and marketplace governance for hybrid human/agent workflows.

Feb 14, 2026 · Episode ID: 16873

The Time Savings Era of AI Is Over

The episode reviews results from the AIDB January AI Usage Pulse survey, arguing that AI value is shifting from simple time savings toward increased output and entirely new capabilities. Heavy users are adopting agentic workflows and multi-model portfolios, with Claude emerging as the most common primary model for builder- and agent-oriented use cases. Vibe coding and low-code/no-code creation have spread beyond engineering, enabling executives, operators, and product teams to build their own AI-driven tools. The conversation highlights vendor case studies (e.g., Blitzy) and new product offerings (e.g., Superintelligent's AI Strategy Compass) as evidence that tooling maturity is accelerating enterprise transformation.

Feb 13, 2026 · Episode ID: 16872

Dario Amodei — "We are near the end of the exponential"

Dario Amodei argues that recent AI progress follows a “big blob of compute” scaling regime and that we are nearing the end of the exponential run-up in capability, with substantial capability milestones plausibly arriving within a few years. He outlines the factors that drive long-run progress — compute, data quantity and quality, training time, scalable objectives, and numerical conditioning — and notes that reinforcement learning shows scaling and generalization patterns analogous to pretraining. The conversation distinguishes raw capability growth from economic diffusion, emphasizing that procurement, integration, regulation, and security slow deployment even when models rapidly improve. Anthropic’s commercial strategy and financial caution (balancing aggressive compute investment against bankruptcy risk) are discussed alongside projections for rapid revenue growth and the importance of forecasting demand to preserve profitability.

Feb 13, 2026 · Episode ID: 17889

AI incidents, audits, and the limits of benchmarks

The episode examines the gap between research benchmarks and real-world AI safety, drawing on Sean McGregor’s work with the AI Incident Database and the AI Verification & Evaluation Research Institute. It emphasizes that practical AI is defined by systems that produce real-world consequences, and that benchmarks and lab tests often fail to predict brittle failures in deployed systems. The conversation covers sourcing and classifying incidents, challenges of voluntary reporting versus potential mandatory reporting, and the scale trade-offs of indexing many small harms versus focusing on high-impact events. The hosts also discuss the role of third-party audits, lessons from red-teaming (e.g., DEF CON exercises), and the need for new evaluation approaches for general-purpose models and composed systems.

Feb 13, 2026 · Episode ID: 16871

AI incidents, audits, and the limits of benchmarks

The episode explores how AI is transitioning from research to consequential real-world deployment, focusing on incident reporting, auditing, and the limits of benchmarks. Sean McGregor describes the AI Incident Database—its scale, harm-based definition of incidents, and sourcing challenges—and argues that collected incidents create learnable datasets akin to aviation or medical adverse-event reporting. The guests examine how general-purpose LLMs (e.g., GPT-like models) break traditional safety assumptions, making exhaustive verification infeasible and increasing the need for domain-specific pilots, red-teaming, and meta-evaluation of benchmarks. They also discuss practical governance questions: voluntary versus mandatory reporting, the utility and limits of benchmarks and leaderboards, and the growing role of third-party audits to validate vendor claims.

Feb 13, 2026 · Episode ID: 16870

‘Something Big Is Happening’ + A.I. Rocks the Romance Novel Industry + One Good Thing

The episode examines a perceived inflection point in public sentiment and market awareness about AI, arguing that recent agentic models and plugin tooling are lowering the bar for non-experts to automate complex workflows. Hosts connect those technical advances to real economic signals — notably sharp SaaS sell-offs — and argue that agentic AI can shift business models from seat- or license-based pricing to outcome- or usage-based arrangements. They highlight high-probability disruption in document- and hour-billed industries (especially legal and compliance) and discuss claims that models are accelerating their own development cycles, compressing product timelines. The conversation also explores cultural and legal fallout in publishing, using the romance-novel industry as a case study for mass content generation, disclosure debates, and copyright concerns, and closes with smaller segments on Spotify’s prompted playlists and Google’s Perch 2.0 bioacoustics model.

Feb 13, 2026 · Episode ID: 17839

Balaji and Dan Wang: The Engineering State vs Lawyerly State

Balaji Srinivasan and Dan Wang contrast an "engineering state" (China) with a "lawyerly state" (U.S.), arguing that China’s state-directed industrial surge has produced world-leading manufacturing in sectors like EVs, solar, ships, and robotics while the U.S. remains dominant in software and finance. They discuss political and social cracks in China — protests, property troubles, youth unemployment, and cadre incentives revealed by promotions like Li Qiang’s — which complicate the narrative of unbroken industrial ascent. The conversation interrogates whether software valuations and financial engineering can substitute for an industrial base in sustaining great-power status, and raises the strategic role of digital borders (e.g., the Great Firewall) in protecting sovereignty. Controversial points include claims that China may be "messing up less" than the U.S., and normative forecasts about Bitcoin and monetary alternatives, with broader implications for builders, entrepreneurs, and geopolitics.

Feb 13, 2026 · Episode ID: 16868

Why J-Cal Invested $200K in a Former Employee | E2249

This episode features two founder pitches and deep dives: Presh Dineshkumar presents Tempo from The Wellness Company (backed by a $200K investment from Jason Calacanis) and Peter Cetale introduces Sourcerer, an AI-driven sourcing platform. The conversation with Presh focuses on aggregating wearable and lab data into a composite HealthSpan score and using AI-templated protocols, groups, and IRL experiences to drive behavior change and retention. Jason emphasizes product velocity, world-class design, and building community features (families/cohorts and real-world meetups) as key levers for stickiness and monetization beyond subscriptions. The Sourcerer segment covers AI agents for supplier outreach, demand aggregation to cut COGS, blind escrow to prevent circumvention, and the broader impact of AI on sourcing workflows and engineering hiring.

Feb 12, 2026 · Episode ID: 16867

How I Built My 10-Agent OpenClaw Team

The episode walks through the host’s experience building and running a 10-agent digital employee stack using OpenClaw, describing the architecture, file conventions, scheduling (heartbeats), and real-world value and limitations. Nathaniel emphasizes pragmatic choices — running agents locally on a modest Mac Mini, using Agents.md and Memory.md to codify behavior and long-term context, and managing agents via chat apps for mobile control. He advocates using an interactive AI build partner (e.g., Claude/Claude Code) over passive tutorials to speed non-technical onboarding and incremental troubleshooting. The conversation covers tradeoffs around system access and security, which agents deliver the most ROI, ecosystem/network effects of OpenClaw, and practical expectations for initial negative ROI and iterative improvements.

Feb 12, 2026 · Episode ID: 16866

The surprising case for AI judges

The episode examines the development and implications of the AI Arbitrator, an AI-assisted arbitration platform built by the American Arbitration Association for narrowly scoped, documents-only construction disputes. Bridget McCormack and the host discuss why AAA chose a limited, human-in-the-loop design—grounding agents in domain-specific handbooks and historical case libraries—to reduce risk from hallucinations and credibility assessment errors. The conversation weighs potential benefits, notably expanding access to justice by lowering cost and friction for routine disputes, against concerns about transparency, fairness, and who controls the system. The episode also explores broader institutional questions about trust, the limits of automation (e.g., not suitable for criminal or government actions), and the safeguards needed to audit and de-bias AI-driven decisions.

Feb 12, 2026 · Episode ID: 16933

“Engineers are becoming sorcerers” | The future of software development with OpenAI’s Sherwin Wu

Sherwin Wu and Lenny discuss how AI — especially Codex, Cursor, and agents — is transforming software engineering from writing code line-by-line to orchestrating fleets of AI agents that execute intent. OpenAI dogfoods these tools heavily (≈95% daily Codex usage; 100% of PRs reviewed by Codex), producing measurable productivity gains (code reviews cut from ~10–15 minutes to 2–3 minutes; heavy users open ~70% more PRs). They warn builders to design for where models are headed, not where they are today, because evolving capabilities will subsume brittle scaffolding and custom glue code. The conversation covers organizational impacts (widening productivity gaps, changing manager roles), operational risks (agents failing, tribal knowledge capture), and product/market implications (one-person startups, business process automation, and platform strategy). Practical guidance includes experimenting now, investing in documentation and guardrails, and favoring API-driven, flexible interfaces and evals for deployment safety.

Feb 12, 2026 · Episode ID: 16864

Mistral AI vs. Silicon Valley: The Rise of Sovereign AI

The episode features Timothée Lacroix of Mistral AI discussing the company's evolution from an open-source research lab into a full-stack sovereign AI provider that builds models, platform tooling, deployment stacks and its own large-scale compute (Mistral Compute). Lacroix explains the rationale for owning infrastructure—stability, scale, and data sovereignty—and how that positions Mistral against hyperscalers while enabling European/sovereign deployments. The conversation emphasizes enterprise realities: POCs often fail without tooling, governance and Forward Deployed Engineers (FDEs) to productionize workflows, and that control (ownership of stack and data) is a primary enterprise requirement. He takes a contrarian stance on agents, reframing them as building blocks in observable, versioned workflows where trust, governance and observability matter more than autonomy, and dives into technical trade-offs (Mistral 3 architecture, dense vs MoE, synthetic data, post-training pipelines).

Feb 12, 2026 · Episode ID: 16862

Anish Acharya: Is SaaS Dead in a World of AI?

Anish Acharya argues that the headline "SaaS is dead" and the claim that AI will "vibe-code everything" are overstated — AI is transformative but software is being oversold and many core enterprise systems are poor targets for wholesale recoding. He explains how coding agents and orchestration reduce switching costs, eroding some incumbent lock-in and enabling startups to compete more effectively. Value is likely to concentrate in an apps/aggregation layer that composes specialized foundation models rather than a single foundation model capturing all downstream value. The episode covers practical limits of agents, revenue durability risks from rapid feature cannibalization and open models, product strategy trade-offs (boring vs weird), and implications for founders and investors in the new AI-native product cycle.

Feb 12, 2026 · Episode ID: 16861

Rivian’s Roadmap to AI Architecture and Autonomy with Founder and CEO RJ Scaringe

RJ Scaringe outlines Rivian's strategic reset from rules-based autonomy to an end-to-end neural-net architecture and a vertically integrated data stack. The company rebuilt its perception, compute, and data pipelines (Gen2) to enable large-scale model training, onboard inference, and a continuous training loop fed by its growing fleet. Rivian is designing its own inference chip to reduce the per-vehicle cost of real-time neural-net driving and is shifting vehicle electronics to a software-defined, zonal architecture to enable fast OTA feature development. The conversation also covers product strategy (including the upcoming R2) and a broader vision of cars as software platforms that deliver ongoing feature improvements and differentiated customer experiences.

Feb 12, 2026 · Episode ID: 16860

20VC: Anthropic's Superbowl Ad: Who Won - Who Lost | Harvey Raises $200M at $11BN Valuation | Sierra Hits $150M in ARR: Is Customer Support Too Crowded

The episode debates the size and nature of the AI opportunity, centered on Anthropic's projection of ~$149B ARR by 2029 and how that stacks against OpenAI and the broader software market. Guests unpack revenue-stacking (cloud, chips, ISVs, consultancies), multi-model deployment strategies, and whether AI spending is additive (time expansion) or zero-sum with existing software budgets. They discuss practical go-to-market implications for founders and operators — the need to deliver clear product ROI, simplify agent deployments, and avoid solutions that only experts can operate. The show also covers category-specific opportunities and risks (legal tech, customer support), recent fundraising events (Harvey, Sierra), the marketing theatre around Super Bowl ads, and leadership trade-offs for CEOs in an AI-driven era.

Feb 12, 2026 · Episode ID: 16859

#491 – OpenClaw: The Viral AI Agent that Broke the Internet – Peter Steinberger

The episode traces the origin, rapid virality, technical design, and societal implications of OpenClaw — an open-source, agentic AI assistant that runs locally and interfaces with messaging clients and multiple LLM backends. Peter Steinberger recounts building a prototype quickly, the project's explosive GitHub growth, naming/operational crises during the launch, and how community contributions shaped features and personality. The conversation digs into agent architecture (agentic loops, skills/plugins, CLI-first integration), model choices (Codex, Claude Opus, GPT variants), and practical developer workflows for debugging and orchestration. Throughout, the hosts balance excitement about agent-driven productivity and new UX paradigms with sober discussion of security risks (sandboxing, prompt injection), platform friction, and the trade-offs of open-source virality.

Feb 12, 2026 · Episode ID: 16858
