The MAD Podcast with Matt Turck

Jeremy Howard on Building 5,000 AI Products with 14 People (Answer AI Deep-Dive)

May 15, 2025

Summary

The podcast episode features Jeremy Howard discussing Answer AI's ambitious goal of building thousands of AI products with a small, nimble team of around 14 people. Central to their approach is a dialogue engineering system that integrates tools like Cursor, Claude Code, ChatGPT, and Jupyter Notebooks into a collaborative human-AI platform whose productivity far exceeds that of its individual components. Jeremy emphasizes the efficiency born of streamlined workflows, innovative frameworks such as FastHTML for Python-based web app development, and an organizational model without conventional hierarchy that relies on AI and automation as foundational substrates.

Answer AI prefers open-source models like DeepSeek and Qwen for their flexibility and cost benefits, noting an ongoing geopolitical shift favoring Chinese open-source initiatives over their more closed U.S. counterparts. Despite public hype around moments like DeepSeek's viral breakthrough, Jeremy urges skepticism, framing such events as shifts in perception rather than fundamental leaps in AI capability. He also highlights the untapped potential of test-time (inference) compute optimization as an important battleground for real-world AI efficiency gains.

Jeremy is cautious about near-term AGI or ASI, attributing many perceptions of AI advancement to improved natural language interfaces rather than substantive changes in intelligence. He critiques autonomous AI agents like Devin for their unpredictability compared with the more effective collaborative dialogue approach. The episode also recounts Jeremy's unconventional journey from philosophy and self-teaching to Kaggle champion and AI educator with Fast.ai, underscoring the democratization of AI learning. Solve It, Answer AI's platform for iterative problem-solving and training, exemplifies their mission-driven approach and has helped users achieve meaningful life changes. The team also embeds AI tools in platforms like Discord to enhance communication and support rapid product cycles. Throughout, a strong theme advocates for small, mission-focused teams leveraging open source, agile methodologies, and innovative tooling to maximize societal benefit from AI, challenging the prevailing idea that large teams and proprietary models are prerequisites for AI success.

Key Takeaways

  1. Answer AI’s pioneering dialogue engineering system fuses multiple AI and development tools (Cursor, Claude Code, ChatGPT, and Jupyter Notebooks) into a synergistic platform that fosters interactive co-creation between humans and AI. This integration enables their small team of 12 to 14 to generate 5,000 to 10,000 commercially viable AI products with remarkable speed and efficiency.
  2. Answer AI challenges conventional scaling paradigms by building thousands of AI products with only a dozen or so people. This feat is enabled by a rigorous focus on operational efficiency, foundational technologies like FastHTML and MonsterUI, and a lean startup mentality emphasizing rapid problem-solving and practical increments.
  3. Solve It exemplifies an AI training and problem-solving platform built on dialogue engineering principles, facilitating iterative learning alongside AI assistance. Hundreds of users have reported life-changing results, from landing jobs to launching companies, underscoring the essential role of embedded training and skill development in AI product adoption.
  4. ShellSage is a minimal yet powerful terminal-integrated AI assistant that uses tmux to maintain persistent session context and command history, enabling expert-level debugging and problem-solving directly within the terminal environment.
  5. FastHTML, combined with HTMX and FT components, provides a novel web development framework that enables developers to craft rich, server-rendered web applications entirely in Python, circumventing traditional, complex front-end stacks and facilitating rapid iteration.
  6. Answer AI’s organizational philosophy deliberately eschews traditional hierarchical structures, embracing a flat, role-less composition in which AI, automation, and connective workflows constitute the operational substrate underpinning rapid, adaptive problem-solving.
  7. Answer AI leverages Discord as a core collaborative platform, augmented by embedded AI tools with access to comprehensive communication history and external knowledge sources like GitHub, creating an integrated environment for real-time AI-assisted team coordination and development.
  8. Answer AI’s operational reliance on open-source models like DeepSeek and Qwen reflects a strategic commitment to customization, cost efficiency, and transparency, diverging from dependency on proprietary closed-source systems that often restrict tuning and flexibility.
  9. There is a significant geopolitical shift in AI innovation, with Chinese open-source AI initiatives currently outpacing U.S. commercial counterparts thanks to a more collaborative, pro-social development culture that accelerates progress through community engagement.
  10. Despite widespread excitement, Jeremy Howard urges caution about the imminence of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI), attributing much of the perceived breakthrough to improved natural language interfaces that alter human perception without corresponding leaps in underlying AI capability.
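
The tmux-based context capture behind takeaway 4 can be sketched in a few lines. This is a hedged illustration, not ShellSage's actual implementation: `capture_pane` shells out to tmux's real `capture-pane` command, while `build_prompt` is a hypothetical helper that packs the recent terminal history into an LLM prompt.

```python
import subprocess

def capture_pane(lines=100):
    """Grab the last `lines` of the current tmux pane, the way a
    terminal assistant can gain persistent session context.
    (Only works when run inside a tmux session.)"""
    out = subprocess.run(
        ["tmux", "capture-pane", "-p", "-S", f"-{lines}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

def build_prompt(history, question):
    """Hypothetical helper: wrap terminal history and the user's
    question into a single prompt string for an LLM."""
    return (
        "You are a terminal assistant. Recent session output:\n"
        f"{history}\n"
        f"Question: {question}"
    )

# Example with hard-coded history, so it runs outside tmux too:
prompt = build_prompt("$ make\nerror: missing header foo.h",
                      "How do I fix this?")
```

Because the pane capture includes both commands and their output, the assistant sees the same error messages the user sees, which is what enables in-terminal debugging without copy-pasting.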
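
The core idea of takeaway 5, HTML expressed as plain Python function calls rather than a separate template language, can be shown with a stdlib-only sketch. This mimics the shape of FastHTML's FT components but is not its API; `tag` and the `Div`/`P` helpers here are hypothetical stand-ins.

```python
def tag(name, *children, **attrs):
    """Render an HTML element from Python values: positional args
    become children, keyword args become attributes. A trailing
    underscore (class_) avoids clashing with Python keywords."""
    attr_str = "".join(f' {k.rstrip("_")}="{v}"' for k, v in attrs.items())
    body = "".join(str(c) for c in children)
    return f"<{name}{attr_str}>{body}</{name}>"

# Hypothetical per-element helpers, in the style of FastHTML's Div, P, ...
def Div(*children, **attrs): return tag("div", *children, **attrs)
def P(*children, **attrs):   return tag("p", *children, **attrs)

# A "page" is just nested function calls; a route handler would return this.
html = Div(P("Hello"), P("from Python"), class_="card")
# → '<div class="card"><p>Hello</p><p>from Python</p></div>'
```

In FastHTML itself, HTMX attributes are passed the same way as ordinary keyword arguments, so the server can drive partial-page updates without a hand-written JavaScript front end.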

Notable Quotes

"We've basically built a system to allow you and the AI to together construct a dialogue. Imagine bringing together like Cursor and Claude Code and ChatGPT and Jupyter Notebooks smushed together. It's kind of got all of that functionality, but then when you do that you end up with something way more than the sum of its parts. If we're going to have 12 to 14 people create 5,000 to 10,000 extremely commercially successful products…"

"Two of the three models that we rely on in our day-to-day production stuff are open source, being Qwen and DeepSeek. I guess we've decided we like them very much. They give us a level of flexibility to tune things to be exactly the way we want, which we can't get from commercial models."

"The AI landscape is witnessing a notable shift, with all the best open-source models coming out of China now. They seem to have a much more pro-social approach to this. Whereas, oddly, the U.S. companies that used to lead the way have all drawn up the drawbridges. And that's going to cause China to keep moving faster, because when you're in that more both collaborative and competitive environment, you just go way ahead, as we've seen in every other area of open source software over the last 30 years."

"The whole test time compute approach and all the things, it seems from your perspective as, you know, deep experts, like a very promising avenue. Yeah, it's weird that people took so long to care about that. It's another of these things that lesser-known papers have been kind of indicating for a long time: you know, adding a few tokens to give some breathing room or thinking room or whatever is important."

"I don't think we have any more evidence that ASI might be close now than we did 15 years ago. At any of those times, you could say like, oh, ASI might be close. I think a lot of people have the impression it might suddenly be really close because when you change the user interface to a computer from an interface designed to be computer friendly to an interface designed to be human friendly, i.e. natural language, our brains think we're dealing with a different kind of thing. But all that's changed is the user interface to that thing, you know, it's doing the same things it was always doing before."

"I think something that we've heard from lots of folks, like most notably, I guess, Yann LeCun, is that the autoregressive approach to inference seems dumb. We're starting to see some models with a diffusion feel appearing. And obviously, Yann LeCun's got his JEPA-based approaches. But that side still is more complicated. You know, the test time compute side always seemed pretty straightforward. If in 10 years' time nobody's got JEPA or diffusion or whatever to work in NLP, I wouldn't be like, oh, that must be they didn't try hard enough. Maybe it doesn't work. But I think it probably will work."

"One thing I find fascinating about your story and your resume is that you, at university, you studied philosophy and somehow became a world-class AI scientist. Were you the kind of kid that was just, like, super great at math the whole time, and then you just chose philosophy on the side because it was kind of a different pursuit? The answer to that is kind of complicated. I did come top of my school at math, and my school was kind of the top school for math. So at one level, it's like, okay, there's some data point there that I was good at math. Another data point, though, is that I dropped out of first-year math at university because, I don't know, I didn't have any of the background that anybody else had."

"The cycle was to kind of teach best practices to as many people as we could in as, you know, usable and compelling and useful a way as possible. And then in that process, identify all the places that people couldn't achieve the things that they wanted to achieve because it's too expensive or too slow or it just didn't work or whatever. Then we'd spend months doing research to try to see if we can find ways to get over those problems or find other papers that had already gotten over those problems. And then we'd spend months implementing those things in software. And then a year later, we would do another course with all of those improvements. And the cycle continues. So we did that cycle five or six times."

"So Answer.ai is a new kind of AI research lab, and you mentioned that it was partly inspired by Edison's lab. Well, it's definitely not a research lab. This might sound minor, but it's an R&D lab, and the two feel and look extremely different. Edison's Menlo Park lab was the same: it was an R&D lab. Menlo Park, New Jersey."

"They had four or five thousand various different products at their heyday, General Electric, all with the common theme that they used electricity. I really like that picture, you know. There were many bad things I could say about Thomas Edison. But, you know, there's no question that GE made a lot of products that were valuable to society, and people were prepared to pay more to buy them than it cost to make them. So they became profitable and successful. So we want to do that in AI, you know."

"Yeah, our charter is to basically create as much societal benefit from AI as possible. That's our job as a company. And my job as CEO is to help make a company that does that."

"And actually, that's the thing we've made money from as well: we've let a thousand people use that platform as a kind of a pre-release beta test. But it's an interesting position to be in. It's kind of a bit AWS-like."

"And one key one, for example, is that the human and the computer should be able to see all of each other's work and be operating in the same direct environment."

"Devin is the opposite of that. Devin's approach is to hand things off to a computer and have it go do it. And the results were, I think, less than impressive."

"Solve It is the platform. This tool we've built for rapidly creating proofs of concept or testing AI applications is called Solve It, and it uses an approach called dialogue engineering."