AI + a16z

The Best Way to Achieve AGI Is to Invent It

Nov 4, 2024

Summary

In this episode, Pedro Domingos, a seasoned machine-learning researcher, and Martin Casado delve into the intricacies of artificial intelligence and the pursuit of artificial general intelligence (AGI). Domingos likens the road to major AI advances to Einstein's decade-long development of general relativity, cautioning against expecting swift results. The conversation examines the role of language models as world models, the ongoing debate about whether large language models (LLMs) truly understand, and the tension between concern over AI risks and the drive for innovation. Key themes include prioritizing substantive AI challenges over distractions, a critique of current research dependencies, and a funding landscape that contrasts with past tech booms. They also discuss the potential threat of an AI winter, the need for new ideas and models, the implications of relying on synthetic data, and differing opinions on the timeline for achieving AGI, ultimately painting a complex picture of AI's future.

Key Takeaways

  • The development of AGI could take substantial time, similar to Einstein's general relativity journey.
  • Language models represent a new paradigm by serving as both linguistic and cognitive frameworks.
  • Domingos highlights the diverse expectations surrounding the timeline for achieving AGI.
  • Pursuing innovative ideas is crucial amidst the risk of research stagnation.
  • The debate surrounding the reliance on synthetic data reflects historical challenges.
  • The importance of maintaining focus on significant ethical concerns in AI is paramount.
  • Recognizing the potential for an AI winter could guide sustainable progress.
  • The creative-versus-reliable AI debate highlights varying capabilities in generative contexts.
  • Current AI funding contrasts sharply with past tech boom models.

Notable Quotes

"Einstein took 10 years to come up with general relativity, and that was just one equation. So what makes you think we're going to solve AI in six months?"

"If we worry about AI, it'll turn out well. And if we don't, it won't."

"The best way to predict the future is to invent it."

"In fact, my hope right now is that, because the hype has gotten away from reality, we will make enough progress in the next two years to justify the hype that is there."

"That's just a very interesting accident of history."

"The brain experiences the world. We make abstractions in that world."

"I am generally extremely skeptical of a notion that synthetic data or simulation is going to get us there because we have 50 years of experience in AI of that not working."

"You can enforce structuring in AI suggestions, but the accountability around those suggestions is where companies might struggle."

"But the problem is that the reward function there is surprisingly sharp, because you don’t want to be our Canada and they have a, you know, a return policy."

"So there's a lot of mileage to be had, for example, in making transformers faster and more efficient. All of that is good. But at the end of the day, that is not going to get us to human-level intelligence, I don't think."

"Now, the thing about all this that is slightly ironic is that people in symbolic AI already know how to do that stuff very well."

"So, you know, Animal Farm is not about animals farming."

"I think we can use AI to build a better democracy than we have today."

"Now, if you want to pass, say, math exams, you need to understand what's being said."

"Even just getting 10% to AGI is going to change the world dramatically."