Unsupervised Learning

Ep 67: Max Junestrand (CEO, Legora) on Differentiating and Pricing AI Apps & How the Legal Industry Will Evolve

May 27, 2025

Summary

In this episode, Max Junestrand, CEO of Legora, shares insights on how AI is revolutionizing the legal industry through end-to-end automation of complex workflows. Legora has shifted from early experimentation with foundational models like GPT-3.5 to building comprehensive AI platforms that use tool- and function-calling frameworks to automate tasks such as due diligence, contract drafting, and document review. The legal sector's fragmented nature and diverse workflows require flexible, integrated platforms rather than narrowly focused tools. Client demand for faster, more cost-effective legal services is a major driver of AI adoption, pressuring law firms to innovate or risk losing competitiveness despite the disincentives of traditional hourly billing.

Legora's strategic entry into the fragmented Nordic legal market allowed the company to mature its product before expanding to larger markets like the U.S., a deliberate second-mover advantage. Instead of building proprietary LLMs, Legora layers domain expertise and workflow integration on top of powerful foundational models, accelerating development and reducing costs. High lawyer adoption rates stem from carefully aligning the product with user workflows and from proactive education, in contrast to the typically low adoption of enterprise software.

Major AI labs like OpenAI, Anthropic, and Google are evolving into full platforms, offering not just models but also integration tools, which is reshaping the AI development landscape. Legora's product strategy balances scaffolding features, such as rule-based playbooks that work around current model limitations, with future-proof design that anticipates evolving AI capabilities. Legal AI also requires transparent citations to maintain trust, a feature Legora prioritized early but is prepared to hand off to model-level citation APIs as they mature.
Pricing AI applications remains challenging due to unpredictable usage and rising model costs, pushing companies toward hybrid pricing models combining seat and usage fees. Enterprise integration with standard legal software and client-specific databases expands AI’s ability to streamline workflows and deliver tailored legal solutions. Throughout, Legora emphasizes rapid iteration, measured market entry focused on quality, and collaboration with design partners, highlighting the nuanced balance between speed, reliability, and user trust in building AI tools for highly regulated, complex industries. The episode also touches on evolving lawyer roles as AI shifts routine tasks toward entrepreneurial management of AI agents, reflecting broader future workforce transformations.

Key Takeaways

  1. Legora exemplifies the maturation of AI applications in the legal sector by transitioning from experimental AI use to delivering fully integrated, end-to-end workflows that automate complex legal tasks such as due diligence and contract review. This is facilitated by leveraging foundational large language models (LLMs) like GPT-3.5, combined with function- and tool-calling frameworks that empower AI agents to autonomously execute multi-step workflows.
  2. The legal industry's inherent fragmentation and diversity of workflows — from simple data extraction to complex document drafting — necessitate AI platforms that are flexible and broadly capable rather than narrowly specialized for individual tasks. By building integrated platforms encompassing search, drafting, and review functionalities, companies like Legora address this complexity directly.
  3. Client-driven market pressures and inefficiencies intrinsic to the traditional hourly billing model create strong incentives for law firms to adopt AI automation to stay competitive. While hourly billing may disincentivize efficiency internally, external client demand for faster, lower-cost services forces firms to leverage AI tools aggressively.
  4. Legora's deliberate choice to originate in the fragmented Nordic legal market provided a strategic second-mover advantage, allowing them to develop a broad, ambitious AI platform capable of addressing various legal workflows before scaling into larger markets like the U.S.
  5. Rather than investing heavily in training proprietary legal LLMs, Legora has embraced a pragmatic model of building powerful applications atop leading foundational models from providers like OpenAI. This strategy focuses differentiation on domain expertise, workflow integration, and user experience rather than on expensive and slow-to-build custom models.
  6. Successful adoption of AI tools by legal professionals hinges on organic, demand-driven uptake rather than top-down IT mandates. Legora achieves high lawyer adoption rates (70-80%) by aligning AI functionalities closely with legal workflows, demonstrating clear efficiency gains, and providing education to ease integration into daily practice.
  7. The evolution of AI labs like OpenAI, Anthropic, and Google from pure model providers to integrated platform companies dramatically impacts AI product development. These labs now offer not only advanced foundational models but also APIs, system integrations, and orchestration frameworks that enable developers to build complex AI applications more efficiently.
  8. Legora's "playbooks" feature exemplifies balancing immediate product value with future-proofing by encoding explicit negotiation rules as scaffolding around current AI model limitations, while planning to phase out such scaffolds as LLMs improve to autonomously parse and act upon documents and instructions.
  9. In legal AI applications, citations linking AI-generated outputs back to original source documents are essential to maintain transparency, trust, and auditability. Legora integrated citation features early to meet these stringent requirements but remains ready to adopt native citation APIs introduced by foundational model providers, reflecting a flexible product development stance.
  10. Pricing AI products in legal tech is challenged by unpredictable user behavior resulting in volatile and sometimes exorbitant LLM usage costs, necessitating hybrid pricing models that combine seat-based fees with usage-based components to manage risk and align incentives.
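The tool- and function-calling pattern described in the first takeaway can be sketched in a few lines. Everything here — the tool names, the hard-coded call plan, the stubbed document search — is hypothetical, since Legora's actual implementation is not public; in production the call sequence would come from an LLM's function-calling response rather than a fixed list.

```python
import json
from typing import Callable

# Registry of tools the "agent" may invoke by name
# (stand-ins for real data-room and drafting integrations).
TOOLS: dict[str, Callable[..., str]] = {}

def tool(fn: Callable[..., str]) -> Callable[..., str]:
    """Register a function so a model-proposed call can reach it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def search_documents(query: str) -> str:
    # A real version would query the firm's data room; stubbed here.
    return json.dumps(["NDA_2024.docx", "MSA_2023.docx"])

@tool
def draft_report(findings: str) -> str:
    return f"Due-diligence report based on: {findings}"

def run_agent(tool_calls: list[dict]) -> str:
    """Execute a proposed sequence of tool calls one by one,
    returning the final tool's output as the work deliverable."""
    result = ""
    for call in tool_calls:
        result = TOOLS[call["name"]](**call["arguments"])
    return result

# Hard-coded plan standing in for an LLM's tool_calls response.
plan = [
    {"name": "search_documents", "arguments": {"query": "change of control"}},
    {"name": "draft_report", "arguments": {"findings": "2 clauses found"}},
]
print(run_agent(plan))
```

The dispatch-loop shape — model proposes a named call, the application executes it and feeds results back — is what lets a single platform cover search, review, and drafting with one set of registered tools.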

Notable Quotes

"The early BERT models from Google. And I mean, they were decent in English, horrendously bad in Swedish. And this was back in 2020. So when GPT arrived 3.5, that was like the paradigm starter, if you will. Since then, I think we've moved from full on experimentation and trying to get stuff to work into actually implementing things that are well, really taking like end to end work deliverables."

"Like if you're doing a due diligence today, you're not going physically into the data room. You're not using control F. You're just taking all the documents, putting them in Legora, saying what you want to find, and then it finds it. And then based on those findings, we generate the report. And so things are really starting to move from empty queries against like a data set to, okay, this is the process that we want the LLM to follow."

"The legal software space has been incredibly fragmented. That was like one of the early things I saw coming from outside the industry. You had one tool for translations, one tool for document comparisons, another tool for searching, and another one for reviewing. And now suddenly, all of this is kind of getting baked together."

"It's a bit of a prisoner's dilemma in that if any of your competitors do something, you have to adjust to keep up with the pace of play. So even if you theoretically have the mindset that you're talking about, if someone else does it, then you're certainly suddenly inefficient or you're billing for things that other people won't."

"I think one thing that's really powerful about your products, as you just alluded to, is just, you know, how broad the capabilities are. And you really are able to serve kind of end to end at these law firms."

"You can think about the models, but I also think it's useful to think about these AI labs as platforms and software companies, because frankly, OpenAI, Claude, Gemini, they're for sure, you know, model providers, but they're also increasingly product companies. Anthropic is building out Claude to connect with other systems. Google is building out Gemini to sort of sit on top of the entire Google workspace stack. And they're not only pushing innovation on the models themselves, but also on the way that they interact with other pieces of the software and system."

"We just released a new feature called playbooks, where you kind of give Legora like a set of rules and typical fallbacks and points of how it's supposed to negotiate. And let's say you outline 20 different rules for how to negotiate an NDA or an MSA. Now, Legora needs to go through them one by one by one to make really high quality edits. But if the models become so good that you just say, hey, here's a playbook in a Word document or in an Excel file, take this under NDA and, you know, cross reference them and give me all the red lines, then you don't really need to build the feature that way."
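The rule-by-rule pass described in this quote can be sketched as follows. The feature itself is proprietary, so the rule format, triggers, and redline text below are invented purely to illustrate the scaffolding idea of checking each negotiation rule individually against the contract.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    trigger: str   # lowercase phrase that makes the rule fire
    redline: str   # suggested edit when the trigger is present

# Invented example playbook for reviewing an NDA/MSA.
PLAYBOOK = [
    Rule("cap-liability", "unlimited liability",
         "cap liability at 12 months of fees"),
    Rule("venue", "courts of new york",
         "change governing venue to Stockholm"),
]

def review(contract: str, playbook: list[Rule]) -> list[str]:
    """Walk the playbook one rule at a time and collect suggested redlines."""
    edits = []
    for rule in playbook:
        if rule.trigger in contract.lower():
            edits.append(f"{rule.name}: {rule.redline}")
    return edits

contract = ("The supplier accepts unlimited liability "
            "under the courts of New York.")
for edit in review(contract, PLAYBOOK):
    print(edit)
```

The point of the quote is that this explicit one-rule-at-a-time loop is scaffolding: once models can ingest a whole playbook document and cross-reference it in one pass, the loop itself becomes unnecessary.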

"We've built citations really, really early on because it's like critical for our use case and for lawyers to be able to reference and see, okay, the LLM made this response and it references to this material and this specific text chunk. But, you know, if the LLM providers give that for free in half a year, then we'll just deprecate our entire code. I mean, my way of thinking about it is if somebody else is doing it, then let's use that."
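Chunk-level citations of the kind described above are typically built on a retrieval pipeline in which every retrieved passage carries a stable source id, so the answer can point back to the exact text chunk. The toy retriever and id scheme below are illustrative assumptions, not Legora's design.

```python
# Each chunk id encodes document and passage, so a citation
# can be resolved back to the exact source text.
chunks = {
    "doc1#p3": "The term of this Agreement is two (2) years.",
    "doc2#p7": "Either party may terminate with 30 days' written notice.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank chunks by naive word overlap with the question (toy retriever;
    a real system would use embeddings or a search index)."""
    q = set(question.lower().split())
    ranked = sorted(
        chunks,
        key=lambda cid: -len(q & set(chunks[cid].lower().split())),
    )
    return ranked[:k]

def answer_with_citations(question: str) -> str:
    """Compose an answer that cites the supporting chunk ids inline."""
    sources = retrieve(question)
    cites = ", ".join(sources)
    return f"[sources: {cites}] {chunks[sources[0]]}"

print(answer_with_citations("What is the contract term"))
```

Because the citation layer is a thin wrapper around retrieval, it is exactly the kind of code that can be deprecated wholesale if model providers start returning source attributions natively, as the quote anticipates.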

"But then just the other week we had a user rack up 10,000 bucks in LLM costs. So I think over time, it's not hard for me to imagine where you'd have kind of a platform fee and some usage element to that. But it also depends on how the LLM providers themselves develop their pricing. And I think the early thoughts around all of this was LLM prices will continue to go down. And so if LLM prices continue to go down, we're willing to take a bit of a hit in the beginning to get positive and better margins over time. Now that's not really happening because the LLM models are becoming so much better, but also more expensive."
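The hybrid "platform fee plus usage element" idea in this quote can be made concrete with a back-of-the-envelope calculation: a flat per-seat fee covers an included allowance of LLM spend, and overage is passed through at a markup. All numbers below are made up for illustration.

```python
def monthly_invoice(seats: int, llm_cost_usd: float,
                    seat_fee: float = 100.0,
                    included_usage: float = 50.0,
                    passthrough_rate: float = 1.5) -> float:
    """Seat fee covers `included_usage` dollars of LLM cost per seat;
    overage is billed at a marked-up pass-through rate."""
    allowance = seats * included_usage
    overage = max(0.0, llm_cost_usd - allowance)
    return seats * seat_fee + overage * passthrough_rate

# The heavy user from the quote: $10,000 of LLM cost on a 10-seat plan.
print(monthly_invoice(seats=10, llm_cost_usd=10_000))
```

The structure caps the vendor's downside on outlier users while keeping predictable seat revenue — the risk-alignment point made in takeaway 10.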

"I think the law firms themselves and the in-house teams are kind of figuring out what they want as well. So we're all trying to figure out exactly the best way to apply the technology. And then we need to, we need to align on, on a future that makes everybody win."

Marc Thiessen: "Basically, a private and compliant ChatGPT with better RAG on their own documents and Swedish legislation. But that was good enough for that time. And then, you know, every single week, the bar increases for where you need to be to be best in class."

Marc Thiessen: "You get one chance. You get one chance with, especially with, I don't know if this is especially with lawyers, but I'd say developers, if you're building, you know, vibe-coding software, they're easier to approach once and then say, oh, we have some new updates, do you want to come and try again? Because they like that idea of you're starting somewhere and you're building and you get to come along the journey."

Marc Thiessen: "And then just being able to serve thousands of users on the platform a day. Those were problems you had to solve in the beginning. And if you don't solve them and you try to onboard a firm like Cleary Gottlieb, it's not going to go so well."

Marc Thiessen: "Five million. And, you know, look these guys in the eyes and say, yeah, we're not going to sell for the next, you know, four or five months."

Marc Thiessen: "I'm so in the legal world that it's hard to think outside of it. But just some of the things that I'm, you know, passionate about outside of this: I think CROs in pharma is one of the biggest disruption opportunities in the world, because it's so manual and there's so much data. One of my family members works with clinical trials."
"Marc Thiessen: "I'm so in the legal world that it's like hard to think outside of it. But just some of the things that I'm, you know, passionate about outside of this. I think CROs in pharma has like the biggest. It's one of the biggest disruption opportunities in the world, because it's so manual. There's so much data. My one of my family members works in with clinical trials.""